* Permit Consul Connect Gateways to be used with podman
Enable use of Consul Connect Gateways (ingress/terminating/mesh)
with the podman task driver.
An earlier PR taught Nomad to select the task driver for Connect-enabled
sidecar tasks: it used podman if any other task in the same task group was
using podman, or fell back to docker otherwise.
That PR did not consider Consul Connect gateways, which remained
hardcoded to always use the docker task driver.
This change applies the same heuristic also to gateway tasks,
enabling use of podman.
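A rough sketch of the heuristic in Go, using a simplified task type rather than Nomad's real structs; the actual logic runs in the server-side code that injects the gateway task:

```go
package main

import "fmt"

type task struct {
	Name   string
	Driver string
}

// gatewayDriver picks the driver for an injected gateway task: podman if any
// other task in the same group uses podman, docker otherwise.
func gatewayDriver(group []task) string {
	for _, t := range group {
		if t.Driver == "podman" {
			return "podman"
		}
	}
	return "docker"
}

func main() {
	fmt.Println(gatewayDriver([]task{{Name: "app", Driver: "podman"}})) // podman
	fmt.Println(gatewayDriver([]task{{Name: "app", Driver: "exec"}}))   // docker
	fmt.Println(gatewayDriver(nil))                                     // docker
}
```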
Limitations: The heuristic only works where the task group containing
the gateway also contains a podman task. Therefore it does not work
for the ingress example in the docs
(https://developer.hashicorp.com/nomad/docs/job-specification/gateway#ingress-gateway)
which uses Connect Native and requires the gateway to be in a separate task.
* cl: add cl for connect gateway podman autodetect
* connect: add test ensuring we guess podman for gateway when possible
---------
Co-authored-by: Seth Hoenig <shoenig@duck.com>
Configuration changes to use backport assistant with LTS support. These include:
* adding a manifest file for active releases
* adding configuration to send backport to ENT repo
While working on #20462, I discovered that some of the scheduler tests for
disconnected clients were making long blocking queries. The tests used
`testutil.WaitForResult` to wait for an evaluation to be written to the state
store. The evaluation was never written, but the tests were not correctly
returning an error for the empty query. This resulted in the tests blocking for
5s and then continuing anyway.
In practice, the evaluation is never written to the state store as part of the
test harness `Process` method, so this test assertion was meaningless. Remove
the broken assertion from the two top-level tests that used it, and upgrade
these tests to use `shoenig/test` in the process. This will save us ~50s per
test run.
When a job that implements a plugin is updated to have a new plugin ID, the old
version of the plugin is never deleted. We want to delay deleting plugins until
garbage collection to avoid race conditions between a plugin being registered
and its allocations being marked healthy.
Add logic to the state store's `DeleteCSIPlugin` method (used only by GC) to
check whether any of the jobs associated with the plugin have no allocations and
either have been purged or have been updated to no longer implement that plugin
ID.
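A rough sketch of one reading of that check, with simplified stand-ins for the state store's types; the actual `DeleteCSIPlugin` implementation differs:

```go
package main

import "fmt"

type pluginJob struct {
	purged          bool // the job has been purged from state
	stillImplements bool // the job's current version still registers this plugin ID
	numAllocs       int  // allocations still associated with the job
}

// pluginAbandoned reports whether GC may delete the plugin: every associated
// job must have no allocations and either be purged or updated so that it no
// longer implements the plugin ID.
func pluginAbandoned(jobs []pluginJob) bool {
	for _, j := range jobs {
		if j.numAllocs > 0 {
			return false
		}
		if !j.purged && j.stillImplements {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(pluginAbandoned([]pluginJob{{purged: true}}))          // true
	fmt.Println(pluginAbandoned([]pluginJob{{stillImplements: true}})) // false
}
```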
This changeset also updates the CSI plugin lifecycle tests in the state store to
use `shoenig/test` over `testify`, and removes a spurious error log that was
happening on every periodic plugin GC attempt.
Fixes: https://github.com/hashicorp/nomad/issues/20225
The CSI hook for each allocation that claims a volume runs concurrently. If a
call to `MountVolume` happens at the same time as a call to `UnmountVolume` for
the same volume, it's possible for the second alloc to detect the volume has
already been staged, then for the original alloc to unpublish and unstage it,
only for the second alloc to then attempt to publish a volume that's been
unstaged.
The usage tracker on the volume manager was intended to prevent this behavior
but the call to claim the volume was made only after staging and publishing was
complete. Move the call to claim the volume for the usage tracker to the top of
the `MountVolume` workflow to prevent it from being unstaged until all consuming
allocations have called `UnmountVolume`.
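A simplified sketch of the usage-tracker idea, with a toy tracker type rather than the volume manager's real one; it shows why claiming before staging/publishing keeps the volume staged until the last consumer unmounts:

```go
package main

import "fmt"

// usageTracker counts which allocations are using a volume on this node.
type usageTracker struct {
	claims map[string]map[string]bool // volume ID -> alloc IDs
}

func newUsageTracker() *usageTracker {
	return &usageTracker{claims: map[string]map[string]bool{}}
}

// Claim records that an allocation is (or is about to start) using a volume.
// Calling this at the top of MountVolume prevents a concurrent UnmountVolume
// from unstaging the volume while we are still staging/publishing it.
func (u *usageTracker) Claim(vol, alloc string) {
	if u.claims[vol] == nil {
		u.claims[vol] = map[string]bool{}
	}
	u.claims[vol][alloc] = true
}

// Free releases an allocation's claim and reports whether the volume now has
// no consumers left and can safely be unstaged.
func (u *usageTracker) Free(vol, alloc string) bool {
	delete(u.claims[vol], alloc)
	return len(u.claims[vol]) == 0
}

func main() {
	u := newUsageTracker()
	u.Claim("vol1", "allocA") // claimed before allocA stages/publishes
	u.Claim("vol1", "allocB") // concurrent mount by a second alloc
	fmt.Println("unstage after B unmounts:", u.Free("vol1", "allocB")) // false
	fmt.Println("unstage after A unmounts:", u.Free("vol1", "allocA")) // true
}
```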
Fixes: https://github.com/hashicorp/nomad/issues/20424
When the allocation is stopped, we deregister the service in the alloc runner's
`PreKill` hook. This ensures we delete the service registration and wait for the
shutdown delay before shutting down the tasks, so that workloads can drain their
connections. However, the call to remove the workload only logs errors and never
retries them.
Add a short retry loop to the `RemoveWorkload` method for Nomad services, so
that transient errors give us an extra opportunity to deregister the service
before the tasks are stopped, rather than having to fall back to the data
integrity improvements implemented in #20590.
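A minimal sketch of the retry behaviour, with a hypothetical deregister callback standing in for the real service registration RPC:

```go
package main

import (
	"fmt"
	"time"
)

// removeWithRetry retries a transient deregistration failure a few times
// before giving up, so the service is usually gone before the tasks stop.
func removeWithRetry(deregister func() error, attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = deregister(); err == nil {
			return nil
		}
		time.Sleep(backoff)
	}
	return fmt.Errorf("failed to deregister after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := removeWithRetry(func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("transient RPC error")
		}
		return nil
	}, 5, 10*time.Millisecond)
	fmt.Println(err) // <nil>
}
```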
Ref: https://github.com/hashicorp/nomad/issues/16616
This changeset fixes three potential data integrity issues between allocations
and their Nomad native service registrations.
* When a node is marked down because it missed heartbeats, we remove Vault and
Consul tokens (for the pre-Workload Identity workflows) after we've written
the node update to Raft. This is unavoidably non-transactional because the
Consul and Vault servers aren't in the same Raft cluster as Nomad itself. But
we've unnecessarily mirrored this same behavior to deregister Nomad
services. This makes it possible for the leader to successfully write the node
update to Raft without removing services.
To address this, move the delete into the same Raft transaction. One minor
caveat with this approach is the upgrade path: if the leader is upgraded first
and a node is marked down during this window, older followers will have stale
information until they are also upgraded. This is unavoidable without
requiring the leader to unconditionally make an extra Raft write for every
down node until 2 LTS versions after Nomad 1.8.0. This temporary reduction in
data integrity for stale reads seems like a reasonable tradeoff.
* When an allocation is marked client-terminal from the client in
`UpdateAllocsFromClient`, we have an opportunity to ensure data integrity by
deregistering services for that allocation.
* When an allocation is deleted during eval garbage collection, we have an
opportunity to ensure data integrity by deregistering services for that
allocation. This is a cheap no-op if the allocation has been previously marked
client-terminal.
This changeset does not address client-side retries for the originally reported
issue, which will be done in a separate PR.
Ref: https://github.com/hashicorp/nomad/issues/16616
In the Unveil filesystem isolation mode we were mounting the shared
alloc dir with the UID/GID of the user of the task dir being mounted
and 0710 filesystem permissions. This was causing the actual task dir
to become inaccessible to other tasks in the allocation (a race where
the last mounter wins). Instead mount the shared alloc dir as nobody
with 0777 filesystem permissions.
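A rough sketch of what preparing the shared alloc dir could look like, assuming nobody/nogroup map to UID/GID 65534; this is not the exact mount code:

```go
package main

import (
	"fmt"
	"os"
)

// prepareSharedAllocDir makes the shared alloc dir owned by nobody and
// world-accessible, so no single task user "wins" the mount race and locks
// the other tasks out.
func prepareSharedAllocDir(path string) error {
	if err := os.MkdirAll(path, 0o777); err != nil {
		return err
	}
	// 65534 is assumed to be nobody/nogroup; chown requires root privileges.
	if err := os.Chown(path, 65534, 65534); err != nil {
		return err
	}
	// Chmod explicitly, since MkdirAll is subject to the process umask.
	return os.Chmod(path, 0o777)
}

func main() {
	if err := prepareSharedAllocDir("/tmp/nomad-alloc-shared"); err != nil {
		fmt.Println("prepare failed:", err)
	}
}
```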
Users can override the default sidecar task for Connect workloads. This sidecar
task might need access to certificate stores on the host. Allow adding the
`volume_mount` block to the sidecar task override.
Also fixes a bug where `volume_mount` blocks would not appear in plan diff
outputs.
Fixes: https://github.com/hashicorp/nomad/issues/19786
This change exposes CNI configuration details of a network
namespace as environment variables. This allows a task to use
these values to configure itself; a potential use case is running
a Raft application that binds to the IP and port details configured by
the bridge network mode.
Whenever the "exec" task driver is being used, nomad runs a plug in that in time runs the task on a container under the hood. If by any circumstance the executor is killed, the task is reparented to the init service and wont be stopped by Nomad in case of a job updated or stop.
This commit introduces two mechanisms to avoid this behaviour:
* Adds signal catching and handling to the executor, so in case of a SIGTERM, the signal will also be passed on to the task.
* Adds a pre start clean up of the processes in the container, ensuring only the ones the executor runs are present at any given time.
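A minimal, self-contained sketch of the first mechanism (signal forwarding), using a placeholder child process in place of the real task:

```go
package main

import (
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	// Stand-in for the task the executor supervises.
	cmd := exec.Command("/bin/sleep", "300")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Catch SIGTERM sent to the executor and forward it to the task, so the
	// task does not keep running (reparented to init) after the executor is
	// told to shut down.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)
	go func() {
		sig := <-sigs
		_ = cmd.Process.Signal(sig)
	}()

	_ = cmd.Wait()
}
```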
The `nomad plugin status :plugin_id` command lists allocations that implement
the plugin being queried. This list is filtered by the `-namespace` flag as
usual. Cluster admins will likely deploy plugins to a single namespace, but for
convenience they may want to have the wildcard namespace set in their command
environment.
Add support for handling the wildcard namespace to the CSI plugin RPC handler.
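A minimal sketch of the wildcard handling, using `*` as the wildcard sentinel and a simplified allocation type rather than the real RPC handler code:

```go
package main

import "fmt"

type alloc struct {
	ID        string
	Namespace string
}

// filterAllocs returns allocations visible for the requested namespace; the
// wildcard namespace keeps allocations from every namespace.
func filterAllocs(allocs []alloc, ns string) []alloc {
	if ns == "*" {
		return allocs
	}
	var out []alloc
	for _, a := range allocs {
		if a.Namespace == ns {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	allocs := []alloc{{"a1", "default"}, {"a2", "prod"}}
	fmt.Println(filterAllocs(allocs, "*"))    // both allocations
	fmt.Println(filterAllocs(allocs, "prod")) // only a2
}
```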
Fixes: https://github.com/hashicorp/nomad/issues/20537
CSI volumes are namespaced. But the client does not include the namespace in the
staging mount path. This causes CSI volumes with the same volume ID but
different namespaces to collide if they happen to be placed on the same host. The
per-allocation paths don't need to be namespaced, because an allocation can only
mount volumes from its job's own namespace.
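A simplified sketch of a namespaced staging path; the directory layout here is illustrative, not Nomad's exact on-disk structure:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// stagingDir includes the volume's namespace so that two volumes with the
// same ID in different namespaces no longer collide on the same host.
func stagingDir(csiRoot, namespace, volumeID, usage string) string {
	return filepath.Join(csiRoot, "staging", namespace, volumeID, usage)
}

func main() {
	fmt.Println(stagingDir("/var/nomad/client/csi", "default", "vol-web", "rw-file-system"))
	fmt.Println(stagingDir("/var/nomad/client/csi", "other-ns", "vol-web", "rw-file-system"))
}
```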
Rework the CSI hook tests to have more fine-grained control over the mock
on-disk state. Add tests covering upgrades from staging paths missing
namespaces.
Fixes: https://github.com/hashicorp/nomad/issues/18741
We bring in `containernetworking/plugins` for the contents of a single file,
which we use in a few places for running a goroutine in a specific network
namespace. This code hasn't needed an update in a couple of years, and a good
chunk of what we need was previously vendored into `client/lib/nsutil`
already.
Updating the library via dependabot is causing errors in Docker driver tests
because it updates a lot of transitive dependencies, and it's bringing in a pile
of new transitive dependencies like OpenTelemetry. Avoid this problem going
forward by vendoring the remaining code we hadn't already.
Ref: https://github.com/hashicorp/nomad/pull/20146
The ACL docs have a section explaining that some parts of the UI need slightly
wider read permissions than expected. These docs should include that you need
`plugin:read` to look at CSI volume pages in the UI.
Fixes: https://github.com/hashicorp/nomad/issues/18527
* drivers/raw_exec: enable setting cgroup override values
This PR enables configuration of cgroup override values on the `raw_exec`
task driver. WARNING: setting cgroup override values eliminates any
guarantee Nomad can make about resource availability for *any* task on
the client node.
For cgroup v2 systems, set a single unified cgroup path using `cgroup_v2_override`.
The path may be either absolute or relative to the cgroup root.
```hcl
config {
  cgroup_v2_override = "custom.slice/app.scope"
}
```
or
```hcl
config {
  cgroup_v2_override = "/sys/fs/cgroup/custom.slice/app.scope"
}
```
For cgroup v1 systems, set a per-controller path for each controller using
`cgroup_v1_override`. The path(s) may be either absolute or relative to
the controller root.
```hcl
config {
  cgroup_v1_override = {
    "pids": "custom/app",
    "cpuset": "custom/app",
  }
}
```
or
```hcl
config {
  cgroup_v1_override = {
    "pids": "/sys/fs/cgroup/pids/custom/app",
    "cpuset": "/sys/fs/cgroup/cpuset/custom/app",
  }
}
```
* drivers/rawexec: ensure only one of v1/v2 cgroup override is set
* drivers/raw_exec: executor should error if setting cgroup does not work
* drivers/raw_exec: create cgroups in raw_exec tests
* drivers/raw_exec: ensure we fail to start if custom cgroup set and non-root
* move custom cgroup func into shared file
---------
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>
The batch deregister RPC endpoint is only used by the internal
garbage collection process; it is not exposed via the HTTP API or
used anywhere else.
The GC process ensures that a job can only be removed from state
if all related evaluations and allocations are in a state that
means they can also be removed from state. This means that we do
not need to create evaluations when jobs are being deregistered
via this endpoint.
* Hook and latch on the initial index
* Serialization and restart of controller and table
* de-log
* allocBlocks reimplemented at job model level
* totalAllocs doesn't mean on jobmodel what it did in steady.js
* Hamburgers to sausages
* Hacky way to bring new jobs back around and parent job handling in list view
* Getting closer to hook/latch
* Latch from update on hook from initialize, but fickle
* Note on multiple-watch problem
* Sensible monday morning comment removal
* use of abortController to handle transition and reset events
* Next token will now update when there's an on-page shift
* Very rough anti-jostle technique
* Demoable, now to move things out of route and into controller
* Into the controller, generally
* Smarter cancellations
* Reset abortController on index models run, and system/sysbatch jobs now have an improved groupCountSum computed property
* Prev Page reverse querying
* n+1th jobs existing will trigger nextToken/pagination display
* Start of a GET/POST statuses return
* Namespace fix
* Unblock tests
* Realizing to my small horror that this skipURLModification flag may be too heavy handed
* Lintfix
* Default liveupdates localStorage setting to true
* Pagination and index rethink
* Big uncoupling of watchable and url-append stuff
* Testfixes for region, search, and keyboard
* Job row class for test purposes
* Allocations in test now contain events
* Starting on the jobs list tests in earnest
* Forbidden state de-bubbling cleanup
* Job list page size fixes
* Facet/Search/Filter jobs list tests skipped
* Maybe it's the automatic mirage logging
* Unbreak task unit test
* Pre-sort sort
* styling for jobs list pagination and general PR cleanup
* moving from Job.ActiveDeploymentID to Job.LatestDeployment.ID
* modifyIndex-based pagination (#20350)
* modifyIndex-based pagination
* modifyIndex gets its own column and pagination compacted with icons
* A generic withPagination handler for mirage
* Some live-PR changes
* Pagination and button disabled tests
* Job update handling tests for jobs index
* assertion timeout in case of long setTimeouts
* assert.timeouts down to 500ms
* de-to-do
* Clarifying comment and test descriptions
* Bugfix: resizing your browser on the new jobs index page would make the viz grow forever (#20458)
* [ui] Searching and filtering options (#20459)
* Beginnings of a search box for filter expressions
* jobSearchBox integration test
* jobs list updateFilter initial test
* Basic jobs list filtering tests
* First attempt at side-by-side facets and search with a computed filter
* Weirdly close to an iterative approach but checked isn't tracked properly
* Big rework to make filter composition and decomposition work nicely with the url
* Namespace facet dropdown added
* NodePool facet dropdown added
* hdsFacet for future testing and basic namespace filtering test
* Namespace filter existence test
* Status filtering
* Node pool/dynamic facet test
* Test patchups
* Attempt at optimize test fix
* Allocation re-load on optimize page explainer
* The Big Un-Skip
* Post-PR-review cleanup
* todo-squashing
* [ui] Handle parent/child jobs with the paginated Jobs Index route (#20493)
* First pass at a non-watchQuery version
* Parameterized jobs get child fetching and jobs index status style for parent jobs
* Completed allocs vs Running allocs in a child-job context, and fix an issue where moving from parent to parent would not reset index
* Testfix and better handling empty-child-statuses-list
* Parent/child test case
* Don't show empty allocation-status bars for parent jobs with no children
* Splits Settings into 2 sections, sign-in/profile and user settings (#20535)
* Changelog
Introduce a new API, `/v1/jobs/statuses`, primarily for use in the UI,
which collates info about jobs, their allocations, and latest deployment.
Currently the UI gets *all* of `/v1/jobs` and sorts and paginates them client-side
in the browser, and its "summary" column is based on historical summary data
(which can be visually misleading, and sometimes scary when a job has failed
at some point in the not-yet-garbage-collected past).
The new endpoint does pagination and filtering and such, and returns jobs sorted by ModifyIndex,
so latest-changed jobs still come first. It pulls allocs and the latest deployment
straight out of current state for a more robust, holistic view of the job status.
It is less efficient per-job, due to the extra state lookups, but should be more efficient
per-page (excepting perhaps jobs with very many allocs).
If a POST body is sent like `{"jobs": [{"namespace": "cool-ns", "id": "cool-job"}]}`,
then the response will be limited to that subset of jobs. The main goal here is to
prevent "jostling" the user in the UI when jobs come into and out of existence.
And if a blocking query is started with `?index=N`, then the query should only
unblock if jobs "on page" change, rather than on any change to any of the state
tables being queried ("jobs", "allocs", and "deployment"), to save unnecessary
HTTP round trips.
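A minimal sketch of a client-side blocking query against the new endpoint, assuming a local agent on 127.0.0.1:4646; everything beyond the documented `jobs` body and `?index=N` parameter is illustrative:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Limit the response to the jobs currently "on page" in the UI.
	body, err := json.Marshal(map[string]any{
		"jobs": []map[string]string{
			{"namespace": "cool-ns", "id": "cool-job"},
		},
	})
	if err != nil {
		panic(err)
	}

	// ?index=42 makes this a blocking query that should only unblock when
	// the jobs on page change.
	url := "http://127.0.0.1:4646/v1/jobs/statuses?index=42"
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```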
Nomad agents expect to receive `SIGHUP` to reload their configuration. The
signal handler for this is installed fairly late in agent startup, after the
client or server components are up and running. This means that configuration
management tools can potentially reload the configuration before the agent can
handle it, causing the agent to crash.
We don't want to allow configuration reload during client or server component
startup, because it would significantly complicate initialization. Instead,
we'll implement the systemd notify protocol. This causes systemd to block
sending configuration reload signals until the agent is actually ready. Users
can still bypass this by sending signals directly.
Note that there are several Go libraries that implement the sdnotify protocol,
but most are part of much larger projects which would create a lot of dependabot
burden. The bits of the protocol we need are extremely simple to implement in
just a couple of functions.
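A minimal sketch of such a function, assuming the agent calls a hypothetical `notifyReady()` once its client/server components are up; this is not the exact Nomad implementation:

```go
package main

import (
	"fmt"
	"net"
	"os"
)

// notifyReady sends READY=1 over the socket named by NOTIFY_SOCKET, telling
// systemd (for Type=notify units) that the agent is ready to handle signals
// such as SIGHUP. It is a no-op when not running under systemd.
func notifyReady() error {
	socket := os.Getenv("NOTIFY_SOCKET")
	if socket == "" {
		return nil
	}
	conn, err := net.DialUnix("unixgram", nil, &net.UnixAddr{
		Name: socket,
		Net:  "unixgram",
	})
	if err != nil {
		return err
	}
	defer conn.Close()
	_, err = conn.Write([]byte("READY=1"))
	return err
}

func main() {
	if err := notifyReady(); err != nil {
		fmt.Println("sd_notify failed:", err)
	}
}
```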
For non-Linux or non-systemd Linux systems, this feature is a no-op. In future
work we could potentially implement service notification for Windows as well.
Fixes: https://github.com/hashicorp/nomad/issues/3885
When available, we provide an environment variable `CONSUL_TOKEN` to tasks, but
this isn't the environment variable expected by the Consul CLI. Job
specifications like deploying an API Gateway become noticeably nicer if we can
instead provide the expected env var.
When setting up auth methods for Consul and Vault in production environments, we
can typically assume that the CA certificate for the JWKS endpoint will be in
the host certificate store (as part of the usual configuration management
cluster admins need to do). But for quick demos with `-dev` agents, this won't
be the case.
Add a `-jwks-ca-file` parameter to the setup commands so that we can use this
tool to quickly set up WI with `-dev` agents running TLS.
When a job is purged, we delete all its allocations and the client detects the
absence of the allocations to clean up its resources locally. But the client
won't be able to send the allocation status update that would normally free the
quota being used by that allocation. Instead, we need to free the quota usage
inside the state store immediately. To do so, we check if the allocation is
already client-terminal before copying it and passing it into the Enterprise
code for cleanup.
This commit also refactors the job delete to make it clear there's a single
caller of this alloc deletion path. This refactoring eliminates some wasteful
logic that queries the "allocs" table, allocates a slice of strings for their
IDs, and then queries the "allocs" table one-by-one for each of them for
deletion anyway.
Tests for this code can be found in the linked ENT repo PR.
Fixes: https://github.com/hashicorp/nomad-enterprise/issues/1422
Ref: https://hashicorp.atlassian.net/browse/NOMAD-620
Ref: https://github.com/hashicorp/nomad-enterprise/pull/1432
In some cases, such as parameterized jobs, Nomad job scaling will not
generate evaluations. This change fixes the CLI behaviour in this case,
and copies the approach of the job run command for consistency.
This reverts commit 45b36371a12ffae5b5bfaaeadb08f801fb6bc98d. Now that Vault
1.16.2 has shipped, the E2E test will pick up only a working version.
Closes: https://github.com/hashicorp/nomad/issues/20298