When logging into a JWT auth method, we need to explicitly supply the Consul
admin partition if the local Consul agent is in a partition. We can't derive
this from Nomad's agent configuration because the Consul agent's own configuration is
canonical, so instead we get the partition from the fingerprint (if
available). This changeset updates the Consul client constructor so that we
close over the partition from the fingerprint.
Ref: https://hashicorp.atlassian.net/browse/NET-9451
When we write Connect gateway configuration entries from the server, we're not
passing in the intended partition. This means we're using the server's own
partition to submit the configuration entries, and this may not match the job's
intended partition. Note this requires that the Nomad server's token have
permission for that partition.
Also, move the config entry write after we check Sentinel policies. This allows
us to return early if we hit a Sentinel error without making Consul RPCs first.
The process by which we tag AMIs with the commit SHA of the Packer directory
isn't documented in this repository, which makes it easy to accidentally build
an AMI that will break nightly E2E.
* Maintains rawSearchText separate from searchText
* Filter expression suggestions
* Now super-stops duelling queries on else-type error
* Filter suggestions and corrections
* Errorlink is now template standard, plus test fixes
* Mirage simulates healthy errors
* Test for bad filter expressions and snapshots
* Gallery allows picking stuff
* Small fixes
* Added sentinel templates
* Can set enforcement level on policies
* Working on the interactive sentinel dev mode
* Very rough development flow on FE
* Changed position in gutter menu
* More sentinel stuff
* PR cleanup: removed testmode, removed unneeded mixins and deps
* Heliosification
* Index-level sentinel policy deletion and page title fixes
* Makes the Canaries sentinel policy real and then comments out the unfinished ones
* Rename Access Control to Administration in prep for moving Sentinel Policies and Node Pool admin there
* Sentinel policies moved within the Administration section
* Mirage fixture for sentinel policy endpoints
* Description length check and 500 prevention
* Sync review PR feedback addressed, implied buttons on radio cards
* Cull unused sentinel policies
---------
Co-authored-by: Mike Nomitch <mail@mikenomitch.com>
This is the CE side of an Enterprise-only feature; a job trying to use it in CE
will fail to validate.
To enable daily-scheduled execution entirely client-side, a job may now contain:
task "name" {
schedule {
cron {
start = "0 12 * * * *" # may not include "," or "/"
end = "0 16" # partial cron, with only {minute} {hour}
timezone = "EST" # anything in your tzdb
}
}
...
Everything about the allocation will be placed as usual, but if outside the
specified schedule, the taskrunner will block on the client, waiting for the
schedule start, before proceeding with the task driver execution, etc.
This includes a taskrunner hook, which watches for the end of the schedule, at
which point it will kill the task. Then, if restarts allow, a new task will
start and again block waiting for the start, and so on.
This also includes all the plumbing required to pipe API calls through from
command->api->agent->server->client, so that tasks can be force-run,
force-paused, or set to resume the schedule on demand.
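For context, here is a minimal sketch of the snippet above placed in a complete
job; the surrounding job, group, task, and driver details are illustrative only,
while the schedule block follows the syntax shown above:
job "report" {
  group "report" {
    task "generate" {
      driver = "docker"

      config {
        image = "example/report:latest"
      }

      # Run only between 12:00 and 16:00 in the EST timezone; outside this
      # window the taskrunner blocks (or kills the task at the end time).
      schedule {
        cron {
          start    = "0 12 * * * *"
          end      = "0 16"
          timezone = "EST"
        }
      }
    }
  }
}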
* Hacky but shows links and desc
* markdown
* Small pre-test cleanup
* Test for UI description and link rendering
* JSON jobspec docs and variable example job get UI block
* Jobspec documentation for UI block (see the example after this list)
* Description and links moved into the Title component and made into Helios components
* Marked version upgrade
* Allow links without a description and max description to 1000 chars
* Node 18 for setup-js
* markdown sanitization
* Ui to UI and docs change
* Canonicalize, copy and diff for job.ui
* UI block added to testJob for structs testing
* diff test
* Remove redundant reset
* For readability, changing the receiving pointer of copied job variables
* TestUI endpoint conversion tests
* -require +must
* Nil check on Links
* JobUIConfig.Links as pointer
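Based on the bullets above, a sketch of the resulting jobspec surface might look
like the following (see the jobspec documentation added in this change for the
authoritative syntax; names and URLs here are illustrative):
job "example" {
  ui {
    # Markdown, sanitized before rendering and limited to 1000 characters.
    description = "A **demo** job. See the runbook before making changes."

    # Links may be provided without a description.
    link {
      label = "Runbook"
      url   = "https://example.com/runbook"
    }

    link {
      label = "Dashboard"
      url   = "https://example.com/dashboard"
    }
  }
}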
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
While working on #20462 and #12319 I found that some of our scheduler tests around
down nodes or disconnected clients were enforcing invariants that were
unclear. This changeset pulls out some minor refactorings so that the bug fix PR
is easier to review. This includes:
* Migrating a few tests from `testify` to `shoenig/test` that I'm going to touch
in #12319 anyways.
* Adding test names to the node down test
* Update the disconnected client test so that we always re-process the
pending/blocked eval it creates; this eliminates 2 redundant sub-tests.
* Update the disconnected client test assertions so that they're explicit in the
test setup rather than implied by whether we re-process the pending/blocked
eval.
Ref: https://github.com/hashicorp/nomad/issues/20462
Ref: https://github.com/hashicorp/nomad/pull/12319
The new transparent proxy feature already has an implicit constraint on the
presence of the CNI plugin. But if the CNI plugin is installed on a client
running an older version of Nomad, this isn't sufficient to protect against placing tproxy
workloads on clients that can't support it. Add a Nomad version constraint as
well.
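For illustration, the implicit constraint behaves roughly like a user-supplied
jobspec constraint of this shape (the exact version bound injected by the
scheduler is an assumption here):
constraint {
  attribute = "${attr.nomad.version}"
  operator  = "semver"
  value     = ">= 1.8.0" # hypothetical bound; the scheduler injects the real one
}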
Fixes: https://github.com/hashicorp/nomad/issues/20614
* Permit Consul Connect Gateways to be used with podman
Enable use of Consul Connect Gateways (ingress/terminating/mesh) with the
podman task driver.
A previous PR added a heuristic for choosing the task driver for Connect
sidecar tasks: it used podman if any other task in the same task group was
using podman, and fell back to docker otherwise.
That PR did not consider Consul Connect gateways, which remained hardcoded to
always use the docker task driver.
This change applies the same heuristic also to gateway tasks,
enabling use of podman.
Limitations: The heuristic only works where the task group containing
the gateway also contains a podman task. Therefore it does not work
for the ingress example in the docs
(https://developer.hashicorp.com/nomad/docs/job-specification/gateway#ingress-gateway)
which uses connect native and requires the gateway be in a separate task.
* cl: add cl for connect gateway podman autodetect
* connect: add test ensuring we guess podman for gateway when possible
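As a sketch of the case where the heuristic applies, an ingress gateway declared
in a task group that also contains a podman task would now get a podman-based
gateway proxy task (service names, ports, and image are illustrative):
group "ingress" {
  network {
    mode = "bridge"
    port "inbound" {
      static = 8080
      to     = 8080
    }
  }

  service {
    name = "ingress-service"
    port = "8080"

    connect {
      gateway {
        ingress {
          listener {
            port     = 8080
            protocol = "tcp"
            service {
              name = "web"
            }
          }
        }
      }
    }
  }

  # Because this group already contains a podman task, the injected gateway
  # proxy task will also use the podman driver instead of docker.
  task "web" {
    driver = "podman"
    config {
      image = "docker.io/library/nginx:alpine"
    }
  }
}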
---------
Co-authored-by: Seth Hoenig <shoenig@duck.com>
Configuration changes to use backport assistant with LTS support. These include:
* adding a manifest file for active releases
* adding configuration to send backports to the ENT repo
While working on #20462, I discovered that some of the scheduler tests for
disconnected clients were making long blocking queries. The tests used
`testutil.WaitForResult` to wait for an evaluation to be written to the state
store. The evaluation was never written, but the tests were not correctly
returning an error for an empty query. This resulted in the tests blocking for
5s and then continuing anyways.
In practice, the evaluation is never written to the state store as part of the
test harness `Process` method, so this test assertion was meaningless. Remove
the broken assertion from the two top-level tests that used it, and upgrade
these tests to use `shoenig/test` in the process. This will save us ~50s per
test run.
When a job that implements a plugin is updated to have a new plugin ID, the old
version of the plugin is never deleted. We want to delay deleting plugins until
garbage collection to avoid race conditions between a plugin being registered
and its allocations being marked healthy.
Add logic to the state store's `DeleteCSIPlugin` method (used only by GC) to
check whether the jobs associated with the plugin have no remaining allocations
and have either been purged or been updated to no longer implement that plugin
ID.
This changeset also updates the CSI plugin lifecycle tests in the state store to
use `shoenig/test` over `testify`, and removes a spurious error log that was
happening on every periodic plugin GC attempt.
Fixes: https://github.com/hashicorp/nomad/issues/20225
The CSI hook for each allocation that claims a volume runs concurrently. If a
call to `MountVolume` happens at the same time as a call to `UnmountVolume` for
the same volume, it's possible for the second alloc to detect the volume has
already been staged, then for the original alloc to unpublish and unstage it,
only for the second alloc to then attempt to publish a volume that's been
unstaged.
The usage tracker on the volume manager was intended to prevent this behavior
but the call to claim the volume was made only after staging and publishing were
complete. Move the call to claim the volume for the usage tracker to the top of
the `MountVolume` workflow to prevent it from being unstaged until all consuming
allocations have called `UnmountVolume`.
Fixes: https://github.com/hashicorp/nomad/issues/20424
When the allocation is stopped, we deregister the service in the alloc runner's
`PreKill` hook. This ensures we delete the service registration and wait for the
shutdown delay before shutting down the tasks, so that workloads can drain their
connections. However, the call to remove the workload only logs errors and never
retries them.
Add a short retry loop to the `RemoveWorkload` method for Nomad services, so
that transient errors give us an extra opportunity to deregister the service
before the tasks are stopped, before we need to fall back to the data integrity
improvements implemented in #20590.
Ref: https://github.com/hashicorp/nomad/issues/16616
This changeset fixes three potential data integrity issues between allocations
and their Nomad native service registrations.
* When a node is marked down because it missed heartbeats, we remove Vault and
Consul tokens (for the pre-Workload Identity workflows) after we've written
the node update to Raft. This is unavoidably non-transactional because the
Consul and Vault servers aren't in the same Raft cluster as Nomad itself. But
we've unnecessarily mirrored this same behavior to deregister Nomad
services. This makes it possible for the leader to successfully write the node
update to Raft without removing services.
To address this, move the delete into the same Raft transaction. One minor
caveat with this approach is the upgrade path: if the leader is upgraded first
and a node is marked down during this window, older followers will have stale
information until they are also upgraded. This is unavoidable without
requiring the leader to unconditionally make an extra Raft write for every
down node until 2 LTS versions after Nomad 1.8.0. This temporary reduction in
data integrity for stale reads seems like a reasonable tradeoff.
* When an allocation is marked client-terminal from the client in
`UpdateAllocsFromClient`, we have an opportunity to ensure data integrity by
deregistering services for that allocation.
* When an allocation is deleted during eval garbage collection, we have an
opportunity to ensure data integrity by deregistering services for that
allocation. This is a cheap no-op if the allocation has been previously marked
client-terminal.
This changeset does not address client-side retries for the originally reported
issue, which will be done in a separate PR.
Ref: https://github.com/hashicorp/nomad/issues/16616
In the Unveil filesystem isolation mode we were mounting the shared
alloc dir with the UID/GID of the user of the task dir being mounted
and 0710 filesystem permissions. This was causing the actual task dir
to become inaccessible to other tasks in the allocation (a race where
the last mounter wins). Instead mount the shared alloc dir as nobody
with 0777 filesystem permissions.
Users can override the default sidecar task for Connect workloads. This sidecar
task might need access to certificate stores on the host. Allow adding the
`volume_mount` block to the sidecar task override.
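A sketch of the jobspec shape this enables (the volume name, destination path,
and service details are illustrative):
group "api" {
  volume "certs" {
    type      = "host"
    source    = "ca-certificates"
    read_only = true
  }

  network {
    mode = "bridge"
  }

  service {
    name = "api"
    port = "9001"

    connect {
      sidecar_service {}

      # Override the default Envoy sidecar task so it can read the host
      # certificate store.
      sidecar_task {
        volume_mount {
          volume      = "certs"
          destination = "/etc/ssl/certs"
          read_only   = true
        }
      }
    }
  }
}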
Also fixes a bug where `volume_mount` blocks would not appear in plan diff
outputs.
Fixes: https://github.com/hashicorp/nomad/issues/19786
This change exposes CNI configuration details of a network
namespace as environment variables. This allows a task to use these values to
configure itself; a potential use case is running a Raft application that binds
to the IP and port details configured by the bridge network mode.
Whenever the "exec" task driver is being used, nomad runs a plug in that in time runs the task on a container under the hood. If by any circumstance the executor is killed, the task is reparented to the init service and wont be stopped by Nomad in case of a job updated or stop.
This commit introduces two mechanisms to avoid this behaviour:
* Adds signal catching and handling to the executor, so in case of a SIGTERM, the signal will also be passed on to the task.
* Adds a pre start clean up of the processes in the container, ensuring only the ones the executor runs are present at any given time.
The `nomad plugin status :plugin_id` command lists allocations that implement
the plugin being queried. This list is filtered by the `-namespace` flag as
usual. Cluster admins will likely deploy plugins to a single namespace, but for
convenience they may want to have the wildcard namespace set in their command
environment.
Add support for handling the wildcard namespace to the CSI plugin RPC handler.
Fixes: https://github.com/hashicorp/nomad/issues/20537