Linux capabilities configurable by the task must be a subset of those configured
in the plugin configuration. Clarify that this implies `"all"` is not permitted
unless the plugin is also configured to allow all capabilities.
Fixes: https://github.com/hashicorp/nomad/issues/19059
The `qemu` driver uses our universal executor to run the qemu command line
tool. Because qemu owns the resource isolation, we don't pass in the resource
block that the universal executor uses to configure cgroups and core
pinning. This resulted in a panic.
Fix the panic by returning early in the cgroup configuration in the universal
executor. This fixes `qemu` but also any third-party drivers that might exist
and are using our executor code without passing in the resource block.
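A minimal sketch of the early-return guard, using a standalone helper and
illustrative names rather than the real executor method:

```go
package executor

import "github.com/hashicorp/nomad/plugins/drivers"

// configureResourceIsolation sketches the early return described above: when
// the driver does not pass a resources block (as qemu does not), skip cgroup
// and core-pinning setup instead of dereferencing a nil pointer.
func configureResourceIsolation(resources *drivers.Resources) error {
	if resources == nil {
		// The driver owns resource isolation; nothing to configure.
		return nil
	}
	// ... cgroup and cpuset configuration would follow here ...
	return nil
}
```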
In future work, we should ensure that the `resources` block is being translated
into qemu equivalents, so that we have support for things like NUMA-aware
scheduling for that driver.
Fixes: https://github.com/hashicorp/nomad/issues/19078
We want to run the Vault compatibility E2E test with Vault Enterprise binaries
and use Vault namespaces. Refactor the `vaultcompat` test so as to parameterize
most of the test setup logic with the namespace, and add the appropriate build
tag for the CE version of the test.
We want to run the Consul compatibility E2E test with Consul Enterprise binaries
and use Consul namespaces. Refactor the `consulcompat` test so as to
parameterize most of the test setup logic with the namespace, and add the
appropriate build tag for the CE version of the test.
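For reference, the CE variant can be gated with a Go build constraint roughly
like the following; the file contents and helper name are illustrative, and the
Enterprise counterpart would carry the opposite tag:

```go
//go:build !ent

package consulcompat

// testNamespace returns the Consul namespace used by the compat test. The CE
// build has no Consul namespaces, so the default (empty) namespace is used;
// the Enterprise variant of this file would return a real namespace instead.
func testNamespace() string {
	return ""
}
```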
Ref: https://github.com/hashicorp/nomad-enterprise/pull/1305
Just because an alloc is running does not mean Nomad is ready to serve
task logs. In a test case that reads logs immediately after starting a
task, Nomad may respond with "no logs found", in which case you just
need to wait longer. Do so in the v3 TaskLogs helper function.
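A sketch of the kind of retry the helper now performs; the function and
callback names here are illustrative, not the real v3 API:

```go
package util

import (
	"strings"
	"testing"
	"time"
)

// waitForTaskLogs polls until Nomad returns something other than
// "no logs found", or gives up after a deadline.
func waitForTaskLogs(t *testing.T, read func() (string, error)) string {
	t.Helper()
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		out, err := read()
		if err == nil && !strings.Contains(out, "no logs found") {
			return out
		}
		time.Sleep(time.Second)
	}
	t.Fatal("task logs never became available")
	return ""
}
```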
The template hook must use the Consul token for the cluster defined in
the task-level `consul` block or, if `nil`, in the group-level `consul`
block.
The Consul tokens are generated by the allocrunner consul hook, but
during the transition period we must fall back to the Nomad agent token
if workload identities are not being used.
So an empty token returned from `GetConsulTokens()` is not enough to
determine whether we should use the legacy flow (either this is an old
task or the cluster is not configured for Consul WI) or whether there is
a misconfiguration (the task or group `consul` block is using a cluster
that doesn't have an `identity` set).
In order to distinguish between the two scenarios we must iterate over
the task identities looking for one suitable for the Consul cluster
being used.
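A rough sketch of the selection and disambiguation logic; helper names, the
identity naming convention, and details differ from the real hook:

```go
package allocrunner

import "github.com/hashicorp/nomad/nomad/structs"

// consulClusterForTask sketches the lookup described above: prefer the
// task-level consul block and fall back to the group-level block.
func consulClusterForTask(task *structs.Task, group *structs.TaskGroup) string {
	if task.Consul != nil {
		return task.Consul.Cluster
	}
	if group.Consul != nil {
		return group.Consul.Cluster
	}
	return "default"
}

// taskHasConsulIdentity sketches the disambiguation step: if the task has no
// identity for the cluster, an empty token means the legacy (agent token)
// flow; if it does have one, an empty token means a misconfiguration. The
// "consul_<cluster>" naming convention shown here is an assumption.
func taskHasConsulIdentity(task *structs.Task, cluster string) bool {
	for _, wid := range task.Identities {
		if wid.Name == "consul_"+cluster {
			return true
		}
	}
	return false
}
```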
* e2e: remove old oversubscription test
* e2e: fixup and cleanup oversubscription test suite
Fix and clean up this old oversubscription test.
* use t.Cleanup instead of defer in tests
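A tiny illustration of the `t.Cleanup` change; the helpers are stand-ins, not
the real suite code:

```go
package oversubscription

import "testing"

// registerTestJob and stopTestJob stand in for the suite's real helpers.
func registerTestJob(t *testing.T) string    { t.Helper(); return "oversub-1" }
func stopTestJob(t *testing.T, jobID string) { t.Helper() }

func TestOversubscription(t *testing.T) {
	jobID := registerTestJob(t)

	// t.Cleanup runs after the test and its subtests complete, even when the
	// test fails with t.Fatal, which is why it replaces defer here.
	t.Cleanup(func() { stopTestJob(t, jobID) })

	// ... assertions about memory oversubscription would go here ...
}
```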
The version we have of `hc-install` doesn't allow installing Enterprise
binaries. Upgrade so that this is available to the development team and to our
E2E tests in the Enterprise repo.
The Consul compatibility test focuses on Connect, but it'd be a good idea to
ensure we can successfully get template data out of Consul as well.
Also tightens up the test's Consul ACL policy for the Nomad agent.
PRs #19034 and #19040 accidentally conflicted with each other without a merge
conflict when #19034 changed the method signature of `SetConsulTokens`. Because
CI doesn't rebase, both PRs tested fine and were only broken once they landed on
`main`. Fix that.
Using the latest version of Terraform, the lock file is not the same as when it
was generated. It seems the http module is no longer needed, versioned, or
present.
Add a `Postrun` and `Destroy` hook to the allocrunner's `consul_hook` to ensure
that Consul tokens we've created via WI get revoked via the logout API when
we're done with them. Also add the logout to the `Prerun` hook if we've hit an
error.
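A sketch of the cleanup step, assuming the Consul API client's ACL logout call;
the function name and surrounding shape are illustrative:

```go
package consulhook

import consulapi "github.com/hashicorp/consul/api"

// revokeConsulTokens sketches the cleanup now performed in Postrun/Destroy
// (and on Prerun error): every token obtained via workload identity login is
// logged out again.
func revokeConsulTokens(client *consulapi.Client, secretIDs []string) error {
	var lastErr error
	for _, secretID := range secretIDs {
		// Logout revokes the token that authenticates the request itself, so
		// the token to revoke is passed in the write options.
		if _, err := client.ACL().Logout(&consulapi.WriteOptions{Token: secretID}); err != nil {
			lastErr = err
		}
	}
	return lastErr
}
```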
The allocrunner has a service registration handler that proxies various API
calls to Consul. With multi-cluster support (for ENT), the service registration
handler is what selects the correct Consul client. The name of this field in the
allocrunner and taskrunner code base looks like it's referring to the actual
Consul API client. This was actually the case before Nomad native service
discovery was implemented, but now the name is misleading.
When creating the binding rule, `BindName` must match the pattern used
for the role name, otherwise the task will not be able to log in to
Consul.
Also update the equality check for the binding rule to ensure this
property is held even if the auth method already has existing binding
rules attached.
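A sketch of the constraint, using the Consul API types; the
"nomad-tasks-<namespace>" pattern shown here is an assumption for illustration,
not necessarily Nomad's real naming scheme:

```go
package consulsetup

import consulapi "github.com/hashicorp/consul/api"

// nomadTaskBindingRule sketches the fix described above: BindName must
// produce exactly the name that the role-creation code uses.
func nomadTaskBindingRule(authMethodName string) *consulapi.ACLBindingRule {
	return &consulapi.ACLBindingRule{
		AuthMethod: authMethodName,
		BindType:   consulapi.BindingRuleBindTypeRole,
		// Interpolates claims exposed by the JWT auth method; if this does
		// not match an existing role name, the task's login fails.
		BindName: "nomad-tasks-${value.nomad_namespace}",
	}
}
```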
* make the little dots consistent
* don't trim delimiter as that over-matches
* test jobspec2 package
* copy api/WorkloadIdentity.TTL -> structs
* test ttl parsing
* fix hcl1 vs hcl2 parsing mismatch
* make jobspec(1) tests match jobspec2 tests
A series of errors may happen when a token is invalidated while the
Vault client is waiting to renew it. The token may have been invalidated
for several reasons: for example, the alloc finished running and is now
terminal, or the token was changed directly in Vault out-of-band.
Most of the errors are caused by retries that will never succeed until
Vault fully removes the token from its state.
This commit prevents the retries by making the error `invalid lease ID`
a fatal error.
In earlier versions of Vault, this case was covered by the error `lease
not found or lease is not renewable`, which is already considered to be
a fatal error by Nomad:
2d0cde4ccc/vault/expiration.go (L636-L639)
But https://github.com/hashicorp/vault/pull/5346 introduced an earlier
`nil` check that generates a different error message:
750ab337ea/vault/expiration.go (L1362-L1364)
Both errors happen for the same reason (`le == nil`) and so should be
considered fatal on renewal.
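A sketch of the resulting check; the helper name is illustrative, but both
error strings come straight from the Vault code linked above:

```go
package vaultclient

import "strings"

// isFatalRenewalError sketches the change described above: both the older
// Vault error string and the newer "invalid lease ID" message indicate the
// lease is gone, so retrying the renewal can never succeed.
func isFatalRenewalError(err error) bool {
	if err == nil {
		return false
	}
	msg := err.Error()
	return strings.Contains(msg, "lease not found or lease is not renewable") ||
		strings.Contains(msg, "invalid lease ID")
}
```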
Previously, a Vault token could be renewed either periodically via the
renewal loop or immediately by calling `RenewToken()`.
But a race condition in the renewal loop could cause an attempt to renew
an expired token. If both `updateCh` and `renewalCh` are active (such as
when a task stops at the same time its token is waiting for renewal),
the following `select` picks a `case` at random.
78f0c6b2a9/client/vaultclient/vaultclient.go (L557-L564)
If `case <-renewalCh` is picked, the token is incorrectly re-added to
the heap, causing unnecessary renewals of a token that is already expired.
1604dba508/client/vaultclient/vaultclient.go (L505-L510)
To prevent this situation, the `renew()` function should only renew
tokens that are currently in the heap. `RenewToken()` must therefore push
the token onto the heap and wait for the renewal to happen instead of
calling `renew()` directly. Calling `renew()` directly could cause another
race condition where the token is renewed twice: once by `RenewToken()`
calling `renew()` directly, and a second time if the renewal loop happens
to pick up the token as soon as `RenewToken()` adds it to the heap.
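A simplified, self-contained sketch of the resulting pattern; the types and
names are illustrative stand-ins, not Nomad's real `vaultclient` code:
`RenewToken()` only enqueues work, and the single loop is the only code path
that renews.

```go
package vaultclient

import "time"

// renewalRequest and tokenRenewer show the shape of the fix: every renewal
// flows through one loop.
type renewalRequest struct {
	token string
	errCh chan error
}

type tokenRenewer struct {
	requestCh chan *renewalRequest
}

// RenewToken enqueues the token for the renewal loop instead of renewing it
// directly, so a token can never be renewed by two code paths at once.
func (r *tokenRenewer) RenewToken(token string) <-chan error {
	req := &renewalRequest{token: token, errCh: make(chan error, 1)}
	r.requestCh <- req
	return req.errCh
}

// run is the only place that talks to Vault. Because it renews only tokens it
// is currently tracking, a stopped task's token is never re-added to the heap.
func (r *tokenRenewer) run(stopCh <-chan struct{}, renew func(token string) error) {
	tracked := map[string]*renewalRequest{}
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case req := <-r.requestCh:
			tracked[req.token] = req // stands in for pushing onto the heap
		case <-ticker.C:
			for token, req := range tracked {
				req.errCh <- renew(token)
				delete(tracked, token)
			}
		case <-stopCh:
			return
		}
	}
}
```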
* runAction model and adapter funcs
* Hacky but functional action running from job index
* remove proxy hack
* runAction added to taskSubRow
* Added tty and ws_handshake to job action endpoint call
* delog
* Bunch of streaming work
* action started, running, and finished notification titles, neutral color, and ansi escape
* Handle random alloc selection in the web ui
* Run on All implementation in web ui
* [ui] Helios two-step button and uniform title bar for Actions (#18912)
* Initial pass at title bar button uniformity
* Vertical align on actions dropdown toggle and small edits to prevent keynav overflow issue
* We represent loading state w text and disable now
* Pageheader component to align buttons
* Buttons standardized
* Actions dropdown reveal for multi-alloc job
* Notification code styles
* An action-having single alloc job
* Mirageed
* Actions-laden jobs in mirage
* Separating allocCount and taskCount in mirage mocks
* Unbreak stop job tests
* Permissions for actions dropdown
* tests for running actions from the job index page
* running from a task row actions tests
* some TODO cleanup
* PR feedback addressed, including page helper for actions
Remove the now-unused original configuration blocks for Consul and Vault from
the client. When the client needs to refer to a Consul or Vault block it will
always be for a specific cluster for the task/service. Add a helper for
accessing the default clusters (for the client's own use).
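A sketch of the idea behind the helper, using simplified stand-in types rather
than Nomad's real client config structs:

```go
package config

// ConsulConfig and VaultConfig are simplified stand-ins; per-cluster blocks
// live in maps keyed by cluster name, and the client's own needs always use
// the cluster named "default".
type ConsulConfig struct{ Addr string }
type VaultConfig struct{ Addr string }

type Config struct {
	ConsulConfigs map[string]*ConsulConfig
	VaultConfigs  map[string]*VaultConfig
}

const defaultClusterName = "default"

// GetDefaultConsul returns the Consul block the client itself should use.
func (c *Config) GetDefaultConsul() *ConsulConfig {
	return c.ConsulConfigs[defaultClusterName]
}

// GetDefaultVault returns the Vault block the client itself should use.
func (c *Config) GetDefaultVault() *VaultConfig {
	return c.VaultConfigs[defaultClusterName]
}
```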
This is the second of three changesets for this work. The remainder will implement the
same changes in the `command/agent` package.
As part of this work I discovered and fixed two bugs:
* The gRPC proxy socket that we create for Envoy was only ever created using the
default Consul cluster's configuration, which prevented Connect from being used
with a non-default cluster.
* The Consul configuration we use for templates always came from the default
Consul cluster's configuration (although it used the correct Consul token for
the non-default cluster), which prevented templates from being used with a
non-default cluster.
Ref: https://github.com/hashicorp/nomad/issues/18947
Ref: https://github.com/hashicorp/nomad/pull/18991
Fixes: https://github.com/hashicorp/nomad/issues/18984
Fixes: https://github.com/hashicorp/nomad/issues/18983
Submitting a Consul or Vault token with a job is deprecated in Nomad 1.7 and
intended for removal in Nomad 1.9. We added a deprecation warning to the CLI
when the user passes in the appropriate flag or environment variable, but this
warns even if the job does not use Vault or Consul and you simply happen to have
the appropriate environment variable set in your environment. While this is
generally a bad practice (because the token is leaked to Nomad), it's also the
existing practice for some users.
Move the warning to the job admission hook. This will allow us to warn only when
appropriate, and that will also help the migration process by producing warnings
only for the relevant jobs.
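A sketch of the admission-time check; the function name and warning wording are
illustrative, but the job-level token fields are the deprecated ones described
above:

```go
package nomad

import (
	"fmt"

	"github.com/hashicorp/nomad/nomad/structs"
)

// tokenDeprecationWarnings sketches the admission hook: warnings are produced
// only when the submitted job actually carries a Consul or Vault token, rather
// than whenever the CLI sees a flag or environment variable.
func tokenDeprecationWarnings(job *structs.Job) []error {
	var warnings []error
	if job.ConsulToken != "" {
		warnings = append(warnings,
			fmt.Errorf("submitting a Consul token with the job is deprecated and will be removed in a future release"))
	}
	if job.VaultToken != "" {
		warnings = append(warnings,
			fmt.Errorf("submitting a Vault token with the job is deprecated and will be removed in a future release"))
	}
	return warnings
}
```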
Remove the now-unused original configuration blocks for Consul and Vault from
the server. When the server needs to refer to a Consul or Vault block it will
always be for a specific cluster for the task/service. Add a helper for
accessing the default clusters (for the server's own use).
This is one of three changesets for this work. The remainder will implement the
same changes in the `client` package and in the `command/agent` package.
As part of this work I discovered that the job submission hook for Vault only
checks the enabled flag on the default cluster, rather than the clusters that
are used by the job being submitted. This could return an error on job
registration saying that Vault is disabled even though the cluster the job
actually uses is enabled. Fix that to check only the cluster(s) used by the job.
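A sketch of the corrected check, using simplified stand-in types rather than
the real server config and job structs:

```go
package nomad

import "fmt"

// vaultClustersEnabled sketches the fix: instead of looking only at the
// default Vault cluster's enabled flag, every cluster the job actually
// references must be enabled.
func vaultClustersEnabled(enabledByCluster map[string]bool, clustersUsed []string) error {
	for _, cluster := range clustersUsed {
		if !enabledByCluster[cluster] {
			return fmt.Errorf("Vault cluster %q is disabled or not configured", cluster)
		}
	}
	return nil
}
```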
Ref: https://github.com/hashicorp/nomad/issues/18947
Fixes: https://github.com/hashicorp/nomad/issues/18990