When a Connect service is registered with Consul, Nomad includes a nested
`Connect.SidecarService` registration that carries the health checks for the
Envoy proxy. Because these checks are not part of the job spec, the alloc
health tracker created by `health_hook` doesn't know to read their values.
In many circumstances this won't be noticed, but if the Envoy health check
takes longer to pass than `update.min_healthy_time` (perhaps because it has
been set low), a deployment can progress too early, leaving a brief window in
which no healthy instances of the service are available in Consul.
Update the Consul service client to find the nested sidecar service in the
service catalog and attach it to the results provided to the tracker. The
tracker can then check the sidecar health checks.
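A minimal sketch of the lookup, assuming a configured Consul API client and the parent service's registration ID (the function and service ID are illustrative, not the actual Nomad client code); by default Consul registers the Connect sidecar under the parent's ID plus a `-sidecar-proxy` suffix:

```go
package main

import (
	"fmt"

	consulapi "github.com/hashicorp/consul/api"
)

// sidecarChecks returns the health checks registered for the Envoy sidecar
// that belongs to the given parent service ID.
func sidecarChecks(client *consulapi.Client, parentServiceID string) ([]*consulapi.AgentCheck, error) {
	all, err := client.Agent().Checks()
	if err != nil {
		return nil, err
	}
	sidecarID := parentServiceID + "-sidecar-proxy"
	var checks []*consulapi.AgentCheck
	for _, check := range all {
		if check.ServiceID == sidecarID {
			checks = append(checks, check)
		}
	}
	return checks, nil
}

func main() {
	client, err := consulapi.NewClient(consulapi.DefaultConfig())
	if err != nil {
		panic(err)
	}
	checks, err := sidecarChecks(client, "web-http") // hypothetical service ID
	if err != nil {
		panic(err)
	}
	for _, c := range checks {
		fmt.Printf("%s: %s\n", c.Name, c.Status)
	}
}
```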
Fixes: https://github.com/hashicorp/nomad/issues/19269
The EBS snapshot operation can take a long time to complete. Recent runs have
shown we sometimes hit the 10s timeout on the context we give the CLI command.
Extend this timeout so that we don't get spurious failures.
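A minimal sketch of the pattern being extended, assuming the snapshot is created by shelling out to the CLI (the command arguments and the new timeout value are illustrative, not the actual test code):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Previously a 10s deadline; a slow EBS snapshot needs more headroom.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "nomad", "volume", "snapshot", "create", "ebs-vol0")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}
```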
Fixes: https://github.com/hashicorp/nomad/issues/19118
* Clean up Consul tokens by accessor ID rather than secret ID, which has been
failing for some time (see the sketch after this list) with:
> 404 (Cannot find token to delete)
* Expect a subset of Consul namespaces, since the Consul test cluster may have
namespaces from other unrelated tests.
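A sketch of the cleanup change using the Consul API client (the helper name and how the accessor IDs are collected are assumptions): the ACL token delete endpoint is keyed by accessor ID, so deleting by secret ID produces the 404 above.

```go
package cleanup

import (
	consulapi "github.com/hashicorp/consul/api"
)

// cleanupTokens deletes each test-created token by its accessor ID, which is
// what the ACL token delete endpoint expects.
func cleanupTokens(client *consulapi.Client, accessorIDs []string) error {
	for _, accessor := range accessorIDs {
		if _, err := client.ACL().TokenDelete(accessor, nil); err != nil {
			return err
		}
	}
	return nil
}
```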
This commit introduces the `preventRescheduleOnLost` parameter, which indicates that the task group can't afford to have multiple instances running at the same time. When a node goes down, its allocations are marked as unknown but no replacements are rescheduled. If the lost node comes back up, the allocations reconnect and continue to run.
If `max_client_disconnect` is also enabled and the group has a reschedule policy, an error is returned.
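A hedged sketch of that validation (field and type names are simplified stand-ins, not the actual Nomad structs): the combination is rejected because a rescheduled replacement plus a reconnecting original could briefly run two instances of the group at once.

```go
package validate

import (
	"errors"
	"time"
)

type ReschedulePolicy struct {
	Attempts  int
	Unlimited bool
}

type TaskGroup struct {
	PreventRescheduleOnLost bool
	MaxClientDisconnect     *time.Duration
	Reschedule              *ReschedulePolicy
}

// validateRescheduleOnLost rejects groups that combine PreventRescheduleOnLost
// and max_client_disconnect with a reschedule policy that could create replacements.
func validateRescheduleOnLost(tg *TaskGroup) error {
	if !tg.PreventRescheduleOnLost || tg.MaxClientDisconnect == nil {
		return nil
	}
	if tg.Reschedule != nil && (tg.Reschedule.Attempts > 0 || tg.Reschedule.Unlimited) {
		return errors.New("cannot combine a reschedule policy with max_client_disconnect and the prevent-reschedule-on-lost behavior")
	}
	return nil
}
```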
Implements issue #10366
Co-authored-by: Dom Lavery <dom@circleci.com>
Co-authored-by: Tim Gross <tgross@hashicorp.com>
Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
Some of our documentation on `tls` configuration could be clearer about
whether we're referring to mTLS or TLS. Also, when ACLs are enabled it's fine to
have `verify_https_client=false` (the default). Make it clear that this is an
acceptably secure configuration and that it's in fact recommended, to avoid the
pain of distributing client certificates to users' browsers.
* An example job with a few interesting actions
* A pretty different example job
* Tests updated to use a constant for the number of default templates
* Removed default jobspec params and formatted
This will dump many of the interesting parts of cluster state, including
available nodes and their status, existing allocations and their status,
and existing evaluations and their status.
Fixes some errors in the documentation for the Consul integration, based on
tests locally without using the `nomad setup consul` command and updating the
docs to match.
* Consul CE doesn't support the `-namespace-rule-bind-namespace` option.
* The binding rule for services should not include the Nomad namespace in the
`bind-name` parameter (the service is registered in the appropriate Consul
namespace).
* The role for tasks should include the suffix "-tasks" in the name to match the
binding rule we create.
* Fix the Consul bound audiences to be a list of strings
* Fix some quoting issues in the commands.
In #18754 we accidentally fixed a bug that prevented poststop tasks from getting
access to Variables. This was fixed in the 1.6.x branch in #19270, at which
point we discovered the fix had already landed in main as part of the auth
refactor. Add a changelog entry for it.
Clients prior to Nomad 1.7 cannot support the new workload identity-based
authentication to Consul and Vault. Add an implicit Nomad version constraint on
job submission for task groups that use the new workflow.
Includes a constraint test showing same-version prerelease handling.
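A hedged sketch of what the implicit constraint could look like using the `nomad/api` package (the exact version string and the `-a` prerelease lower bound are assumptions made for illustration):

```go
package sketch

import (
	nomadapi "github.com/hashicorp/nomad/api"
)

// implicitIdentityConstraint returns a semver constraint on the node's Nomad
// version; the "-a" prerelease suffix lets prereleases of the minimum version
// satisfy the constraint as well.
func implicitIdentityConstraint() *nomadapi.Constraint {
	return nomadapi.NewConstraint("${attr.nomad.version}", "semver", ">= 1.7.0-a")
}
```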
Some sections of the `consul` configuration are relevant only for clients or
servers. We updated our Vault docs to split these parameters out into their own
sections for clarity. Match that for the Consul docs.
The new Workload Identity workflow for Vault tokens correctly handles poststop
tasks, but the legacy workflow does not. Attempts to get a Vault token are
rejected if the allocation is server-terminal or client-terminal, but we should
be waiting until the allocation is client-terminal (only) so that poststop tasks
get a chance to get Vault tokens too.
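A minimal sketch of the narrowed check, assuming Nomad's `structs` package (the function name and the surrounding handler are illustrative):

```go
package sketch

import (
	"fmt"

	"github.com/hashicorp/nomad/nomad/structs"
)

// canDeriveVaultToken only rejects the request once the client considers the
// allocation terminal, so a poststop task that runs after the server marks the
// alloc complete can still obtain a Vault token.
func canDeriveVaultToken(alloc *structs.Allocation) error {
	if alloc.ClientTerminalStatus() {
		return fmt.Errorf("allocation %q is terminal on the client", alloc.ID)
	}
	return nil
}
```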
Fixes: https://github.com/hashicorp/nomad/issues/16886
Update the `nomad setup consul` command to include a `Selector` for the
`NamespaceRule` so the logic is only applied when the token has a claim
for `consul_namespace`.
Jobs without an explicit `consul.namespace` value receive a JWT without
the `consul_namespace` claim because Nomad is unable to determine which
Consul namespace should be used.
By using `NamespaceRules`, cluster operators are able to set a default
value for these jobs.
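A hedged sketch of the rule using the Consul API types (the exact selector expression is illustrative, not necessarily what `nomad setup consul` emits):

```go
package setup

import (
	consulapi "github.com/hashicorp/consul/api"
)

// namespaceRules binds workloads to the Consul namespace named in their
// identity claim, and only when that claim is present on the token.
func namespaceRules() []*consulapi.ACLAuthMethodNamespaceRule {
	return []*consulapi.ACLAuthMethodNamespaceRule{
		{
			Selector:      `"consul_namespace" in value`,
			BindNamespace: "${value.consul_namespace}",
		},
	}
}
```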
When porting the `ConsulTemplate` test, I made a last-minute refactor to the
assertions for waiting on files, and accidentally inverted the test assertion in
the process.
Also, when running `jobs3.Submit` you need to include the `Namespace` option so
that the cleanup function that gets returned deletes the job from the correct
namespace. This was causing the namespace cleanup to fail because the job
deletion had failed.
* API command and jobspec docs
* PR comments addressed
* API docs for job/jobid/action socket
* Removing a perhaps incorrect origin of job_id across the jobs api doc
* PR comments addressed
In order to correctly handle Consul namespaces, auth methods and binding rules
must always be created in the default namespace only.
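A minimal sketch with the Consul API client (the auth method configuration itself is elided): pin the write to the `default` namespace so the objects don't land in whatever namespace the client happens to be configured for.

```go
package setup

import (
	consulapi "github.com/hashicorp/consul/api"
)

// createAuthMethod always writes the auth method into the default namespace,
// regardless of the namespace configured on the client.
func createAuthMethod(client *consulapi.Client, method *consulapi.ACLAuthMethod) error {
	opts := &consulapi.WriteOptions{Namespace: "default"}
	_, _, err := client.ACL().AuthMethodCreate(method, opts)
	return err
}
```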
---------
Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
and error more verbosely if it fails
Also, add extra information to a failed evaluation for more error visibility in other tests.
---------
Co-authored-by: Juanadelacuesta <juanita.delacuestamorales@hashicorp.com>
In the legacy Consul token workflow, we check the user's token's permissions in
Consul at the time of job submit. The new task-level `consul` block was not
being respected when checking the list of namespaces.
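An illustrative sketch of the namespace collection (types are simplified stand-ins, not the actual job structs): gather namespaces from both group-level and task-level `consul` blocks so the submitter's token is checked against all of them.

```go
package validate

type ConsulBlock struct {
	Namespace string
}

type Task struct {
	Consul *ConsulBlock
}

type TaskGroup struct {
	Consul *ConsulBlock
	Tasks  []*Task
}

// consulNamespaces returns the distinct Consul namespaces referenced by any
// group-level or task-level consul block in the job.
func consulNamespaces(groups []*TaskGroup) []string {
	seen := map[string]struct{}{}
	var namespaces []string
	add := func(c *ConsulBlock) {
		if c == nil || c.Namespace == "" {
			return
		}
		if _, ok := seen[c.Namespace]; !ok {
			seen[c.Namespace] = struct{}{}
			namespaces = append(namespaces, c.Namespace)
		}
	}
	for _, tg := range groups {
		add(tg.Consul)
		for _, task := range tg.Tasks {
			add(task.Consul)
		}
	}
	return namespaces
}
```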
The `script_check_hook` runs at the task level but can create script checks for
both task-level services and group-level services. Now that we allow the Consul
namespace to be set at the task-level `consul.namespace`, we need to have both
possible namespaces handy when creating and updating checks.
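A small illustrative helper (not the actual hook internals) showing the namespace choice:

```go
package checks

// checkNamespace picks the Consul namespace for a script check: a task-level
// service with its own consul.namespace wins, otherwise the group-level
// namespace applies.
func checkNamespace(groupNS, taskNS string, taskLevelService bool) string {
	if taskLevelService && taskNS != "" {
		return taskNS
	}
	return groupNS
}
```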
Refactor the JWT token derivation logic to only take a single request
since it was only ever called with a map of length one.
The original implementation received multiple requests to match the
legacy flow, but legacy flow requests were batched from the Nomad
client to the server, which doesn't happen for JWT. Each JWT request
goes directly from the Nomad client to the Consul agent, so there is no
batching involved.
Token claims are used in several dynamic configurations in Consul and
Vault, such as Consul ACL bind and namespace rules, and Vault templated
policies.
Adding a claim for the Consul and Vault namespace defined for the
service or task allows cluster operators to create more flexible and
precise rules.
The `consul_namespace` claim is added to workload identities for Consul
services and to task workload identities that have the `consul_` name
prefix and are affected by a task or group `consul` block.
The `vault_namespace` claim is added to task workload identities that
have the `vault_` name prefix and are affected by a `vault` block.
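For illustration only (all values are made up), the decoded claims for a task identity named `vault_default` in a job that sets `vault.namespace` might look like:

```go
package example

// exampleClaims shows the new vault_namespace claim alongside a few of the
// standard Nomad identity claims; the values here are hypothetical.
var exampleClaims = map[string]any{
	"nomad_namespace": "default",
	"nomad_job_id":    "example",
	"nomad_task":      "redis",
	"vault_namespace": "engineering",
}
```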
When configuring Consul for multi-namespace support, the JWT auth method
needs to specify namespace rules. This attribute is set to `nil` in CE
but is used in Nomad ENT.