* fix panic from nil ReschedulePolicy
commit 279775082c (pr #26279)
intended to return an error for sysbatch jobs with a reschedule block,
but because it skipped populating the `ReschedulePolicy`'s pointer fields,
a nil pointer panic occurred before the job could be rejected
with the intended error.
in particular, in `command/agent/job_endpoint.go`, `func ApiTgToStructsTG`,
```
if taskGroup.ReschedulePolicy != nil {
tg.ReschedulePolicy = &structs.ReschedulePolicy{
Attempts: *taskGroup.ReschedulePolicy.Attempts,
Interval: *taskGroup.ReschedulePolicy.Interval,
```
`*taskGroup.ReschedulePolicy.Interval` was a nil pointer.
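A minimal sketch of the kind of guard that avoids the panic; the shape of the fix here is illustrative, not the exact change that landed:
```
// illustrative only: dereference the API policy's pointer fields defensively
// so a partially populated reschedule block cannot panic before the job is
// rejected with the intended validation error.
if taskGroup.ReschedulePolicy != nil {
	rp := &structs.ReschedulePolicy{}
	if taskGroup.ReschedulePolicy.Attempts != nil {
		rp.Attempts = *taskGroup.ReschedulePolicy.Attempts
	}
	if taskGroup.ReschedulePolicy.Interval != nil {
		rp.Interval = *taskGroup.ReschedulePolicy.Interval
	}
	tg.ReschedulePolicy = rp
}
```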
* fix e2e test jobs
In #8099 we fixed a bug where garbage collecting a job with
`disconnect.stop_on_client_after` would spawn recursive delayed evals. But when
applied to disconnected allocs with `replace=true`, the fix prevents us from
emitting a blocked eval if there's no room for the replacement.
Update the guard on creating blocked evals so that rather than checking for
`IsZero`, we check whether we are later than the `WaitUntil` time. This separates
the guard from the logic guarding the creation of delayed evals so that we can
potentially create both when needed.
Ref: https://github.com/hashicorp/nomad/pull/8099/files#r435198418
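A hedged sketch of the new guard shape; variable names like `now` and `eval.WaitUntil` are illustrative stand-ins, not the exact scheduler code:
```
// sketch: gate blocked-eval creation on whether we are already past
// WaitUntil, rather than on WaitUntil being unset (IsZero), so a delayed
// eval and a blocked eval can each be created when needed.
createBlocked := now.After(eval.WaitUntil) // previously: eval.WaitUntil.IsZero()
if createBlocked {
	// emit the blocked eval for the replacement allocation
}
```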
* fix(doc): fix links for task driver plugins
host URL was wrong; changed `develoepr` to `developer`
* Update stateful-workloads.mdx
Fix link for Nomad event stream page
When emitting rate metrics, we use the identity string within the
labels to better describe the caller. If the register RPC uses an
introduction identity, the labels now reflect this correctly.
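Roughly the pattern involved, as a sketch assuming the go-metrics API Nomad uses; the metric key, label name, and `identityName` variable are illustrative, not the exact ones emitted:
```
// sketch: include the caller's identity string as a label so rate metrics
// can distinguish an introduction identity from a regular node identity.
metrics.IncrCounterWithLabels(
	[]string{"client", "rpc", "register"},
	1,
	[]metrics.Label{{Name: "identity", Value: identityName}},
)
```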
The Nomad garbage collector can be triggered manually which among
other things will remove down nodes from state. If a cleaned node
reconnects after this happens, it will be unable to reconnect with
the cluster running strict enforcement, even if it has a valid
node identity token.
This change fixes the issue by allowing nodes to reconnect with a
node identity, even if their state object has been removed by the
GC process. This will only work if the node identity has not
expired. If it has and strict enforcement is enabled, the operator
will have to re-introduce the node to the cluster, which feels like
expected and correct behaviour.
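A rough sketch of the decision being described; the `hasValidNodeIdentity` helper and surrounding names are hypothetical, not the actual handler code:
```
// sketch: a node missing from state (e.g. removed by the garbage collector)
// may still register if it presents an unexpired node identity; otherwise
// strict enforcement requires the node to be re-introduced.
existing, err := stateStore.NodeByID(nil, args.Node.ID)
if err != nil {
	return err
}
if existing == nil && strictEnforcement {
	if !hasValidNodeIdentity(args) { // hypothetical helper: identity present and unexpired
		return errors.New("node must be re-introduced to the cluster")
	}
	// otherwise accept the registration on the strength of the node identity
}
```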
The RPC handler function is quite long, so moving the argument
validation into its own function reduces its length and makes sense
from a code organisation standpoint.
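For example, something of this shape, where the `validateNodeRegisterRequest` helper is hypothetical:
```
// sketch: pull argument validation out of the long RPC handler into a small
// helper so the handler body stays focused on the registration flow.
func (n *Node) Register(args *structs.NodeRegisterRequest, reply *structs.NodeUpdateResponse) error {
	if err := validateNodeRegisterRequest(args); err != nil {
		return err
	}
	// ... rest of the handler ...
	return nil
}
```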
In https://github.com/hashicorp/nomad/issues/15459 we've had a bit of
back-and-forth as a result of applying Nomad environment variables where they
typically should not be used. Clarify that the env vars are for the CLI and
mostly not for the agent. Also move the `NOMAD_CLI_SHOW_HINTS` description into
the correct section.
The docs for the `template` block accurately describe the template configuration
default function denylist in the body, but the default values are missing from
the parameter listing. The equivalent docs in the `client` configuration are
missing `executeTemplate` as well.
When a node misses a heartbeat and is marked down, Nomad deletes service
registration instances for that node. But if the node then successfully
heartbeats before its allocations are marked lost, the services are never
restored. The node is unaware that it has missed a heartbeat and there's no
anti-entropy on the node in any case.
We already delete services when the plan applier marks allocations as stopped,
so deleting the services when the node goes down is only an optimization to more
quickly divert service traffic. But because the state after a plan apply is the
"canonical" view of allocation health, this breaks correctness.
Remove the code path that deletes services from nodes when nodes go down. Retain
the state store code that deletes services when allocs are marked terminal by
the plan applier. Also add a path in the state store to delete services when
allocs are marked terminal by the client. This gets back some of the
optimization but avoids the correctness bug because marking the allocation
client-terminal is a one way operation.
Fixes: https://github.com/hashicorp/nomad/issues/16983
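A sketch of the added state store path, assuming Nomad's `ClientTerminalStatus` helper; the delete helper name is hypothetical:
```
// sketch: when the client updates an allocation to a terminal client status,
// also delete its service registrations, since client-terminal is a one-way
// transition and the services can never become healthy again.
if alloc.ClientTerminalStatus() {
	// hypothetical helper name for the existing txn-based delete
	if err := s.deleteServiceRegistrationsForAlloc(txn, index, alloc.ID); err != nil {
		return err
	}
}
```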
* Update UI, code comment, and README links to docs, tutorials
* fix typo in ephemeral disks learn more link url
* feedback on typo
Co-authored-by: Tim Gross <tgross@hashicorp.com>
This change implements the client -> server workflow for Nomad
node introduction. A Nomad node can optionally be started with an
introduction token, which is a signed JWT containing claims for
the node registration. The server handles this according to the
enforcement configuration.
The introduction token can be provided by env var, CLI flag, or
by placing it within a default filesystem location. The latter
option does not override the CLI flag or env var.
The region claim has been removed from the initial claims set of
the intro identity. This boundary is guarded by mTLS and aligns
with the node identity.
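Conceptually the server-side check looks something like the following. This is a hedged sketch using the generic golang-jwt/v5 API, not Nomad's actual identity code; `keyFunc`, `introToken`, and the enforcement handling are assumptions:
```
// sketch: verify the signed introduction JWT before honouring its node
// registration claims; the enforcement configuration decides whether a
// failure is fatal.
claims := jwt.MapClaims{}
_, err := jwt.ParseWithClaims(introToken, claims, keyFunc,
	jwt.WithValidMethods([]string{"EdDSA"}),
	jwt.WithExpirationRequired())
if err != nil && enforcementIsStrict {
	return fmt.Errorf("rejecting node registration: %w", err)
}
```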
The state store test for Variables check-and-set behavior for deletes uses the
same state store for a set of parallel tests. But one of the tests overlaps
another by using the same path, and this can cause spurious test failures by
hitting the CAS conflict error. This overlap doesn't appear to be intentional,
so change the test to use a different path.
Also cleaned up some unused test helpers in the same file.
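The fix amounts to making sure each parallel case writes to its own path, along these lines (illustrative only):
```
// sketch: give each parallel subtest its own variable path so check-and-set
// deletes never race on the same key and report spurious CAS conflicts.
for _, tc := range testCases {
	tc := tc
	t.Run(tc.name, func(t *testing.T) {
		t.Parallel()
		path := "test/delete-cas/" + tc.name // unique per subtest
		// ... write a variable at path, then exercise the CAS delete ...
		_ = path
	})
}
```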
* Add -log-file-export and -log-lookback commands to add historical logs to
debug capture
* use monitor.PrepFile() helper for other historical log tests
when cgroup setup panics, the executor dies, leaving an orphaned process still running.
the panic fix:
* don't `panic()`
* and return an empty, but non-nil, func on cgroup error
feature fix:
* allow non-root agent to proceed with exec when cgroups are off
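The shape of the panic fix, as a hedged sketch with hypothetical function names (`configureCgroups`, `cgroupsAvailable`, `setupCgroup`, `teardownCgroup`):
```
// sketch: on cgroup setup failure, return a no-op (but non-nil) cleanup func
// and an error instead of panicking, so callers that blindly invoke the
// cleanup func don't crash; when cgroups are simply unavailable, a non-root
// agent can still proceed with exec without cgroup isolation.
func configureCgroups(cfg *Config) (cleanup func(), err error) {
	noop := func() {}
	if !cgroupsAvailable() {
		return noop, nil // cgroups off: proceed without isolation
	}
	if err := setupCgroup(cfg); err != nil {
		return noop, fmt.Errorf("failed to configure cgroup: %w", err)
	}
	return teardownCgroup, nil
}
```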
Whenever we add a new Raft message type, we almost always need to add a new
version check to ensure that leaders aren't trying to write unknown Raft entries
to older followers. Leave a note about this where the edits happen to reduce the
risk of this unfortunately common bug.
Ref: https://github.com/hashicorp/nomad-enterprise/pull/2973
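A hedged illustration of the kind of guard meant here; the helper mirrors Nomad's existing `ServersMeetMinimumVersion` pattern, but the signature, message type, and minimum version below are approximate:
```
// sketch: before applying a new Raft message type, verify every server in
// the region is new enough to decode it, so older followers never receive
// log entries they cannot handle.
minVersionNewMsgType := version.Must(version.NewVersion("1.11.0")) // made up

if !ServersMeetMinimumVersion(s.Members(), s.Region(), minVersionNewMsgType, true) {
	return fmt.Errorf("all servers must be running version %v or later to apply this entry",
		minVersionNewMsgType)
}
```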