The node reconciler never took node feasibility into account. When nodes were
excluded from allocation placement, for example because constraints were not
met, the desired total and desired canary numbers were never updated in the
reconciler to account for the excluded nodes. As a result, deployments would
never become successful.
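A minimal sketch of the accounting fix, with hypothetical, simplified stand-ins
for the real scheduler types:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the real scheduler types.
type Node struct{ Class string }
type DeploymentState struct{ DesiredTotal, DesiredCanaries int }

// desiredTotals counts only the nodes that pass feasibility before setting
// the deployment's desired numbers, so excluded nodes can't prevent the
// deployment from ever completing.
func desiredTotals(nodes []Node, requiredClass string, canaryPct int) DeploymentState {
	feasible := 0
	for _, n := range nodes {
		if n.Class == requiredClass { // stand-in for real constraint checking
			feasible++
		}
	}
	return DeploymentState{
		DesiredTotal:    feasible,
		DesiredCanaries: (feasible*canaryPct + 99) / 100, // round up
	}
}

func main() {
	nodes := []Node{{Class: "gpu"}, {Class: "cpu"}, {Class: "gpu"}}
	fmt.Println(desiredTotals(nodes, "gpu", 50)) // {2 1}
}
```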
In cases where a system job had as many canary allocations deployed as there
were eligible nodes, the scheduler would incorrectly mark the deployment as
complete, as if auto-promotion were set. This edge case uncovered a bug in the
`setDeploymentStatusAndUpdates` method, and since we round canary node counts
up, it may not be such an edge case after all.
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
This changeset adds system scheduler tests of various permutations of the `update`
block. It also fixes a number of bugs discovered in the process.
* Don't create deployment for in-flight rollout. If a system job is in the
middle of a rollout prior to upgrading to a version of Nomad with system
deployments, we'll end up creating a system deployment which might never
complete because previously placed allocs will not be tracked. Check to see if
we have existing allocs that should belong to the new deployment and prevent a
deployment from being created in that case.
* Ensure we call `Copy` on `Deployment` to avoid state store corruption (a sketch of this pattern follows the list).
* Don't limit canary counts by `max_parallel`.
* Never create deployments for `sysbatch` jobs.
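A minimal sketch of the `Copy` pattern referenced above, with a deliberately
simplified, hypothetical `Deployment`:

```go
package main

import "fmt"

// Simplified, hypothetical Deployment; the real one carries much more state.
type Deployment struct{ Status string }

func (d *Deployment) Copy() *Deployment {
	if d == nil {
		return nil
	}
	c := *d
	return &c
}

func main() {
	// Pretend this came from the state store: it is shared, so mutating it
	// in place would corrupt state. Copy first, then mutate the copy.
	fromStore := &Deployment{Status: "running"}
	d := fromStore.Copy()
	d.Status = "successful"
	fmt.Println(fromStore.Status, d.Status) // running successful
}
```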
Ref: https://hashicorp.atlassian.net/browse/NMD-761
In the system scheduler, we need to keep track of which nodes were previously
used as "canary nodes" rather than picking canary nodes at random, in case of
previously failed canaries or changes to the number of canaries in the jobspec.
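A minimal sketch of the selection idea, with hypothetical names and a
deliberately simplified loop:

```go
package main

import "fmt"

// pickCanaryNodes reuses nodes that previously ran canaries (for example,
// from a failed round) before filling the remainder from other eligible
// nodes, so the choice is stable across evaluations.
func pickCanaryNodes(eligible []string, previous map[string]bool, want int) []string {
	picked := make([]string, 0, want)
	for _, id := range eligible { // previously used canary nodes first
		if previous[id] && len(picked) < want {
			picked = append(picked, id)
		}
	}
	for _, id := range eligible { // then fill from the remaining nodes
		if !previous[id] && len(picked) < want {
			picked = append(picked, id)
		}
	}
	return picked
}

func main() {
	eligible := []string{"node1", "node2", "node3", "node4"}
	previous := map[string]bool{"node3": true}
	fmt.Println(pickCanaryNodes(eligible, previous, 2)) // [node3 node1]
}
```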
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
This changeset adjusts the handling of allocation placement when we're
promoting a deployment, and it corrects the behavior of `isDeploymentComplete`,
which previously would never mark a promoted deployment as complete.
This changeset introduces canary deployments for system jobs.
Canaries work a little differently for system jobs than for service jobs. The
integer in the `update` block of a task group is interpreted as the percentage
of eligible nodes that this task group update should be deployed to, rounded up
to the nearest integer (e.g., for 5 eligible nodes and a canary value of 50, we
deploy to 3 nodes).
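A minimal sketch of that arithmetic, with hypothetical names:

```go
package main

import "fmt"

// canaryNodes interprets the canary value as a percentage of eligible
// nodes, rounded up to the nearest whole node (integer ceiling).
func canaryNodes(eligibleNodes, canaryPct int) int {
	return (eligibleNodes*canaryPct + 99) / 100
}

func main() {
	fmt.Println(canaryNodes(5, 50)) // 3
	fmt.Println(canaryNodes(4, 25)) // 1
}
```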
In contrast to service jobs, system job canaries are not tracked, i.e., the
scheduler doesn't need to know which allocations are canaries and which are
not, since any node can only run a single allocation of a given system job.
Canary deployments are marked for promotion, and once promoted, the scheduler
simply performs an update as usual, replacing allocations that belong to a
previous job version and leaving the new ones intact.
This is the initial implementation of deployments for the system and sysbatch
reconciler. It does not support updates or canaries at this point; it simply
provides the necessary plumbing for deployments.
In #26169 we started emitting structured logs from the reconciler. But the node
reconciler results are `AllocTuple` structs and not counts, so the information
we put in the logs ends up being pointer addresses in hex. Fix this so that
we're recording the number of allocs in each bucket instead.
Fix another misleading log line while we're here.
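A minimal sketch of the shape of the fix, using `hclog` (Nomad's logger) with
hypothetical, simplified result types:

```go
package main

import hclog "github.com/hashicorp/go-hclog"

// Hypothetical, simplified shapes for the node reconciler results.
type allocTuple struct{ name string }
type reconcileResults struct{ place, update, stop []allocTuple }

func main() {
	logger := hclog.New(&hclog.LoggerOptions{Level: hclog.Debug})
	results := reconcileResults{place: make([]allocTuple, 3)}

	// Logging len() of each bucket records useful counts; logging the
	// slices directly would render opaque pointer addresses in hex.
	logger.Debug("reconciled allocations",
		"place", len(results.place),
		"update", len(results.update),
		"stop", len(results.stop),
	)
}
```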
Ref: https://github.com/hashicorp/nomad/pull/26169
The `computeUpdate` method returns four different values, some of which are
just different shapes of the same data and are only ever used to update the
result in the caller. Move the mutation of the result into `computeUpdates` to
match the work done in #26325. Clean up the return signature so that only the
slices we need downstream are returned, and fix the incorrect docstring.
Also fix a silent bug where the `inplace` set includes the original alloc and
not the updated version. This has no functional change because all existing
callers only ever look at the length of this slice, but it will prevent future
bugs if that ever changes.
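A minimal sketch of the `inplace` fix, with hypothetical, simplified types:

```go
package main

import "fmt"

type alloc struct {
	id         string
	jobVersion int
}

// computeUpdates sketches the fix: when an alloc is updated in place, the
// returned set holds the updated copy, not the original.
func computeUpdates(existing []alloc, newVersion int) (inplace []alloc) {
	for _, a := range existing {
		updated := a
		updated.jobVersion = newVersion
		inplace = append(inplace, updated) // previously appended `a`
	}
	return inplace
}

func main() {
	inplace := computeUpdates([]alloc{{id: "a1", jobVersion: 1}}, 2)
	fmt.Println(inplace[0].jobVersion) // 2, not the stale 1
}
```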
Ref: https://github.com/hashicorp/nomad/pull/26325
Ref: https://hashicorp.atlassian.net/browse/NMD-819
Refactors the `computeGroup` code in the reconciler to make its mutations
easier to follow. Some of this work makes mutation more consistent, but more
importantly it's intended to make mutation readily _detectable_ while still
being readable (a sketch of the pointer convention follows the list). Includes:
* In the `computeCanaries` function, we mutate the dstate and the result and
then the return values are used to further mutate the result in the
caller. Move all this mutation into the function.
* In the `computeMigrations` function, we mutate the result and then the return
values are used to further mutate the result in the caller. Move all this
mutation into the function.
* In the `cancelUnneededCanaries` function, we mutate the result and then the
return values are used to further mutate the result in the caller. Move all
this mutation into the function, and annotate which `allocSet`s are mutated by
taking a pointer to the set.
* The `createRescheduleLaterEvals` function currently mutates the results and
returns updates to mutate the results in the caller. Move all this mutation
into the function to help cleanup `computeGroup`.
* Extract `computeReconnecting` method from `computeGroup`. There's some tangled
logic in `computeGroup` for determining changes to make for reconnecting
allocations. Pull this out into its own function. Annotate mutability in the
function by passing pointers to `allocSet` where needed, and mutate the result
to update counts. Rename the old `computeReconnecting` method to
`appendReconnectingUpdates` to mirror the naming of the similar logic for
disconnects.
* Extract `computeDisconnecting` method from `computeGroup`. There's some
tangled logic in `computeGroup` for determining changes to make for
disconnected allocations. Pull this out into its own function. Annotate
mutability in the function by passing pointers to `allocSet` where needed, and
mutate the result to update counts.
* The `appendUnknownDisconnectingUpdates` method used to create updates for
disconnected allocations mutates one of its `allocSet` arguments to change the
allocations that the reschedule-now set points to. Pull this update out into
the caller.
* A handful of small docstring and helper function fixes
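Here is the minimal sketch of the pointer convention, with hypothetical,
simplified types; note that Go maps mutate through values anyway, so the
pointer exists purely as a call-site annotation:

```go
package main

import "fmt"

type allocSet map[string]string // hypothetical: alloc ID -> desired state

// markStopped takes *allocSet even though Go maps are reference-like; the
// pointer annotates, at the call site, that the set is mutated.
func markStopped(mutated *allocSet, ids []string) {
	for _, id := range ids {
		(*mutated)[id] = "stop"
	}
}

// countRunning takes the set by value to signal it is read-only.
func countRunning(readOnly allocSet) int {
	n := 0
	for _, state := range readOnly {
		if state == "run" {
			n++
		}
	}
	return n
}

func main() {
	set := allocSet{"a1": "run", "a2": "run"}
	markStopped(&set, []string{"a2"})
	fmt.Println(countRunning(set)) // 1
}
```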
Ref: https://hashicorp.atlassian.net/browse/NMD-819
The reconciler contains a large set of methods and functions that operate on
`allocSet` (a map of allocation IDs to their allocs). Update these so that they
are consistently methods that are documented not to consume the `allocSet` (a
sketch of this convention follows the list below). This sets the stage for
further improvements around mutability in the reconciler.
This changeset also includes a few related refactors:
* Use the `allocSet` alias in every location it's relevant in the reconciler,
for consistency and clarity.
* Move the filter functions and related helpers in the `allocs.go` file into the
`filters.go` file.
* Make the method receiver on `allocSet` consistent everywhere, and generally
improve the docstrings on the filter functions.
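A minimal sketch of the non-consuming convention, with a hypothetical,
simplified `allocSet`:

```go
package main

import "fmt"

type allocSet map[string]bool // hypothetical: alloc ID -> isTerminal

// filterTerminal returns two new sets and leaves the receiver untouched,
// matching the "does not consume the allocSet" documentation convention.
func (a allocSet) filterTerminal() (terminal, live allocSet) {
	terminal, live = make(allocSet), make(allocSet)
	for id, isTerminal := range a {
		if isTerminal {
			terminal[id] = isTerminal
		} else {
			live[id] = isTerminal
		}
	}
	return terminal, live
}

func main() {
	set := allocSet{"a1": false, "a2": true}
	terminal, live := set.filterTerminal()
	fmt.Println(len(terminal), len(live), len(set)) // 1 1 2 (receiver unchanged)
}
```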
Ref: https://hashicorp.atlassian.net/browse/NMD-819
When a task group is removed from a jobspec, the reconciler stops all
allocations and immediately returns from `computeGroup`. We can do the same
when the group has been scaled to zero, but doing so runs into an inconsistency
in the way that server-terminal allocations are handled.
Prior to this change, server-terminal allocations fall through `computeGroup`
without being marked as `ignore`, unless they are terminal canaries, in which
case they are marked `stop` (but this is a no-op). This inconsistency causes a
_tiny_ amount of extra `Plan.Submit`/Raft traffic, but more importantly makes it
more difficult to make test assertions for `stop` vs `ignore` vs
fallthrough. Remove this inconsistency by filtering out server-terminal
allocations early in `computeGroup`.
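A minimal sketch of the early filter, with hypothetical, simplified types (the
real check lives on the allocation structs):

```go
package main

import "fmt"

type alloc struct {
	id            string
	desiredStatus string
}

// serverTerminal is a simplified stand-in for the real server-terminal
// check (a desired status of stop or evict).
func (a alloc) serverTerminal() bool {
	return a.desiredStatus == "stop" || a.desiredStatus == "evict"
}

// filterServerTerminal drops server-terminal allocs once, up front, so no
// later branch of computeGroup has to handle them inconsistently.
func filterServerTerminal(allocs []alloc) []alloc {
	var live []alloc
	for _, a := range allocs {
		if !a.serverTerminal() {
			live = append(live, a)
		}
	}
	return live
}

func main() {
	allocs := []alloc{{"a1", "run"}, {"a2", "stop"}}
	fmt.Println(len(filterServerTerminal(allocs))) // 1
}
```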
This brings the cluster reconciler's behavior closer to the node reconciler's
behavior, except that the node reconciler discards _all_ terminal allocations
because it doesn't support rescheduling.
This changeset required adjustments to two tests, but the tests themselves were
a bit of a mess:
* In https://github.com/hashicorp/nomad/pull/25726 we added a test of how
canaries were treated when running on draining nodes. But the test didn't
correctly configure the job with an `update` block, leading to misleading test
behavior. Fix the test to exercise the intended behavior and refactor for
clarity.
* While working on reconciler behaviors around stopped allocations, I found it
extremely hard to follow the intent of the disconnected client tests because
many of the fields in the table-driven test are switches for more complex
behavior or just tersely named. Attempt to make this a little more legible by
moving some branches directly into fields, renaming some fields, and
flattening out some branching.
Ref: https://hashicorp.atlassian.net/browse/NMD-819
The `DesiredUpdates` struct that we send to the Read Eval API doesn't include
information about disconnect/reconnect and rescheduling. Annotate the
`DesiredUpdates` with this data, and adjust the `eval status` command to display
only those fields that have non-zero values in order to make the output width
manageable.
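A minimal sketch of the display rule, with hypothetical, simplified fields:

```go
package main

import "fmt"

// Hypothetical, simplified subset of the annotated DesiredUpdates fields.
type desiredUpdates struct {
	Place, Stop, Disconnect, Reconnect, RescheduleNow uint64
}

// nonZeroColumns keeps only the columns that carry information, so wide
// structs don't blow out the command's table width.
func nonZeroColumns(u desiredUpdates) map[string]uint64 {
	all := map[string]uint64{
		"Place":          u.Place,
		"Stop":           u.Stop,
		"Disconnect":     u.Disconnect,
		"Reconnect":      u.Reconnect,
		"Reschedule Now": u.RescheduleNow,
	}
	columns := make(map[string]uint64)
	for name, v := range all {
		if v != 0 {
			columns[name] = v
		}
	}
	return columns
}

func main() {
	fmt.Println(nonZeroColumns(desiredUpdates{Place: 2, Reconnect: 1}))
	// map[Place:2 Reconnect:1]
}
```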
Ref: https://hashicorp.atlassian.net/browse/NMD-815
Refactor the reconciler property tests to extract functions for safety property
assertions we'll share between different job types for the same reconciler.
While working on property testing in #26172 we discovered there are scenarios
where the reconciler will produce more than the expected number of
placements. Testing of those scenarios at the whole-scheduler level shows that
this gets handled correctly downstream of the reconciler, but this makes it
harder to reason about reconciler behavior. Cap the number of placements in the
reconciler.
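A minimal sketch of the cap, with hypothetical names:

```go
package main

import "fmt"

// capPlacements trims the reconciler's placement list so it can never
// exceed the group's desired count, even if upstream sets overlap.
func capPlacements(place []string, desiredCount int) []string {
	if len(place) > desiredCount {
		return place[:desiredCount]
	}
	return place
}

func main() {
	fmt.Println(capPlacements([]string{"p1", "p2", "p3"}, 2)) // [p1 p2]
}
```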
Ref: https://github.com/hashicorp/nomad/pull/26172
While working on property testing in #26216, I discovered we had unreachable
code in the node reconciler. The `diffSystemAllocsForNode` function receives a
set of non-terminal allocations, but then has branches where it assumes the
allocations might be terminal. It's trivially provable that these allocs are
always live, as the system scheduler splits the set of known allocs into live
and terminal sets before passing them into the node reconciler.
Eliminate the unreachable code and improve the variable names to make the known
state of the allocs more clear in the reconciler code.
Ref: https://github.com/hashicorp/nomad/pull/26216
Both the cluster reconciler and node reconciler emit a debug-level log line with
their results, but these are unstructured multi-line logs that are annoying for
operators to parse. Change these to emit structured key-value pairs like we do
everywhere else.
Ref: https://hashicorp.atlassian.net/browse/NMD-818
Ref: https://go.hashi.co/rfc/nmd-212
As part of ongoing work to make the scheduler more legible and more robustly
tested, we're implementing property testing of at least the reconciler. This
changeset provides some infrastructure we'll need for generating the test cases
using `pgregory.net/rapid`, without building out any of the property assertions
yet (that'll be in upcoming PRs over the next couple of weeks).
The alloc reconciler generator produces a job, a previous version of the job, a
set of tainted nodes, and a set of existing allocations. The node reconciler
generator produces a job, a set of nodes, and allocations on those
nodes. Reconnecting allocs are not yet well-covered by these generators, and
with ~40 dimensions covered so far we may need to pull those out to their own
tests in order to get good coverage.
Note the scenarios only randomize fields of interest; fields like the job name
that don't impact the reconciler would use up available shrink cycles on failed
tests without actually reducing the scope of the scenario.
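A minimal sketch of what such a generator looks like with `pgregory.net/rapid`;
the `scenario` fields, the property, and the `reconcile` stand-in are all
hypothetical:

```go
package reconciler_test

import (
	"testing"

	"pgregory.net/rapid"
)

// scenario randomizes only the fields the reconciler cares about, so
// shrinking on a failed test stays focused on relevant dimensions.
type scenario struct {
	groupCount   int
	canaryPct    int
	taintedNodes int
}

func genScenario() *rapid.Generator[scenario] {
	return rapid.Custom(func(t *rapid.T) scenario {
		return scenario{
			groupCount:   rapid.IntRange(0, 10).Draw(t, "groupCount"),
			canaryPct:    rapid.IntRange(0, 100).Draw(t, "canaryPct"),
			taintedNodes: rapid.IntRange(0, 5).Draw(t, "taintedNodes"),
		}
	})
}

// reconcile is a trivial stand-in for the system under test.
func reconcile(s scenario) (placements int) { return s.groupCount }

// TestPlacementProperty asserts a safety property over all generated
// scenarios: we never place more allocations than the group's count.
func TestPlacementProperty(t *testing.T) {
	rapid.Check(t, func(t *rapid.T) {
		s := genScenario().Draw(t, "scenario")
		if got := reconcile(s); got > s.groupCount {
			t.Fatalf("placed %d, want at most %d", got, s.groupCount)
		}
	})
}
```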
Ref: https://hashicorp.atlassian.net/browse/NMD-814
Ref: https://github.com/flyingmutant/rapid
This changeset separates reconciler fields into their own sub-struct to make
testing easier and the code more explicit about what fields relate to which
state.
Cluster reconciler code is notoriously hard to follow because most of its
methods continuously mutate the fields of the `allocReconciler` object. Even
for top-level methods this makes the code hard to follow, and it gets really
gnarly with lower-level methods (of which there are many). This changeset
proposes a refactoring that makes the vast majority of these methods return
explicit values and avoid mutating object fields.
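A minimal before/after sketch of the proposed style, with deliberately
reduced, hypothetical types:

```go
package main

import "fmt"

type allocReconciler struct{ stop int }

// Before: the method silently mutates a field on the reconciler, so the
// data flow is invisible at the call site.
func (a *allocReconciler) computeStopsMutating() { a.stop++ }

// After: the same computation returns an explicit value for the caller
// to apply, making the flow of results readable.
func computeStops() (stop int) { return 1 }

func main() {
	r := &allocReconciler{}
	r.computeStopsMutating() // hidden write to r.stop
	r.stop += computeStops() // the write is visible here
	fmt.Println(r.stop)      // 2
}
```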