In #24650 we switched to using ephemeral state for CNI plugins, so that when a
host reboots and we lose all the allocations we don't end up trying to use IPs
we created in network namespaces we just destroyed. Unfortunately upgrade
testing missed that in a non-reboot scenario, the existing CNI state was being
used by plugins like the ipam plugin to hand out the "next available" IP
address. So with no state carried over, we might allocate new addresses that
conflict with existing allocations. (This can be avoided by draining the node
first.)
As a compatibility shim, copy the old CNI state directory to the new CNI state
directory during agent startup, if the new CNI state directory doesn't already
exist.
Ref: https://github.com/hashicorp/nomad/pull/24650
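For illustration, the shim could look roughly like this (the helper name and the flat-copy strategy are made up for the sketch; the real agent derives the directory paths from its configuration):

```go
package agent

import (
	"os"
	"path/filepath"
)

// migrateCNIState copies the legacy CNI state directory into the new one
// during agent startup, but only when the new directory does not yet exist.
// Simplified sketch: it copies a flat directory of files and skips subdirectories.
func migrateCNIState(oldDir, newDir string) error {
	// If the new state directory already exists, the agent has already run
	// with the new layout and there is nothing to migrate.
	if _, err := os.Stat(newDir); err == nil {
		return nil
	} else if !os.IsNotExist(err) {
		return err
	}

	// No legacy state means there is nothing to copy either.
	entries, err := os.ReadDir(oldDir)
	if os.IsNotExist(err) {
		return nil
	} else if err != nil {
		return err
	}

	if err := os.MkdirAll(newDir, 0o700); err != nil {
		return err
	}
	for _, entry := range entries {
		if entry.IsDir() {
			continue // a flat copy is enough for this sketch
		}
		data, err := os.ReadFile(filepath.Join(oldDir, entry.Name()))
		if err != nil {
			return err
		}
		if err := os.WriteFile(filepath.Join(newDir, entry.Name()), data, 0o600); err != nil {
			return err
		}
	}
	return nil
}
```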
Add a README describing the setup required for running upgrade testing via
Enos. Also fix the authorization header in our `wget` call to use the correct
header for short-lived tokens, and fix the output path variable of the
artifactory step.
Co-authored-by: Juanadelacuesta <8647634+Juanadelacuesta@users.noreply.github.com>
A return statement was missing in the sticky volume check: when we weren't able
to find a suitable volume, we did not return false. This was caught by an e2e
test.
This PR fixes the issue, and corrects and expands the unit test.
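Roughly, the bug had this shape (types and names here are stand-ins, not the actual scheduler feasibility code):

```go
package scheduler

// volumeClaim is a stand-in for the scheduler's sticky volume claim type.
type volumeClaim struct{ VolumeID string }

// hasSuitableVolumes reports whether a node can satisfy every sticky volume
// claim for the task group. Illustrative only; the point is the early return.
func hasSuitableVolumes(claims []*volumeClaim, nodeVolumes map[string]bool) bool {
	for _, claim := range claims {
		if nodeVolumes[claim.VolumeID] {
			continue
		}
		// The bug: without this return, a node missing the claimed volume
		// fell through and was still treated as feasible.
		return false
	}
	return true
}
```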
CE side of ENT PR:
task schedule: pauses are not restart "attempts"
distinguish between these two cases:
1. task dies because we "paused" it (on purpose)
   - should not count against restarts, because nothing is wrong.
2. task dies because it didn't work right
   - should count against restart attempts, so users can address application
     issues.
with this, the restart{} block is back to its normal
behavior, so its documentation applies without caveat.
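A minimal sketch of the distinction (the flag name below is hypothetical, standing in for however the task runner records a deliberate schedule pause):

```go
package taskrunner

// shouldCountRestart reports whether a task exit should consume one of the
// restart{} block's attempts. pausedBySchedule is a hypothetical stand-in for
// however the task runner marks a deliberate schedule pause.
func shouldCountRestart(pausedBySchedule bool) bool {
	if pausedBySchedule {
		// Case 1: we stopped the task on purpose, so nothing is wrong with
		// the application and no restart attempt is consumed.
		return false
	}
	// Case 2: the task died on its own; count it against restart attempts so
	// users can see and address application issues.
	return true
}
```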
* Docs SEO: task drivers and plugins; refactor virt section
* add redirects for virt driver files
* Some updates. committing rather than stashing
* fix content-check errors
* Remove docs/devices/ and redirect to plugins/devices
* Update docs/drivers descriptions
* Move USB device plugin up a level. Finish descriptions.
* Apply suggestions from Jeff's code review
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
* Apply title case suggestions from code review
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
* apply title case suggestions; fix indentation
---------
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
This dependency is only used to generate mock `Variables`. The only place the
faked values would be meaningful is in the state store and RPC handler tests,
where we always set the values directly so that we can control unblocking
behavior. Remove most of the random generation and remove the dependency.
Closes: https://github.com/hashicorp/nomad/pull/25066
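For illustration, the tests can construct the fixture with exactly the fields they assert on (field names below are stand-ins, not necessarily the real variables structs):

```go
package statetest

// testVariable is a stand-in for the real variables struct; the point is that
// the tests pin every field they assert on instead of relying on randomly
// generated mock data, so blocking-query unblocking behavior is deterministic.
type testVariable struct {
	Namespace   string
	Path        string
	ModifyIndex uint64
	Items       map[string]string
}

func newTestVariable() *testVariable {
	return &testVariable{
		Namespace:   "default",
		Path:        "nomad/jobs/example",
		ModifyIndex: 100, // a fixed index makes unblocking behavior predictable
		Items:       map[string]string{"user": "example"},
	}
}
```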
* func: add a new output that merges both windows and linux clients, but add tags to distinguish them
* fix: outputs can't reference other outputs in terraform
* Update e2e/terraform/provision-infra/compute.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* func: add module to upgrade clients
* func: add polling to verify the metadata to make sure all clients are up
* style: remove unused code
* fix: Give the allocations a little time to get to the expected number on the test health check, to avoid possible flaky tests in the future
* fix: set the upgrade version as clients version for the last health check
I merged #24869 having forgotten we don't run these tests in PR CI, so there's a compile error in the test. Fix that error and add the no-op import we use to catch this kind of thing.
Ref: https://github.com/hashicorp/nomad/pull/24869
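The "no-op import" is the usual Go trick of blank-importing a package from a file that PR CI does compile, so the imported code still has to type-check. A hedged sketch of the pattern (the file layout and import path shown are illustrative, not necessarily the exact import added here):

```go
// Package e2e keeps blank imports of packages that are only exercised by the
// nightly end-to-end suite, so an ordinary `go build ./...` in PR CI still
// type-checks them.
package e2e

import (
	// Blank import: we only want the compiler to see the package.
	_ "github.com/hashicorp/nomad/e2e/e2eutil"
)
```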
Add tests for dynamic host volumes where the claiming jobs have `volume.sticky =
true`. Includes a test for forced rescheduling and a test for node drain.
This changeset includes a new `e2e/v3`-style package for creating dynamic host
volumes, so we can reuse that across other tests.
At least one bug has been created because it's easy to miss a future.set() in
pullImageImpl(). This pulls future.set() out to PullImage(), the same level
where the future is created and wait()ed.
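A simplified sketch of the pattern (types and signatures are stand-ins for the docker driver's real code):

```go
package docker

// pullFuture is a simplified stand-in for the driver's pull future: it is
// resolved exactly once and can be waited on by any caller.
type pullFuture struct {
	done chan struct{}
	id   string
	err  error
}

func newPullFuture() *pullFuture {
	return &pullFuture{done: make(chan struct{})}
}

func (f *pullFuture) set(id string, err error) {
	f.id, f.err = id, err
	close(f.done)
}

func (f *pullFuture) wait() (string, error) {
	<-f.done
	return f.id, f.err
}

type driver struct{}

// pullImageImpl does the actual pull; it no longer has to remember to resolve
// the future on every return path.
func (d *driver) pullImageImpl(image string) (string, error) {
	return "sha256:example", nil
}

// PullImage creates, resolves, and waits on the future at a single level,
// which is the shape of the change described above.
func (d *driver) PullImage(image string) (string, error) {
	future := newPullFuture()
	go func() {
		// set() happens exactly once, here, regardless of which return path
		// pullImageImpl took.
		future.set(d.pullImageImpl(image))
	}()
	return future.wait()
}
```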
We introduce an alternative to the solution presented in #24960, one based on
the state store rather than on previous-next allocation tracking in the
reconciler. This new solution reduces the cognitive complexity of the scheduler
code at the cost of slightly more boilerplate, but it also opens up new
possibilities in the future, e.g., allowing users to explicitly "un-stick"
volumes with workloads still running.
In summary, the new logic:
* In the scheduler, SetVolumes() records the namespace, job, and task group,
  and hasVolumes() consults the state, returning true if there is a matching
  claim or no previous claim at all.
* In the state store, upsertAllocsImpl() checks whether an allocation requests
  sticky volumes and consults the state; if there is no claim, it creates a
  TaskGroupVolumeClaim, whose namespace, job ID, task group name, and volume ID
  together uniquely identify the volume.
* DeleteJobTxn() removes the claim from the state.
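Read as a Go struct, the claim would look roughly like this (a sketch inferred from the fields above, not the exact state store schema):

```go
package structs

// TaskGroupVolumeClaim records that a task group has claimed a sticky volume.
// Sketch only; the real state store object may carry additional bookkeeping
// fields such as raft indexes.
type TaskGroupVolumeClaim struct {
	Namespace     string // claim namespace
	JobID         string // claiming job
	TaskGroupName string // claiming task group
	VolumeID      string // the volume being claimed

	// Together, the four fields above uniquely identify the claimed volume.
}
```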
The variable definitions for the Enos upgrade scenarios include a couple of
unused variables, and some of the documentation strings are ambiguous:
* The `nomad_region` and `binary_local_path` variables are unused and can be removed.
* `nomad_local_binary` refers to the directory where the binaries will be
  downloaded, not the binaries themselves. Rename it to make clear that this
  belongs to the artifactory fetch and not the provisioning step (which uses
  the artifactory fetch outputs).
When a blocking query on the client hits a retryable error, we change the max
query time so that it falls within the `RPCHoldTimeout` timeout. But when the
retry succeeds we don't reset it to the original value.
Because the calls to `Node.GetClientAllocs` reuse the same request struct
instead of reallocating it, any retry will cause the agent to poll at a faster
frequency until the agent restarts. No other current RPC on the client has this
behavior, but we'll fix this in the `rpc` method rather than in the caller so
that any future users of the `rpc` method don't have to remember this detail.
Fixes: https://github.com/hashicorp/nomad/issues/25033
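A sketch of the shape of the fix inside the client's `rpc` method (the types here are stand-ins and the real method does much more):

```go
package client

import "time"

// Stand-in types: the real client and request structs are much richer.
type queryOptions struct{ MaxQueryTime time.Duration }

type rpcClient struct{ rpcHoldTimeout time.Duration }

func (c *rpcClient) call(method string, req *queryOptions, resp any) error { return nil }

func isRetryable(err error) bool { return false }

// rpc saves the caller's MaxQueryTime up front and restores it when the
// method returns, so a retry that shrinks it to fit within RPCHoldTimeout
// can't permanently speed up callers (like Node.GetClientAllocs) that reuse
// the same request struct.
func (c *rpcClient) rpc(method string, req *queryOptions, resp any) error {
	origMaxQueryTime := req.MaxQueryTime
	defer func() {
		req.MaxQueryTime = origMaxQueryTime
	}()

	for {
		err := c.call(method, req, resp)
		if err == nil || !isRetryable(err) {
			return err
		}
		// Shrink the blocking time so the retry fits inside RPCHoldTimeout.
		if req.MaxQueryTime > c.rpcHoldTimeout {
			req.MaxQueryTime = c.rpcHoldTimeout
		}
	}
}
```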
* func: add initial enos skeleton
* style: add headers
* func: change the variables input to a map of objects to simplify the workloads creation
* style: formatting
* Add tests for servers and clients
* style: separate the tests into different scripts
* style: add missing headers
* func: add tests for allocs
* style: improve output
* func: add step to copy remote upgrade version
* style: hcl formatting
* fix: remove the terraform nomad provider
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: add missing license headers
* style: hcl fmt
* style: rename variables and fix format
* func: remove the template step on the workloads module and chop the nomad token output on the provision module
* fix: correct the jobspec path on the workloads module
* fix: add missing variable definitions on job specs for workloads
* style: formatting
* fix: Add clean token to remove extra new line added in provision
* func: add module to upgrade servers
* style: missing headers
* func: add upgrade module
* func: add install for windows as well
* func: add an intermediate module that runs the upgrade server for each server
* fix: add missing license headers
* fix: remove extra input variables and connect upgrade servers to the scenario
* fix: rename missing env variables for cluster health scripts
* func: move the cluster health test outside of the modules and into the upgrade scenario
* fix: fix the regex to ignore snap files on the gitignore file
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: remove extra input variables and connect upgrade servers to the scenario
* style: formatting
* fix: move taking and restoring snapshots out of the upgrade_single_server to avoid possible race conditions
* fix: rename variable in health test
* fix: Add clean token to remove extra new line added in provision
* func: add an intermediate module that runs the upgrade server for each server
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* func: fix the last_log_index check and add a versions check
* func: don't use for_each when upgrading the servers; hardcode each one to ensure they are upgraded one by one
* Update enos/modules/upgrade_instance/variables.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* Update enos/modules/upgrade_instance/variables.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* Update enos/modules/upgrade_instance/variables.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* func: make snapshot by calling every server and allowing stale data
* style: formatting
* fix: make the source for the upgrade binary unknown until apply
* func: use enos bundle to install remote upgrade version, enos_files is not meant for dynamic files
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
Internally, sizes are always in binary units; this documentation is misleading and implies that they work in decimal units.
Short of going through and replacing _every_ "MB" with "MiB", this is the best way to hint to developers that binary sizes are used.
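For example (a small illustration of the convention, not a specific Nomad identifier):

```go
package nomad

// Sizes documented as "MB" are really binary units: a value of 256 means
// 256 MiB, i.e. 256 << 20 bytes, not 256 * 1000 * 1000.
const taskMemoryMB = 256

const taskMemoryBytes = taskMemoryMB << 20 // 268,435,456 bytes
```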
* Adds Actions to job status command output
* Adds Actions to job status command output
* Status documentation updated to show actions and formatJobActions no longer cares about pipe delineation
* Multi-condition start/revert/edit buttons when a job isn't running
* mirage-mocked revertable jobs and acceptance tests
* Remove version-watching from job index route
The `volume delete` command doesn't allow using a prefix for the volume ID for
either CSI or dynamic host volumes. Use a prefix search and wildcard namespace
as we do for other CLI commands.
Ref: https://hashicorp.atlassian.net/browse/NET-12057
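For the CSI half, the prefix-resolution pattern looks roughly like this, assuming the standard `api` client calls (the real command also covers dynamic host volumes and exact matches):

```go
package command

import (
	"fmt"

	"github.com/hashicorp/nomad/api"
)

// resolveVolumeID expands a volume ID prefix into a single full ID, searching
// across all namespaces the way other CLI commands do. Sketch only.
func resolveVolumeID(client *api.Client, prefix string) (string, error) {
	vols, _, err := client.CSIVolumes().List(&api.QueryOptions{
		Prefix:    prefix,
		Namespace: api.AllNamespacesNamespace, // "*"
	})
	if err != nil {
		return "", err
	}
	switch len(vols) {
	case 0:
		return "", fmt.Errorf("no volumes with prefix %q", prefix)
	case 1:
		return vols[0].ID, nil
	default:
		return "", fmt.Errorf("prefix %q matched more than one volume", prefix)
	}
}
```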