Add tests for dynamic host volumes where the claiming jobs have `volume.sticky =
true`. Includes a test for forced rescheduling and a test for node drain.
This changeset includes a new `e2e/v3`-style package for creating dynamic host
volumes, so we can reuse it across other tests.
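For reference, the shape of the sticky claim these tests make, sketched with the Go `api` package (`Sticky` is assumed here to be the `api.VolumeRequest` counterpart of `volume.sticky = true` in the jobspec; the name and source values are hypothetical):

```go
package example

import "github.com/hashicorp/nomad/api"

// stickyVolume sketches the group-level volume block the new tests exercise:
// a dynamic host volume request with sticky scheduling, so a rescheduled or
// drained allocation is placed back on a node holding the claimed volume.
func stickyVolume() *api.VolumeRequest {
	return &api.VolumeRequest{
		Name:   "data",
		Type:   "host",    // dynamic host volume
		Source: "example", // hypothetical volume name as created on the node
		Sticky: true,      // volume.sticky = true in the jobspec
	}
}
```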
At least one bug has been introduced because it's easy to miss a
`future.set()` call in `pullImageImpl()`. This change pulls `future.set()` up
into `PullImage()`, the same level where the future is created and waited on.
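A minimal sketch of the resulting shape, with a stand-in future type (names and signatures are illustrative, not the driver's actual internals):

```go
package example

// pullFuture is a minimal stand-in for the driver's internal future type.
type pullFuture struct {
	ch  chan struct{}
	id  string
	err error
}

func newPullFuture() *pullFuture { return &pullFuture{ch: make(chan struct{})} }

func (f *pullFuture) set(id string, err error) {
	f.id, f.err = id, err
	close(f.ch)
}

func (f *pullFuture) wait() *pullFuture { <-f.ch; return f }

func (f *pullFuture) results() (string, error) { return f.id, f.err }

// PullImage shows the fix: the future is created, set, and waited on at one
// level, so no return path inside the pull implementation can forget set().
func PullImage(pullImageImpl func(string) (string, error), image string) (string, error) {
	future := newPullFuture()
	go func() { future.set(pullImageImpl(image)) }()
	return future.wait().results()
}
```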
We introduce an alternative solution to the one presented in #24960, this time
based on the state store rather than on previous-next allocation tracking in the
reconciler. This new solution reduces the cognitive complexity of the scheduler
code at the cost of slightly more boilerplate code, but also opens up new
possibilities in the future, e.g., allowing users to explicitly "un-stick"
volumes with workloads still running.
The diagram below illustrates the new logic:
```
SetVolumes()                                   upsertAllocsImpl()
sets ns, job          +----------------------- checks if alloc requests
tg in the scheduler   v                        sticky vols and consults
    |        +-----------------------+         state. If there is no claim,
    |        | TaskGroupVolumeClaim: |         it creates one.
    |        | - namespace           |
    |        | - jobID               |
    |        | - tg name             |
    |        | - vol ID              |
    v        | uniquely identify vol |
hasVolumes() +------+----------------+
consults the state  |       ^
and returns true    |       |      DeleteJobTxn()
if there's a match <+       +----- removes the claim from
or if there is no                  the state
previous claim

|                   |    |                                     |
+-------------------+    +-------------------------------------+
      scheduler                        state store
```
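Sketched in Go, the claim object at the center of the diagram (Go field names are an assumption based on the labels above):

```go
// TaskGroupVolumeClaim records that a job's task group has claimed a sticky
// volume; the four fields together uniquely identify the claim.
type TaskGroupVolumeClaim struct {
	Namespace     string // namespace of the claiming job
	JobID         string // claiming job
	TaskGroupName string // claiming task group
	VolumeID      string // claimed dynamic host volume
}
```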
The variable definitions for the Enos upgrade scenarios include a couple of unused
variables, and some of the documentation strings are ambiguous:
* The `nomad_region` and `binary_local_path` variables are unused and can be removed.
* `nomad_local_binary` refers to the directory where the binaries will be
downloaded, not the binaries themselves. Rename it to make clear it belongs to
the artifactory fetch and not the provisioning step (which uses the
artifactory fetch outputs).
When a blocking query on the client hits a retryable error, we shrink the max
query time so that the retried query falls within `RPCHoldTimeout`. But when
the retry succeeds, we don't reset it to the original value.
Because the calls to `Node.GetClientAllocs` reuse the same request struct
instead of reallocating it, any retry will cause the agent to poll at a faster
frequency until the agent restarts. No other current RPC on the client has this
behavior, but we'll fix this in the `rpc` method rather than in the caller so
that any future users of the `rpc` method don't have to remember this detail.
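The fix, sketched (the real `rpc` method's signature differs; the detail that matters is saving and restoring the caller's deadline around the retry loop):

```go
func (c *Client) rpc(method string, args *structs.QueryOptions, reply any) error {
	// the retry loop may shrink MaxQueryTime so the retried query still
	// completes within RPCHoldTimeout; since callers like
	// Node.GetClientAllocs reuse the same request struct, restore the
	// caller's original value on the way out
	origMaxQueryTime := args.MaxQueryTime
	defer func() { args.MaxQueryTime = origMaxQueryTime }()

	// ... existing retry loop, unchanged ...
	return nil
}
```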
Fixes: https://github.com/hashicorp/nomad/issues/25033
* func: add initial enos skeleton
* style: add headers
* func: change the variables input to a map of objects to simplify the workloads creation
* style: formatting
* Add tests for servers and clients
* style: separate the tests into different scripts
* style: add missing headers
* func: add tests for allocs
* style: improve output
* func: add step to copy remote upgrade version
* style: hcl formatting
* fix: remove the terraform nomad provider
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: add missing license headers
* style: hcl fmt
* style: rename variables and fix format
* func: remove the template step on the workloads module and chop the nomad token output on the provision module
* fix: correct the jobspec path on the workloads module
* fix: add missing variable definitions on job specs for workloads
* style: formatting
* fix: Add clean token to remove extra new line added in provision
* func: add module to upgrade servers
* style: missing headers
* func: add upgrade module
* func: add install for windows as well
* func: add an intermediate module that runs the upgrade server for each server
* fix: add missing license headers
* fix: remove extra input variables and connect upgrade servers to the scenario
* fix: rename missing env variables for cluster health scripts
* func: move the cluster health test outside of the modules and into the upgrade scenario
* fix: fix the regex to ignore snap files on the gitignore file
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: remove extra input variables and connect upgrade servers to the scenario
* style: formatting
* fix: move taking and restoring snapshots out of the upgrade_single_server to avoid possible race conditions
* fix: rename variable in health test
* fix: Add clean token to remove extra new line added in provision
* func: add an intermediate module that runs the upgrade server for each server
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* func: fix the last_log_index check and add a versions check
* func: don't use for_each when upgrading the servers; hardcode each one to ensure they are upgraded one by one
* Update enos/modules/upgrade_instance/variables.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* Update enos/modules/upgrade_instance/variables.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* Update enos/modules/upgrade_instance/variables.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* func: make snapshot by calling every server and allowing stale data
* style: formatting
* fix: make the source for the upgrade binary unknown until apply
* func: use enos bundle to install remote upgrade version, enos_files is not meant for dynamic files
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
Internally, sizes are always in binary units; this documentation is misleading because it implies that they work in decimal units.
Without going through and replacing _every_ "MB" with "MiB", this is the best way to hint to developers that binary sizes are used.
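A worked number makes the difference concrete: 256 "MB" in Nomad is 256 MiB.

```go
const (
	MiB    = 1024 * 1024 // binary megabyte, the unit Nomad actually uses
	sizeMB = 256         // what a jobspec writes as "256 MB"
)

// 268,435,456 bytes, not the 256,000,000 a decimal reading would suggest
const sizeBytes = sizeMB * MiB
```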
* Adds Actions to job status command output
* Adds Actions to job status command output
* Status documentation updated to show actions, and formatJobActions no longer cares about pipe delimiting
* Multi-condition start/revert/edit buttons when a job isn't running
* mirage-mocked revertable jobs and acceptance tests
* Remove version-watching from job index route
The `volume delete` command doesn't allow using a prefix for the volume ID for
either CSI or dynamic host volumes. Use a prefix search and wildcard namespace
as we do for other CLI commands.
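For the CSI case, the lookup sketched with the Go `api` package (the wildcard namespace is `*`; function name and prefix value are hypothetical):

```go
package example

import (
	"fmt"

	"github.com/hashicorp/nomad/api"
)

// resolveVolumeID expands a volume ID prefix to a full ID and namespace,
// erroring unless the prefix matches exactly one volume.
func resolveVolumeID(client *api.Client, prefix string) (id, ns string, err error) {
	vols, _, err := client.CSIVolumes().List(&api.QueryOptions{
		Prefix:    prefix,
		Namespace: "*", // wildcard: match across all namespaces
	})
	if err != nil {
		return "", "", err
	}
	if len(vols) != 1 {
		return "", "", fmt.Errorf("prefix %q matched %d volumes", prefix, len(vols))
	}
	return vols[0].ID, vols[0].Namespace, nil
}
```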
Ref: https://hashicorp.atlassian.net/browse/NET-12057
If you create a volume via `volume create/register` and want to update it later,
you need to change the volume spec to add the ID that was returned. This isn't a
very nice UX, so let's add an `-id` argument that allows you to update existing
volumes that have that ID.
Ref: https://hashicorp.atlassian.net/browse/NET-12083
* Upgrade to using hashicorp/go-metrics@v0.5.4
This also requires bumping the dependencies for:
* memberlist
* serf
* raft
* raft-boltdb
* (and indirectly hashicorp/mdns due to the memberlist or serf update)
Unlike some other HashiCorp products, Nomad's root module is currently expected to be consumed by others. This means it needs to be treated more like our libraries and upgraded to hashicorp/go-metrics by utilizing its compat packages. This allows those importing the root module to control the metrics module used via build tags.
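Concretely, code keeps a single import of the compat shim rather than importing either metrics library directly (a sketch; the compat package forwards to armon/go-metrics or hashicorp/go-metrics based on the build tags the consuming module sets):

```go
package example

// the compat package exposes the shared go-metrics API; build tags in the
// importing module decide which implementation backs it
import metrics "github.com/hashicorp/go-metrics/compat"

func recordExample() {
	metrics.IncrCounter([]string{"nomad", "example", "events"}, 1)
}
```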
* quota spec: if `region_limit.storage.host_volumes` is set, do not require that `variables` also be set, and vice versa.
* subtract from quota usage on volume delete
* stub CE quota subtraction method
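A hedged sketch of the relaxed check (names are hypothetical; the point is that the two storage limits now validate independently):

```go
package example

import "fmt"

// validateStorageLimits no longer requires variables and host_volumes to be
// set together; a nil pointer means "limit not set" and is always fine.
func validateStorageLimits(variablesMB, hostVolumesMB *int) error {
	if variablesMB != nil && *variablesMB < -1 {
		return fmt.Errorf("variables limit must be >= -1")
	}
	if hostVolumesMB != nil && *hostVolumesMB < -1 {
		return fmt.Errorf("host_volumes limit must be >= -1")
	}
	return nil
}
```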
When a client restarts but can't restore a volume (e.g., the plugin is now
missing), it's removed from the node fingerprint. So we won't allow future
scheduling of the volume, but we were not updating the volume state field to
report this reasoning to operators. Make debugging easier and the state field
more meaningful by setting the value to "unavailable".
Also, remove the unused "deleted" field. We did not implement soft deletes and
aren't planning on it for Nomad 1.10.0.
Ref: https://hashicorp.atlassian.net/browse/NET-11551
When we implemented CSI, the types of the fields for access mode and attachment
mode on volume requests were defined with a prefix "CSI". This gets confusing
now that we have dynamic host volumes using the same fields. Fortunately the
original was a typedef on string, and the Go API in the `api` package just uses
strings directly, so we can change the name of the type without breaking
backwards compatibility for the msgpack wire format.
Update the names to `VolumeAccessMode` and `VolumeAttachmentMode`. Keep the CSI
and DHV specific value constant names for these fields (they aren't currently
1:1), so that we can easily differentiate in a given bit of code which values
are valid.
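The resulting shape, sketched (typedefs per the rename above; the constant values are illustrative examples, and as noted the CSI and DHV value sets aren't 1:1):

```go
// renamed from CSIVolumeAccessMode / CSIVolumeAttachmentMode; still a
// typedef on string, so the msgpack wire format is unchanged
type VolumeAccessMode string
type VolumeAttachmentMode string

const (
	// CSI-specific values keep their CSI-prefixed constant names...
	CSIVolumeAccessModeSingleNodeWriter VolumeAccessMode = "single-node-writer"
	// ...and DHV-specific values keep host-volume-prefixed names
	HostVolumeAccessModeSingleNodeWriter VolumeAccessMode = "single-node-writer"
)
```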
Ref: https://github.com/hashicorp/nomad/pull/24881#discussion_r1920702890