During the upgrade test we can trigger a re-render of the Vault secret due to
client restart before the allocrunner has marked the task as running, which
triggers the change mode on the template and restarts the task. This results in
a race where the alloc is still "pending" when we go to check it. We never
change the value of this secret in upgrade testing, so paper over this race
condition by setting a "noop" change mode.
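The fix is a one-line `change_mode = "noop"` in the jobspec's template block. A quick way to sanity-check that the running job actually carries it, sketched here with a placeholder job name:

```bash
# Sketch: assert every template in the job uses change_mode = "noop", so a
# re-render after a client restart can't restart the task mid-upgrade.
# "$JOB_NAME" is a placeholder for the upgrade test's Vault workload.
nomad job inspect "$JOB_NAME" |
    jq -e 'all(.Job.TaskGroups[].Tasks[].Templates[]?.ChangeMode; . == "noop")'
```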
This changeset includes several adjustments to the upgrade testing scripts to
reduce flakes and make problems more understandable:
* When a node is drained prior to the 3rd client upgrade, it's entirely
possible that the 3rd client to be upgraded is the drained node. This results in
miscounting the expected number of allocations, because many of them will be
"complete" (service/batch) or "pending" (system). Leave the system jobs running
during drains and count only the allocations running at that point as the
expected set (see the sketch after this list). Move the inline script that gets
this count into a script file for legibility.
* When the last initial workload is deployed, it's possible for it to still be
briefly "pending" when we move to the next step. Poll for a short window until
the expected count of jobs is reached.
* Make sure that any scripts that are being run right after a server or client
is coming back up can handle temporary unavailability gracefully.
* Change the debugging output of several scripts so the debug output no longer
runs into the error message (e.g., under "some allocs are not running", the
first running allocation looked like it was the missing allocation).
* Add some notes to the README about running locally with `-dev` builds and
tagging a cluster with your own name.
Ref: https://hashicorp.atlassian.net/browse/NMD-162
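Both of the counting changes above boil down to something like this sketch; the API path is real, but the variable names and retry window are illustrative rather than the exact script contents:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Count only the currently running allocations as the expected set, so that
# "complete" (service/batch) and "pending" (system) allocs left behind by the
# drain don't skew the total.
expected=$(nomad operator api '/v1/allocations' |
    jq '[.[] | select(.ClientStatus == "running")] | length')

# Poll for a short window instead of failing on a single "pending" snapshot;
# tolerate transient API unavailability while polling.
for _ in $(seq 1 12); do
    running=$(nomad operator api '/v1/allocations' |
        jq '[.[] | select(.ClientStatus == "running")] | length') || running=0
    [ "$running" -ge "$expected" ] && exit 0
    sleep 5
done

echo "expected $expected running allocs, got $running" >&2
exit 1
```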
* fix: wait for all allocs to be running before checking for their IDs after client upgrade
* style: linter fix
* fix: filter running allocs per client ID when checking for allocs after upgrade
Add an upgrade test workload that continuously writes to a Nomad
Variable. In order to run this workload, we'll need to deploy a
workload-associated ACL policy, so this extends the `run_workloads` module to
allow a "pre script" to be run before a given job is deployed. We can use
that as a model for other test workloads.
Ref: https://hashicorp.atlassian.net/browse/NET-12217
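A sketch of what such a "pre script" can look like; the job name, policy name, and variable path are hypothetical, and the `-job` flag is what associates the policy with the workload:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Register a workload-associated ACL policy before deploying the job, so the
# job's workload identity is allowed to write to the test Variable path.
nomad acl policy apply \
    -namespace default -job variable-writer \
    variable-writer-policy - <<'EOF'
namespace "default" {
  variables {
    path "test/*" {
      capabilities = ["read", "write"]
    }
  }
}
EOF
```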
Add an upgrade test workload for Consul service mesh with transparent
proxy. Note this breaks from the "countdash" demo: the dashboard application
can only verify the backend is up by making a websocket connection, which we
can't do as a health check, and the health check it exposes for that purpose
only passes once the websocket connection has been made. So replace the
dashboard with a minimal nginx reverse proxy in front of the count-api instead.
Ref: https://hashicorp.atlassian.net/browse/NET-12217
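With nginx in place, the mesh path can be asserted with an ordinary HTTP probe, roughly like the line below (the address and port are placeholders, not the test's actual values):

```bash
# Probe the nginx reverse proxy, which forwards to count-api over the mesh;
# unlike the countdash websocket, this works as a plain HTTP health check.
curl -fsS --max-time 5 "http://${NGINX_ADDR}:8080/" > /dev/null
```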
The check to read back node metadata depends on a resource that waits for the
Nomad API, but that resource doesn't wait for the metadata to be written in the
first place (and the client subsequently upgraded). Add this dependency so that
we're reading back the node metadata as the last step.
Ref: https://github.com/hashicorp/nomad-e2e/actions/runs/13690355150/job/38282457406
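The read-back check itself amounts to polling the upgraded client until the metadata written earlier is visible again; a rough sketch, with a hypothetical metadata key:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Poll the node until the metadata written before the upgrade is readable,
# since the client may still be re-registering after it comes back up.
for _ in $(seq 1 12); do
    if nomad node status -json "$NODE_ID" |
        jq -e '.Meta["upgrade_test_meta"] != null' > /dev/null; then
        exit 0
    fi
    sleep 5
done
echo "node metadata was not readable after polling" >&2
exit 1
```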
Getting the CSI test to work with AWS EFS or EBS has proven to be awkward
because we're having to deal with external APIs with their own consistency
guarantees, as well as challenges around teardown. Make the CSI test entirely
self-contained by using a userland NFS server and the rocketduck CSI plugin.
Ref: https://hashicorp.atlassian.net/browse/NET-12217
Ref: https://gitlab.com/rocketduck/csi-plugin-nfs
* func: add dependencies to avoid race conditions, and move the per-client update into the main upgrade scenario
* Update enos/enos-scenario-upgrade.hcl
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* Update enos/enos-scenario-upgrade.hcl
Co-authored-by: Tim Gross <tgross@hashicorp.com>
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
Add an upgrade test workload for CSI with the AWS EFS plugin. In order to
validate this workload, we'll need to deploy the plugin job and then register a
volume with it. So this extends the `run_workloads` module to allow for "pre
scripts" and "post scripts" to be run before and after a given job has been
deployed. We can use that as a model for other test workloads.
Ref: https://hashicorp.atlassian.net/browse/NET-12217
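And a sketch of the corresponding "post script", registering a volume against the plugin once it's running; the volume ID, plugin ID, and EFS filesystem are placeholders:

```bash
#!/usr/bin/env bash
set -euo pipefail

# After the plugin job is healthy, register a volume the test workload can
# claim. external_id points at a pre-existing EFS filesystem.
nomad volume register - <<'EOF'
id          = "efs-test-volume"
name        = "efs-test-volume"
type        = "csi"
plugin_id   = "aws-efs0"
external_id = "fs-0123456789abcdef0"

capability {
  access_mode     = "multi-node-multi-writer"
  attachment_mode = "file-system"
}
EOF
```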
* func: Add more workloads
* Update jobs.sh
* Update versions.sh
* style: format
* Update enos/modules/test_cluster_health/scripts/allocs.sh
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* docs: improve outputs descriptions
* func: change docker workloads to be redis boxes and add healthchecks
* func: register the services on consul
* style: format
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* func: add possibility of having different binaries for server and clients
* style: rename binaries modules
* func: remove the check for last configuration log, and only take one snapshot when upgrading the servers
* Update enos/modules/upgrade_servers/main.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* func: add possibility of having different binaries for server and clients
* style: rename binaries modules
* docs: update comments
* fix: correct the token input variable for fetch binaries
We're using `set -eo pipefail` everywhere in the Enos scripts, but several of the
scripts used for checking assertions didn't account for pipefail, so transient
errors could cause early exits. This meant that if a server was slightly late
to come back up, we'd hit an error and exit the whole script instead of polling
as expected.
While fixing this, I've made a number of other improvements to the shell scripts:
* I've changed the design of the polling loops so that we call a function
that returns an exit code and sets a `last_error` value, along with any global
variables required by downstream functions (see the sketch below). This makes
the loops more readable by reducing the number of global variables, and it
helped identify some places where we were exiting instead of returning into
the loop.
* Using `shellcheck -s bash`, I fixed some unused variables and undefined
variables that we'd missed because they were only used on the error paths.
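The resulting pattern looks roughly like this; the membership check is illustrative, and the real scripts assert different conditions:

```bash
#!/usr/bin/env bash
set -euo pipefail

last_error=

# Check function: returns an exit code and records the failure reason in
# last_error, rather than exiting the whole script on a transient error.
servers_ready() {
    local raw alive
    if ! raw=$(nomad operator api '/v1/agent/members' 2>&1); then
        last_error="could not query members: $raw"
        return 1
    fi
    alive=$(echo "$raw" |
        jq '[.Members[] | select(.Status == "alive")] | length') || {
        last_error="could not parse members response"
        return 1
    }
    if [ "$alive" -lt 3 ]; then
        last_error="expected 3 alive servers, found $alive"
        return 1
    fi
}

# Poll until the check passes or the window expires; only then surface
# last_error and fail.
elapsed=0
while ! servers_ready; do
    if [ "$elapsed" -ge 60 ]; then
        echo "$last_error" >&2
        exit 1
    fi
    sleep 5
    elapsed=$((elapsed + 5))
done
```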
* fix: change the value of the version used for testing to account for ent versions
* func: add more specific test for servers stability
* func: change the criteria we use to verify the cluster stability after server upgrades
* style: syntax
Add a README describing the setup required for running upgrade testing via
Enos. Also fix the authorization header of our `wget` to use the proper header
for short-lived tokens, and fix the output path variable of the artifactory step.
Co-authored-by: Juanadelacuesta <8647634+Juanadelacuesta@users.noreply.github.com>
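For reference, the corrected header shape: short-lived Artifactory identity tokens are sent as a Bearer credential rather than the legacy API-key header (the URL and token variable here are placeholders):

```bash
# Short-lived Artifactory identity tokens are passed as a Bearer token, not
# via the legacy X-JFrog-Art-Api header used for long-lived API keys.
wget --header="Authorization: Bearer ${ARTIFACTORY_TOKEN}" \
    --output-document=nomad.zip "${ARTIFACT_URL}"
```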
* func: add module to upgrade clients
* func: add polling to verify the metadata to make sure all clients are up
* style: remove unused code
* fix: give the allocations a little time to reach the expected number in the test health check, to avoid flaky tests in the future
* fix: set the upgrade version as the clients' version for the last health check
The variable definitions for the Enos upgrade scenarios have a couple of unused
variables, and some of the documentation strings are ambiguous:
* The `nomad_region` and `binary_local_path` variables are unused and can be removed.
* `nomad_local_binary` refers to the directory where the binaries will be
downloaded, not the binaries themselves. Rename it to make clear this belongs to
the artifactory fetch and not the provisioning step (which uses the
artifactory fetch outputs).
* func: add initial enos skeleton
* style: add headers
* func: change the variables input to a map of objects to simplify the workloads creation
* style: formatting
* Add tests for servers and clients
* style: separate the tests into different scripts
* style: add missing headers
* func: add tests for allocs
* style: improve output
* func: add step to copy remote upgrade version
* style: hcl formatting
* fix: remove the terraform nomad provider
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: add missing license headers
* style: hcl fmt
* style: rename variables and fix format
* func: remove the template step on the workloads module and chop the Nomad token output on the provide module
* fix: correct the jobspec path on the workloads module
* fix: add missing variable definitions on job specs for workloads
* style: formatting
* fix: Add clean token to remove extra new line added in provision
* func: add module to upgrade servers
* style: missing headers
* func: add upgrade module
* func: add install for windows as well
* func: add an intermediate module that runs the upgrade server for each server
* fix: add missing license headers
* fix: remove extra input variables and connect upgrade servers to the scenario
* fix: rename missing env variables for cluster health scripts
* func: move the cluster health test outside of the modules and into the upgrade scenario
* fix: fix the regex to ignore snap files on the gitignore file
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: remove extra input variables and connect upgrade servers to the scenario
* style: formatting
* fix: move taking and restoring snapshots out of the upgrade_single_server module to avoid possible race conditions
* fix: rename variable in health test
* fix: Add clean token to remove extra new line added in provision
* func: add an intermediate module that runs the upgrade server for each server
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* func: fix the last_log_index check and add a versions check
* func: don't use for_each when upgrading the servers; hardcode each one to ensure they are upgraded one by one
* Update enos/modules/upgrade_instance/variables.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* Update enos/modules/upgrade_instance/variables.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* Update enos/modules/upgrade_instance/variables.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* func: make the snapshot by calling every server and allowing stale data (see the sketch after this list)
* style: formatting
* fix: make the source for the upgrade binary unknown until apply
* func: use the Enos bundle to install the remote upgrade version; enos_files is not meant for dynamic files
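A sketch of the stale-snapshot approach from the bullet above; the server address list is a placeholder:

```bash
# -stale allows the snapshot to be served by a non-leader, so the loop works
# while the cluster is mid-upgrade. $SERVER_ADDRS is a placeholder
# space-separated list of server IPs; word-splitting is intentional.
for addr in ${SERVER_ADDRS}; do
    nomad operator snapshot save -stale \
        -address="http://${addr}:4646" "snapshot-${addr}.snap"
done
```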
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>