Nomad 1.10.0 removes the legacy Vault token-based workflow, which means
the legacy e2e compatibility tests will no longer work.
The Nomad e2e cluster was using the legacy Vault token-based workflow
for its initial cluster build. This change migrates it to the workload
identity flow, which uses Vault authentication methods, roles, and
policies.
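For reference, the Vault side of that flow looks roughly like the Terraform
below. This is a hedged sketch rather than the exact e2e configuration: the
mount path, role name, policy name, audience, and JWKS URL are illustrative.

```hcl
# Sketch only: names, paths, audience, and JWKS URL are placeholders.
resource "vault_jwt_auth_backend" "nomad" {
  path         = "jwt-nomad"
  jwks_url     = "https://nomad.example.internal:4646/.well-known/jwks.json"
  default_role = "nomad-workloads"
}

resource "vault_jwt_auth_backend_role" "nomad_workloads" {
  backend         = vault_jwt_auth_backend.nomad.path
  role_name       = "nomad-workloads"
  role_type       = "jwt"
  bound_audiences = ["vault.io"]
  user_claim      = "nomad_job_id"
  token_type      = "service"
  token_policies  = [vault_policy.nomad_workloads.name]
  token_period    = 1800
}

resource "vault_policy" "nomad_workloads" {
  name   = "nomad-workloads"
  policy = file("${path.module}/nomad-workloads-policy.hcl")
}
```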
The Nomad server network has been modified to allow traffic from the
HCP Vault HVN, a private network peered into our AWS account. This is
required so that Vault can pull JWKS information from the Nomad API
without going over the public internet.
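The networking change amounts to an ingress rule along these lines (a sketch;
`var.hvn_cidr` and the security group reference are assumptions, not the real
resource names):

```hcl
# Allow the peered HCP Vault HVN CIDR to reach the Nomad HTTP API (4646) so
# Vault can fetch the JWKS document without traversing the public internet.
resource "aws_security_group_rule" "nomad_api_from_hvn" {
  type              = "ingress"
  protocol          = "tcp"
  from_port         = 4646
  to_port           = 4646
  cidr_blocks       = [var.hvn_cidr]
  security_group_id = aws_security_group.nomad_server.id
}
```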
The cluster build now also configures a Vault KV v2 mount at a unique
identifier for the e2e cluster, so that Nomad workloads and tests can
use it if required.
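The mount is roughly the following; how the unique cluster identifier is
threaded into the path here is an assumption:

```hcl
# Per-cluster KV v2 mount; the local value is a placeholder for the unique
# e2e cluster identifier.
resource "vault_mount" "e2e_kv" {
  path    = "${local.cluster_unique_identifier}/kv"
  type    = "kv"
  options = { version = "2" }
}
```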
The vaultsecrets suite has been updated to accommodate the new changes
and extended to test the default workload identity flow for allocations
that use Vault for secrets.
The nightly E2E run only builds a new AMI when required by changes to the
build. The AMI is tagged with the SHA of the commit that forced that build,
which may not be the commit that's spawning a particular test run. So we have a
resource in the `provision-infra` module that finds that SHA.
But when we run upgrade testing via Enos, we're running the E2E Terraform
configuration from outside the `e2e/terraform` folder. So the script that
resource runs will fail and prevent us from getting the AMI. Fix the script so
it can be run from any folder.
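The fix boils down to anchoring the script's path to the module rather than
the working directory, something like the sketch below (the script name and
output key are hypothetical):

```hcl
# Resolve the script relative to the module so the lookup works no matter
# where Terraform (or Enos) is invoked from.
data "external" "ami_sha" {
  program = ["${path.module}/scripts/find-ami-sha.sh"]
}

locals {
  ami_commit_sha = data.external.ami_sha.result["sha"]
}
```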
We also have duplicate resources for the "ubuntu jammy" AMI, but this is because
the Enos matrix might (in the near future) test with ARM64. For now, we'll pin
the Consul server to AMD64. Rename the resource appropriately to make the source
of the duplicate obvious.
* func: remove the lists used to override nomad_local_binary for servers and clients
* docs: add a note to the terraform e2e readme
* fix: remove the extra 'windows' from the aws_ami filter
* style: hcl fmt
CE side of ENT PR:
task schedule: pauses are not restart "attempts"
distinguish between these two cases:
1. task dies because we "paused" it (on purpose)
- should not count against restarts,
because nothing is wrong.
2. task dies because it didn't work right
- should count against restart attempts,
so users can address application issues.
with this, the restart{} block is back to its normal
behavior, so its documentation applies without caveat.
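For reference, a plain restart{} block like the one below (these are just the
documented defaults for service jobs) now behaves exactly as documented, with
pauses no longer consuming attempts:

```hcl
restart {
  attempts = 2    # only "real" failures count against this budget
  interval = "30m"
  delay    = "15s"
  mode     = "fail"
}
```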
* func: add a new output that merges both windows and linux clients, with tags to distinguish them
* fix: outputs can't reference other outputs in terraform
* Update e2e/terraform/provision-infra/compute.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
I merged #24869 having forgotten we don't run these tests in PR CI, so there's a compile error in the test. Fix that error and add the no-op import we use to catch this kind of thing.
Ref: https://github.com/hashicorp/nomad/pull/24869
Add tests for dynamic host volumes where the claiming jobs have `volume.sticky =
true`. Includes a test for forced rescheduling and a test for node drain.
This changeset includes a new `e2e/v3`-style package for creating dynamic host
volumes, so we can reuse that across other tests.
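Roughly the claim shape these tests exercise (a sketch; the group, volume
name, and source are placeholders):

```hcl
group "claimer" {
  volume "data" {
    type   = "host"
    source = "sticky-volume"
    sticky = true # reschedules should land where this volume already exists
  }

  task "mounter" {
    # driver and config elided

    volume_mount {
      volume      = "data"
      destination = "/var/data"
    }
  }
}
```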
* func: add initial enos skeleton
* style: add headers
* func: change the variables input to a map of objects to simplify the workloads creation
* style: formatting
* Add tests for servers and clients
* style: separate the tests into different scripts
* style: add missing headers
* func: add tests for allocs
* style: improve output
* func: add step to copy remote upgrade version
* style: hcl formatting
* fix: remove the terraform nomad provider
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: Add clean token to remove extra new line added in provision
* fix: add missing license headers
* style: hcl fmt
* style: rename variables and fix format
* func: remove the template step on the workloads module and chop the nomad token output on the provision module
* fix: correct the jobspec path on the workloads module
* fix: add missing variable definitions on job specs for workloads
* style: formatting
* fix: rename variable in health test
* func: make windows arch dependent
* func: unify keys and make them cluster grouped
* Update README.md
* Update e2e/terraform/provision-infra/provision-nomad/variables.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* Update .gitignore
* style: add an output with the cluster identifier
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
The volume_mounts test is flaky due to slow starts from the exec-driver and some
incorrect wait code. Refactor the volume_mounts test to use the `e2e/v3` package
helpers, and use these to give it enough time to start the exec tasks.
The nightly runs for E2E have been failing the recently added dynamic host
volumes tests for a number of reasons:
* Adding timing logs to the tests shows that it can take over 5s (the original
test timeout) for the client fingerprint to show up on the client. This seems
like a lot, but it appears to be host-dependent because it's much faster locally.
Extend the timeout and leave in the timing logs so that we can keep an eye on
this problem in the future.
* The register test doesn't wait for the dispatched job to complete, and the
dispatched job was actually broken when TLS was in use because we weren't using
the Task API socket. Fix the jobspec for the dispatched job and add waiting
for the dispatched allocation to be marked complete before checking for the
volume on the server.
I've also changed both of the mounter jobs to batch workloads, so that we don't have
to wait 10s for the deployment to complete.
In #24694 we did a major refactoring of the E2E Terraform configuration. After
deploying a cluster this morning, I noticed a few moved/removed files were not
reflected in the .gitignore files. This changeset updates the .gitignore to have
no unstaged files after applying.
* func: move infra provisioning to a module and remove providers
* func: update paths
* func: update more paths
* func: update path inside bootstrap script
* style: remove debug prints on bootstrap scripts
* Delete e2e/terraform/csi/input/volume-efs.hcl
* fix: update keys path to use module path instead of root
* fix: add missing headers
* fix: update keys directory inside provision-nomad
* style: format hcl files
* Update compute.tf
* Update e2e/terraform/main.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* Update e2e/terraform/provision-infra/compute.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* fix: update more paths
* fix: fmt hcl files
* func: final paths revision for running e2e locally
* fix: make path of certs relative to module for the bootstrap
* func: final paths revision for running e2e locally
* Update network.tf
* fix: fix typo and add success message
* fix: remove the test name from the token to avoid long names and use the name for the vol to avoid collisions
* func: unify the uploads folder
* func: make the uploads file one per cluster
* func: Add outputs with all data necessary to connect to the cluster
* fix: make nomad token a sensitive output
* Update bootstrap-nomad.sh
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
Initial end-to-end tests for dynamic host volumes. This includes tests for two
workflows:
* One where a dynamic host volume is created by a plugin and then mounted by a job.
* Another where a dynamic host volume is created out-of-band and registered by a
job, then mounted by another job.
This changeset also moves the existing `volumes` E2E test package to the
better-named `volume_mounts`.
Ref: https://hashicorp.atlassian.net/browse/NET-11551
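For the plugin-backed workflow, the volume specification handed to
`nomad volume create` looks roughly like this (the name and plugin_id are
placeholders; the register workflow uses a similar spec pointing at a path
created out-of-band):

```hcl
name      = "dhv-example"
type      = "host"
plugin_id = "mkdir"

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}
```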
In #16872 we added support for unix domain sockets, but this required mutating
the `Config` when parsing the address so as to remove the port number. In #23785
we fixed a bug where if the configuration was used across multiple clients that
mutation would happen multiple times and the address would be incorrectly
parsed.
When making `alloc log`, `alloc fs`, or `alloc exec` calls where we have
line-of-sight to the client, we attempt to make a HTTP API call directly to the
client node. So we create a new API client from the same configuration and then
set the address. But in this case we copy the private `url` field and that
causes the URL parsing to be skipped for the new client.
This results in the region always being set to the string literal
`"global"` (because of mTLS handling code introduced all the way back in
4d3b75d867), unless the user has set the region specifically. This fails with
an error "no path to region" when the cluster isn't non-global and requests are
sent to a non-leader.
Arguably the "right" way of fixing this would be for `ClientConfig` not to
change the API client's region to `"global"` in the first place, but as this is
a public API and extremely longstanding behavior, it could potentially be a
breaking change for some downstream consumers. Instead, we'll avoid copying the
private `url` field so that the new address is re-parsed.
Fixes: https://github.com/hashicorp/nomad/issues/24635
Fixes: https://github.com/hashicorp/nomad/issues/24609
Ref: https://github.com/hashicorp/nomad/pull/16872
Ref: https://github.com/hashicorp/nomad/pull/23785
Ref: 4d3b75d867
* func: make paths relative
* func: make paths relative to the module inside the e2e terraform folder
* fix: add license files to gitignore
* func: move /etc and update all paths
* Uncomment forgotten code
* fix: update the path to the tls certificates to be local to the instance
Some plugins emit multiple topology segment entries for the same segment (ex. newer versions of AWS EBS) to accommodate convention changes in k8s. Check that segments are a superset instead of exactly equal to the plugin's topology segments.
* func: remove the scaling validation for system jobs and don't canonicalize to 1
* test: update test to validate for 0 and improve error message
* func: remove the canonicalization to 1 from system jobs
* docs: add changelog
* func: add test for scaling system jobs
* temp: add logging to debug test
* fix: clean up after test is done
* fix: scaled down jobs will still have the stop allocation, update test to account for it
* Update the e2e test to account for system jobs having an alloc per node
* fix: filter to only count ready nodes on the node count
* fix: remove the datacenter constraint from the system job definition
* fix: compare alloc IDs to avoid flaky tests when verifying no alloc was stopped
* fix: remove duplicated code
In #23966 we introduced the official Docker client and did not notice that,
in contrast to our previous third-party client, the official SDK's
PullOptions object expects a base64-encoded JSON document containing the
username and password, instead of a plain username/password pair.
Installing Vault and Consul from releases.hashicorp.com via `hc-install` has
been failing intermittently. Update the `hc-install` binaries to be current and
add one retry to downloads for our compat tests so that we can get builds more
reliably green while the underlying issue is being debugged.
In #24095 we made a fix for non-streaming exec into Docker tasks for script
checks and `change_mode = "script"`, but didn't complete E2E testing. We need to
use `ContainerExecAttach` in the new API in order to get stdout/stderr from
tasklets, but the previous `ContainerExecStart` call will prevent this from
running successfully with an error that the exec has already run.
* Ref: [NET-11202 (comment)](https://hashicorp.atlassian.net/browse/NET-11202?focusedCommentId=551618)
* This has shipped in Nomad 1.9.0-beta.1 but not production yet.
* This should fix the remaining issues in nightly E2E for Docker.
When we start the Consul agent in the `consulcompat` test package, we check that
the version matches the version we expect. But Consul agents may omit non-core
parts of the version string (ex. `1.20.0-rc1` displays `1.20.0`). Compare only
the core portions of the version string.
* build: update golangci-lint to 1.60.1
* ci: update golangci-lint to v1.60.1
Helps with go1.23 compatibility. Introduces some breaking changes / newly
enforced linter patterns so those are fixed as well.
Although we encourage users to use Vault roles, sometimes they're going to want
to assign policies based on entity and pre-create entities and aliases based on
claims. This lets them use a single default role (or at least a small number of
them) with a templated policy, while still having an escape hatch from that.
When defining Vault entities the `user_claim` must be unique. When writing Vault
binding rules for use with Nomad workload identities the binding rule won't be
able to create a 1:1 mapping because the selector language allows accessing only
a single field. The `nomad_job_id` claim isn't sufficient to uniquely identify a
job because of namespaces. It's possible to create a JWT auth role with
`bound_claims` to avoid this becoming a security problem, but this doesn't allow
for correct accounting of user claims.
Add support for an `extra_claims` block on the server's `default_identity`
blocks for Vault. This allows a cluster administrator to add a custom claim on
all allocations. The values for these claims are interpolatable with a limited
subset of fields, similar to how we interpolate the task environment.
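A hypothetical shape for the configuration, to make the idea concrete; the
claim name and the exact set of interpolatable fields here are assumptions,
not the authoritative syntax:

```hcl
vault {
  default_identity {
    aud = ["vault.io"]
    ttl = "1h"

    extra_claims {
      # interpolated per allocation, e.g. to give Vault a unique user_claim
      nomad_workload_id = "${job.region}:${job.namespace}:${job.id}"
    }
  }
}
```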
Fixes: https://github.com/hashicorp/nomad/issues/23510
Ref: https://hashicorp.atlassian.net/browse/NET-10372
Ref: https://hashicorp.atlassian.net/browse/NET-10387