# End to End Tests

This package contains integration tests. Unlike tests alongside Nomad code,
these tests expect there to already be a functional Nomad cluster accessible
(either on localhost or via the `NOMAD_ADDR` env var).

See [`framework/doc.go`](framework/doc.go) for how to write tests.

The `NOMAD_E2E=1` environment variable must be set for these tests to run.

## Provisioning Test Infrastructure on AWS

The `terraform/` folder has provisioning code to spin up a Nomad cluster on
AWS. You'll need both Terraform and AWS credentials to setup AWS instances on
which e2e tests will run. See the
[README](https://github.com/hashicorp/nomad/blob/main/e2e/terraform/README.md)
for details. The number of servers and clients is configurable, as is the
specific build of Nomad to deploy and the configuration file for each client
and server.
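
A typical provisioning run looks roughly like the following sketch; the
variables you need to set (and the AWS credentials setup) are covered in the
linked README:

```sh
cd e2e/terraform
terraform init
terraform apply
```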

## Provisioning Local Clusters

To run tests against a local cluster, you'll need to make sure the following
environment variables are set:
* `NOMAD_ADDR` should point to one of the Nomad servers
* `CONSUL_HTTP_ADDR` should point to one of the Consul servers
* `NOMAD_E2E=1`
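
For example, a minimal sketch against agents running locally on their default
ports (adjust the addresses for your cluster):

```sh
export NOMAD_ADDR=http://localhost:4646
export CONSUL_HTTP_ADDR=http://localhost:8500
export NOMAD_E2E=1
```
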
_TODO: the scripts in `./bin` currently work only with Terraform; it would be
nice for us to have a way to deploy Nomad to Vagrant or local clusters._

## Running

After completing the provisioning step above, you can set the client
environment for `NOMAD_ADDR` and run the tests as shown below:
```sh
# from the ./e2e/terraform directory, set your client environment
# if you haven't already
$(terraform output environment)
cd ..
go test -v ./...
```
If you want to run a specific suite, you can specify the `-suite` flag as
shown below. Only the suite with a matching `Framework.TestSuite.Component`
will be run, and all others will be skipped.
```sh
go test -v -suite=Consul .
```
If you want to run a specific test, you'll need to regex-escape some of the
test's name so that the test runner doesn't skip over framework struct method
names in the full name of the tests:
```sh
go test -v . -run 'TestE2E/Consul/\*consul\.ScriptChecksE2ETest/TestGroup'
                             ^        ^              ^              ^
                             |        |              |              |
                         Component    |              |          Test func
                                      |              |
                                 Go Package       Struct
```
We're also in the process of migrating to "stdlib-style" tests that
use the standard Go `testing` package without a notion of "suite". You
can run these with `-run` regexes the same way you would any other Go
test:
```sh
go test -v . -run TestExample/TestExample_Simple
```

## I Want To...

### ...SSH Into One Of The Test Machines

You can use the Terraform output to find the IP address. The keys will be in
the `./terraform/keys/` directory.
```sh
ssh -i keys/nomad-e2e-*.pem ubuntu@${EC2_IP_ADDR}
```
Run `terraform output` for IP addresses and details.

### ...Deploy a Cluster of Mixed Nomad Versions

The `variables.tf` file describes the `nomad_version` and
`nomad_local_binary` variables that can be used for most circumstances. But if
you want to deploy mixed Nomad versions, you can provide a list of versions in
your `terraform.tfvars` file.

For example, if you want to provision 3 servers all using Nomad 0.12.1, and 2
Linux clients using 0.12.1 and 0.12.2, you can use the following variables:
```hcl
# will be used for servers
nomad_version = "0.12.1"

# will override the nomad_version for Linux clients
nomad_version_client_linux = [
  "0.12.1",
  "0.12.2"
]
```

### ...Deploy Custom Configuration Files

Set the `profile` field to `"custom"` and put the configuration files in
`./terraform/config/custom/` as described in the
[README](https://github.com/hashicorp/nomad/blob/main/e2e/terraform/README.md#Profiles).
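
A minimal sketch, assuming the profile is set through your `terraform.tfvars`
file like the other variables above:

```hcl
profile = "custom"
```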

### ...Deploy More Than 4 Linux Clients

Use the `"custom"` profile as described above.

### ...Change the Nomad Version After Provisioning

You can update the `nomad_version` variable, or simply rebuild the binary you
have at the `nomad_local_binary` path so that Terraform picks up the
changes. Then run `terraform plan`/`terraform apply` again. This will update
Nomad in place, making the minimum number of changes necessary.
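
A sketch of that loop, assuming you're deploying a locally built binary
(`make dev` is the usual build target in a Nomad checkout, but any build that
refreshes the file at the `nomad_local_binary` path works):

```sh
make dev               # rebuild the binary at the nomad_local_binary path
cd e2e/terraform
terraform plan
terraform apply        # updates Nomad in place
```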

### ...Use Vault within a Test

The infrastructure build enables a Vault KV v2 mount whose mount point matches
the value of the `CLUSTER_UNIQUE_IDENTIFIER` environment variable, which is
generated
[here](https://github.com/hashicorp/nomad/blob/687335639bc6d4d522c91d6026d9e3f149aa75dc/e2e/terraform/provision-infra/main.tf#L16).
All Nomad workloads which include a
[Vault block](https://developer.hashicorp.com/nomad/docs/job-specification/vault) will be granted
access to secrets according to the
[default policy document](./terraform/provision-infra/templates/vault-acl-jwt-policy-nomad-workloads.hcl.tpl).
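
A minimal jobspec sketch that reads from that mount; the `myapp` path and
`password` key are hypothetical, and `MOUNT` stands in for the value of
`CLUSTER_UNIQUE_IDENTIFIER` (KV v2 reads go through the `data/` prefix):

```hcl
job "vault-example" {
  group "web" {
    task "app" {
      driver = "docker"

      config {
        image   = "busybox:1"
        command = "sleep"
        args    = ["300"]
      }

      # with the workload identity flow, an empty vault block is enough
      vault {}

      template {
        # "MOUNT", "myapp", and "password" are placeholders
        data        = <<EOT
{{ with secret "MOUNT/data/myapp" }}PASSWORD={{ .Data.data.password }}{{ end }}
EOT
        destination = "secrets/app.env"
        env         = true
      }
    }
  }
}
```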