Some time ago the Windows host we were using as a Nomad client agent test target
started failing to allow ssh connections. The underlying problem appears to be
with sysprep, but I wasn't able to debug the exact cause, as it's not an area
where I have much expertise.
Swap out the deprecated Windows 2016 host for a Windows 2022 host. This uses a
base image provided by Amazon, plus a userdata script to bootstrap ssh and
create the target directories that Terraform uploads files to. The more modern
Windows also lets us drop some of the extra PowerShell scripts we were using.
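A minimal sketch of that userdata approach; the AMI filter, instance type, and target directory here are illustrative, not the exact values in the e2e stack:

```hcl
data "aws_ami" "windows_2022" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["Windows_Server-2022-English-Full-Base-*"]
  }
}

resource "aws_instance" "windows_client" {
  ami           = data.aws_ami.windows_2022.id
  instance_type = "t3.medium"

  # EC2 runs the <powershell> block on first boot: enable the built-in
  # OpenSSH server and pre-create a directory for Terraform file uploads.
  user_data = <<-EOT
    <powershell>
    Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
    Set-Service -Name sshd -StartupType Automatic
    Start-Service sshd
    New-Item -ItemType Directory -Force -Path C:\opt | Out-Null
    </powershell>
  EOT
}
```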
Fixes: https://hashicorp.atlassian.net/browse/NMD-151
Fixes: https://github.com/hashicorp/nomad-e2e/issues/125
* func: remove the lists to override the nomad_local_binary for servers and clients
* docs: add a note to the terraform e2e readme
* fix: remove the extra 'windows' from the aws_ami filter
* style: hcl fmt
* func: make windows arch dependent
* func: unify keys and make them cluster grouped
* Update README.md
* Update e2e/terraform/provision-infra/provision-nomad/variables.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* Update .gitignore
* style: add an output with the cluster identifier
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* func: move infra provisioning to a module and remove providers
* func: update paths
* func: update more paths
* func: update path inside bootstrap script
* style: remove debug prints on bootstrap scripts
* Delete e2e/terraform/csi/input/volume-efs.hcl
* fix: update keys path to use module path instead of root
* fix: add missing headers
* fix: update keys directory inside provision-nomad
* style: format hcl files
* Update compute.tf
* Update e2e/terraform/main.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* Update e2e/terraform/provision-infra/compute.tf
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* fix: update more paths
* fix: fmt hcl files
* func: final paths revision for running e2e locally
* fix: make path of certs relative to module for the bootstrap
* Update network.tf
* fix: fix typo and add success message
* fix: remove the test name from token to avoid long names and use name for vol to avoid collisions
* func: unify the uploads folder
* func: make the uploads file one per cluster
* func: Add outputs with all data necessary to connect to the cluster
* fix: make nomad token a sensitive output
* Update bootstrap-nomad.sh
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
Our `consulcompat` tests exercise both the Workload Identity and legacy Consul
token workflows, but they are limited to running single-node tests. The E2E
cluster is network-isolated, so using our HCP Consul cluster runs into a
problem validating WI tokens: Consul can't reach the JWKS endpoint. In real
production environments you'd solve this with a CNAME on a real domain name
pointing to a public IP that fronts a proxy, but that's logistically
impractical for our ephemeral nightly cluster.
Migrate from HCP Consul to a single-node Consul cluster on AWS EC2 alongside our
Nomad cluster. Bootstrap TLS and ACLs in Terraform and ensure all nodes can
reach each other. This will allow us to update our Consul tests so they can use
Workload Identity, in a separate PR.
Ref: #19698
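A minimal sketch of bootstrapping the TLS and ACL material in Terraform, assuming the hashicorp/tls and hashicorp/random providers; resource names are illustrative:

```hcl
# Self-signed CA for the Consul cluster, generated entirely in Terraform
# instead of by shell scripts on the hosts.
resource "tls_private_key" "consul_ca" {
  algorithm   = "ECDSA"
  ecdsa_curve = "P384"
}

resource "tls_self_signed_cert" "consul_ca" {
  private_key_pem       = tls_private_key.consul_ca.private_key_pem
  is_ca_certificate     = true
  validity_period_hours = 720
  allowed_uses          = ["cert_signing", "digital_signature", "key_encipherment"]

  subject {
    common_name = "Consul CA"
  }
}

# Pre-generate the initial ACL management token so both the Consul server
# config and the Nomad agents can reference it from Terraform.
resource "random_uuid" "consul_initial_management_token" {}
```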
While working on infrastructure for testing the UI in E2E, we needed
to upgrade the certificate provider. Performing a provider upgrade via
`terraform init -upgrade` brought in updates for the file and AWS
providers as well. These updates include deprecating the use of
`sensitive_content` fields, removing CA algorithm parameters that can
be inferred from keys, and removing the requirement to manually
specify AWS assume role parameters in the provider config if they're
available in the calling environment's AWS config file (as they are
via doormat or our E2E environment).
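For illustration, the shape of the first two changes; resource names here are hypothetical, and the referenced private keys are assumed to be defined elsewhere in the config:

```hcl
# Before: local_file with the now-deprecated sensitive_content argument.
# resource "local_file" "ca_key" {
#   sensitive_content = tls_private_key.ca.private_key_pem
#   filename          = "${path.module}/keys/ca.key"
# }

# After: the dedicated local_sensitive_file resource replaces it.
resource "local_sensitive_file" "ca_key" {
  content  = tls_private_key.ca.private_key_pem
  filename = "${path.module}/keys/ca.key"
}

# After (hashicorp/tls >= 4.0): no key_algorithm argument; the algorithm
# is inferred from the private key itself.
resource "tls_cert_request" "server" {
  private_key_pem = tls_private_key.server.private_key_pem

  subject {
    common_name = "server.global.nomad"
  }
}
```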
Use HCP Consul and HCP Vault for the Consul and Vault clusters used in E2E testing. This has the following benefits:
* Without the need to support mTLS bootstrapping for Consul and Vault, we can simplify the mTLS configuration by leaning on Terraform instead of janky bash shell scripting.
* Vault bootstrapping is no longer required, so we can eliminate even more janky shell scripting.
* Our E2E exercises HCP, which is important to us as an organization.
* With the reduction in configurability, we can simplify the Terraform configuration and drop the complicated `provision.sh`/`provision.ps1` scripts we were using previously. We can template Nomad configuration files and upload them with the `file` provisioner (see the sketch after this list).
* Packer builds for Linux and Windows become much simpler.
tl;dr way less janky shell scripting!
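A minimal sketch of that template-and-upload pattern, assuming an `aws_instance.client` resource, an SSH key variable, and a template file whose name may differ in the repo:

```hcl
resource "null_resource" "client_config" {
  connection {
    type        = "ssh"
    user        = "ubuntu"
    host        = aws_instance.client.public_ip
    private_key = file(var.ssh_private_key_path)
  }

  # Render the Nomad client config from a template and upload it directly,
  # instead of generating it with a provisioning shell script on the host.
  provisioner "file" {
    content = templatefile("${path.module}/etc/nomad.d/client.hcl.tpl", {
      datacenter = var.datacenter
    })
    destination = "/tmp/client.hcl"
  }
}
```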
This allows us to spin up e2e clusters with mTLS configured for all HashiCorp services, i.e. Nomad, Consul, and Vault. Used it for testing #11089.
mTLS is disabled by default. I have not updated the Windows provisioning scripts yet, and Windows has also lacked ACL support from before; I intend to follow up on both in another round.
Ease spinning up a cluster where binaries are fetched from arbitrary
URLs. These could be CircleCI `build-binaries` job artifacts, or
presigned S3 URLs.
Co-authored-by: Tim Gross <tgross@hashicorp.com>
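A sketch of the variable this adds; the exact name in the repo may differ:

```hcl
variable "nomad_url" {
  type        = string
  default     = ""
  description = "URL to fetch the Nomad binary from, e.g. a CircleCI build-binaries artifact or a presigned S3 URL. Leave empty to install a released version."
}
```

Usage would then look like `terraform apply -var="nomad_url=https://..."` with the URL of the build artifact.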
We intend to expand the nightly E2E test to cover multiple distros and
platforms. Change the naming structure for "Linux client" to the more precise
"Ubuntu Bionic", and "Windows" to "Windows 2016" to make it easier to add new
targets without additional refactoring.
For everyday developer use, we don't need volumes for testing CSI. Providing an
opt-in flag speeds up deploying dev clusters and slightly reduces infra costs.
Skip the CSI tests if the volume specs are missing.
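A sketch of the opt-in gate, assuming an EFS file system backs the CSI volume specs; names are illustrative:

```hcl
variable "volumes" {
  type        = bool
  default     = false
  description = "Provision volumes for CSI testing; off by default for dev clusters."
}

# Only created when -var="volumes=true" is passed.
resource "aws_efs_file_system" "csi" {
  count = var.volumes ? 1 : 0
}
```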
Provisions Vault with the policies described in the Nomad Vault integration
guide, and drops a configuration file for Nomad's Vault integration
with its token. The Vault root token is exposed to the E2E runner so that
tests can write additional policies to Vault.
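A minimal sketch with the hashicorp/vault provider, assuming the policy document from the integration guide is checked in alongside the config; names are illustrative:

```hcl
resource "vault_policy" "nomad_server" {
  name   = "nomad-server"
  policy = file("${path.module}/vault/nomad-server-policy.hcl")
}

# Periodic token for the Nomad servers' vault stanza, written into the
# templated Nomad configuration file.
resource "vault_token" "nomad" {
  policies  = [vault_policy.nomad_server.name]
  period    = "72h"
  no_parent = true
}
```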
Adds a `nomad_acls` flag to our Terraform stack that bootstraps Nomad ACLs via
a `local-exec` provisioner. There's no way to set the `NOMAD_TOKEN` in the Nomad
TF provider if we're bootstrapping in the same Terraform stack, so instead of
using `resource.nomad_acl_token`, we also bootstrap a wide-open anonymous
policy. The resulting management token is exported as an environment variable
with `$(terraform output environment)`, and tests that want stricter ACLs will
be able to write them using that token.
This should also provide a basis to do similar work with Consul ACLs in the
future.
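A minimal sketch of the bootstrap wiring, assuming a server instance resource; file names are illustrative:

```hcl
resource "null_resource" "nomad_acl_bootstrap" {
  count = var.nomad_acls ? 1 : 0

  # Bootstrap against the first server and keep the management token on
  # disk so it can be surfaced via `terraform output environment`.
  provisioner "local-exec" {
    command = "nomad acl bootstrap -json > ${path.module}/nomad_bootstrap.json"
    environment = {
      NOMAD_ADDR = "http://${aws_instance.server[0].public_ip}:4646"
    }
  }
}
```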
Have Terraform run the target-specific `provision.sh`/`provision.ps1` script
rather than the test runner code, which needs to be customized for each
distro. Use Terraform's detection of variable value changes so that we can
re-run provisioning, re-installing Nomad only on the specific hosts that
need it changed.
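A sketch of that change-detection wiring, with illustrative names and a hypothetical script flag; the `triggers` map ties re-provisioning to the variables it depends on, so a changed `nomad_version` re-runs the script without recreating the instance:

```hcl
resource "null_resource" "provision_client" {
  # Re-run provisioning whenever one of these values changes.
  triggers = {
    nomad_version = var.nomad_version
  }

  connection {
    type        = "ssh"
    user        = "ubuntu"
    host        = aws_instance.client.public_ip
    private_key = file(var.ssh_private_key_path)
  }

  provisioner "remote-exec" {
    inline = ["/opt/provision.sh --nomad_version ${var.nomad_version}"]
  }
}
```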
Allow the configuration "profile" (a well-known directory) to be set by a
Terraform variable. The default configurations are installed at Packer
build time and symlinked into the live configuration directory by the
provision script. Detect changes in file contents so that we only upload
custom configuration files that have changed between Terraform runs.
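A sketch of the content-based detection, assuming custom config files live under a profile directory (paths illustrative); `filemd5` makes the trigger change only when a file's contents do:

```hcl
resource "null_resource" "custom_config" {
  for_each = fileset("${path.module}/config/custom", "*.hcl")

  # Re-upload a file only when its checksum changes between runs.
  triggers = {
    checksum = filemd5("${path.module}/config/custom/${each.value}")
  }

  connection {
    type        = "ssh"
    user        = "ubuntu"
    host        = aws_instance.client.public_ip
    private_key = file(var.ssh_private_key_path)
  }

  provisioner "file" {
    source      = "${path.module}/config/custom/${each.value}"
    destination = "/tmp/custom/${each.value}"
  }
}
```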