tests that use this local docker registry (docker and podman tests)
occasionally flake, I think due to the timeout being reached,
despite passing after a restart.
> jobs3.go:658: tg 'create-files' task 'create-auth-file' event: Task received by client
> jobs3.go:658: tg 'create-files' task 'create-auth-file' event: Building Task Directory
> jobs3.go:658: tg 'create-files' task 'create-auth-file' event: Task started by client
> jobs3.go:658: tg 'create-files' task 'create-auth-file' event: Exit Code: 1
> jobs3.go:658: tg 'create-files' task 'create-auth-file' event: Task restarting in 16.212149445s
> jobs3.go:658: tg 'create-files' task 'create-auth-file' event: Task started by client
> jobs3.go:658: tg 'create-files' task 'create-auth-file' event: Exit Code: 0
setting the delay lower will (hopefully) keep the retries within the job timeout.
I'm not sure why the `pledge` task apparently flakes like this;
I could find no useful info in the logs.
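for illustration, a shorter restart delay might look roughly like this via the
Nomad Go API (a sketch only; the real e2e jobs are HCL jobspecs and the delay
value here is made up):
```
package jobs

import (
	"time"

	"github.com/hashicorp/nomad/api"
)

// lowRestartDelay is a sketch, not the actual change: lower the restart
// delay so a failing setup task retries well within the overall test
// timeout. Field names are from github.com/hashicorp/nomad/api; the
// values are illustrative.
func lowRestartDelay() *api.RestartPolicy {
	delay := 5 * time.Second // vs. the ~15s default behind "restarting in 16.2s"
	attempts := 3
	mode := "fail"
	return &api.RestartPolicy{
		Delay:    &delay,
		Attempts: &attempts,
		Mode:     &mode,
	}
}
```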
don't require "bridge" network mode when using connect{}
we document this as "at your own risk" because CNI configuration
is so flexible that we can't guarantee a user's network will work,
but Nomad's "bridge" CNI config may be used as a reference.
look, I know I misspelled "locater" in the code comment, but it's easier to acknowledge that here in this commit message than it is to push a new commit with all the test/approval machinery in github.
The call to IMDSv1 has been failing since we switched to v2, which
meant the UI e2e script attempted to use the service IP address
for its tests. The service IP address is the Nomad client's
private address, which is not routable from the e2e test runner,
so the test fails.
This change updates the IP discovery to use IMDSv2, which means the
address is correctly populated and routable. The change also performs
this discovery via a job action within the proxy job, which
exercises that feature and uses it in a way for which it was
designed.
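For reference, IMDSv2 requires fetching a session token first and then passing
it on the metadata request. A minimal Go sketch of that flow (the job action in
the proxy job likely shells out to curl instead, and the exact metadata path
used is an assumption here):
```
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// publicIPv4 fetches the instance's public IPv4 address via IMDSv2:
// a PUT for a session token, then a GET with that token attached.
func publicIPv4() (string, error) {
	client := &http.Client{Timeout: 2 * time.Second}

	// Step 1: request a short-lived session token.
	req, err := http.NewRequest(http.MethodPut,
		"http://169.254.169.254/latest/api/token", nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("X-aws-ec2-metadata-token-ttl-seconds", "60")
	resp, err := client.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	token, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}

	// Step 2: query the metadata endpoint with the token header.
	req, err = http.NewRequest(http.MethodGet,
		"http://169.254.169.254/latest/meta-data/public-ipv4", nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("X-aws-ec2-metadata-token", string(token))
	resp, err = client.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	addr, err := io.ReadAll(resp.Body)
	return string(addr), err
}

func main() {
	addr, err := publicIPv4()
	if err != nil {
		panic(err)
	}
	fmt.Println(addr)
}
```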
Ensuring the keyring is ready before starting the Nomad client in
the client intro e2e test speeds up execution. This is because the
client does not have to wait to retry failed registrations due to
the keyring not being ready.
The new client intro test mimics the Consul and Vault compat tests
and uses local agents to perform the required setup. This method
gives us the flexibility, moving forward, to test with the
enforcement mode set to strict.
The test suite will now be triggered from the test-e2e CI run
and can also be called by a make target.
When we refactored the E2E provisioning to allow it to be reused by the upgrade
testing, we didn't thread the `instance_type` variable from the main module down
into the `provision-infra` module. This prevents you from setting a custom
instance size when deploying the E2E cluster manually.
* e2e: update standalone envoy binary version
fix for:
> === FAIL: e2e/exec2 TestExec2/testCountdash (21.25s)
> exec2_test.go:71:
> ...
> [warning][config] [./source/extensions/config_subscription/grpc/grpc_stream.h:155] DeltaAggregatedResources gRPC config stream to local_agent closed: 3, Envoy 1.29.4 is too old and is not supported by Consul
there's also this warning, but it doesn't seem so fatal:
> [warning][main] [source/server/server.cc:910] There is no configured limit to the number of allowed active downstream connections. Configure a limit in `envoy.resource_monitors.downstream_connections` resource monitor.
picked the latest supported version from the latest consul (1.21.4):
```
$ curl -s localhost:8500/v1/agent/self | jq .xDS.SupportedProxies
{
"envoy": [
"1.34.1",
"1.33.2",
"1.32.5",
"1.31.8"
]
}
```
* e2e: exec2: remove extraneous bits
* reschedule: no reschedule for batch jobs
* unveil: nomad paths get auto-unveiled with unveil_defaults
https://github.com/hashicorp/nomad-driver-exec2/blob/v0.1.0/plugin/driver.go#L514-L522
* fix panic from nil ReschedulePolicy
commit 279775082c (pr #26279)
intended to return an error for sysbatch jobs with a reschedule block,
but by skipping the population of the `ReschedulePolicy`'s pointer fields,
it caused a nil pointer panic before the job could be rejected
with the intended error.
in particular, in `command/agent/job_endpoint.go`, `func ApiTgToStructsTG`,
```
if taskGroup.ReschedulePolicy != nil {
	tg.ReschedulePolicy = &structs.ReschedulePolicy{
		Attempts: *taskGroup.ReschedulePolicy.Attempts,
		Interval: *taskGroup.ReschedulePolicy.Interval,
```
`*taskGroup.ReschedulePolicy.Interval` was a nil pointer.
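a minimal sketch of the kind of guard that avoids the panic, assuming the same
pointer-field shape as the excerpt above (not necessarily the exact fix that
landed):
```
// sketch only: dereference ReschedulePolicy pointer fields only when they
// are set, so a partially-populated block reaches validation (where the
// sysbatch job is rejected) instead of panicking here.
if taskGroup.ReschedulePolicy != nil {
	tg.ReschedulePolicy = &structs.ReschedulePolicy{}
	if taskGroup.ReschedulePolicy.Attempts != nil {
		tg.ReschedulePolicy.Attempts = *taskGroup.ReschedulePolicy.Attempts
	}
	if taskGroup.ReschedulePolicy.Interval != nil {
		tg.ReschedulePolicy.Interval = *taskGroup.ReschedulePolicy.Interval
	}
	// ...remaining fields guarded the same way
}
```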
* fix e2e test jobs
* Update UI, code comment, and README links to docs, tutorials
* fix typo in ephemeral disks learn more link url
* feedback on typo
Co-authored-by: Tim Gross <tgross@hashicorp.com>
Update our E2E compatibility test for Consul and Vault to only include back to
the oldest-supported LTS versions of Consul and Vault. This will still leave
a few unsupported non-LTS versions in the matrix between the two oldest LTS releases, but
this is a small number of tests and fixing it would mean hard-coding the LTS
support matrix in our tests.
In our E2E environment we've seen some flakiness with the Consul-related
tests. As it turns out, the Consul agents are getting restarted every 90s or so
because they're timing out their systemd notification.
> consul.service: start operation timed out. Terminating.
This appears to be a known issue in Consul and we'll try to help hunt down the
cause if they want, but in the meantime let's remove it
from our systemd unit files for the Consul agents.
Ref: https://github.com/hashicorp/consul/issues/16844#issuecomment-1913282248
* E2E: fix scaling test assertion for extra Windows host
The scaling test assumes that all nodes will receive the system job. But the job
can only run on Linux hosts, so the count will be wrong if we're running a
Windows host as part of the cluster. Filter the expected count by the OS.
While we're touching this test, let's also migrate it off the legacy framework.
* address comments from code review
Some time ago the Windows host we were using as a Nomad client agent test target
started failing to allow ssh connections. The underlying problem appears to be
with sysprep but I wasn't able to debug the exact cause as it's not an area I
have a lot of expertise in.
Swap out the deprecated Windows 2016 host for a Windows 2022 host. This will use
a base image provided by Amazon and then we'll use a userdata script to
bootstrap ssh and some target directories for Terraform to upload files to. The
more modern Windows will let us drop some of the extra powershell scripts we were
using as well.
Fixes: https://hashicorp.atlassian.net/browse/NMD-151
Fixes: https://github.com/hashicorp/nomad-e2e/issues/125
TestSingleAffinities never expected a node with affinity score set to 0 in
the set of returned nodes. However, since #25800, this can happen. What the
test should be checking for instead is that the node with the highest normalized
score has the right affinity.
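Roughly, the assertion becomes: find the node with the highest normalized score
and check its affinity, rather than requiring every returned node to have a
non-zero affinity score. A sketch with hypothetical types (the scheduler's real
test structs differ):
```
// hypothetical types for illustration only; the real test uses the
// scheduler's ranked-node structs.
type scoredNode struct {
	Name            string
	NormalizedScore float64
	AffinityScore   float64
}

// bestNode returns the node with the highest normalized score; the test
// would then assert that this node carries the expected affinity score.
func bestNode(nodes []scoredNode) scoredNode {
	best := nodes[0]
	for _, n := range nodes[1:] {
		if n.NormalizedScore > best.NormalizedScore {
			best = n
		}
	}
	return best
}
```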
The DNS configuration for our E2E cluster uses dnsmasq to pass all DNS through
Consul. But there's a circular reference in systemd configurations that
sometimes causes the Docker service to fail. This causes test flakes during
upgrade testing because we count the number of nodes and expect `system` jobs
using Docker to run on all nodes.
We no longer have any tests that require Consul DNS, so remove the complication
of dnsmasq to break the reference cycle. Also, while I was looking at this I
noticed we still had setup that would configure the ECS remote task driver
plugin, which is archived. Remove this as well.
Ref: https://hashicorp.atlassian.net/browse/NMD-162
The fresh deployment of the Redis job took around 20s, which is
also the default context timeout on the e2e util that monitors and
waits for a deployment to complete.
The tight timing meant the test often timed out but sometimes
would complete successfully. Increasing the timeout for this
deployment will remove the flakiness.
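The shape of the fix, as a sketch (waitForDeployment is a hypothetical
stand-in name; the e2e util's real signature differs):
```
// sketch: give the deployment watcher more headroom than the ~20s the
// fresh Redis deployment needs; waitForDeployment is a hypothetical
// stand-in for the e2e util.
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()

if err := waitForDeployment(ctx, deploymentID); err != nil {
	t.Fatalf("deployment did not complete: %v", err)
}
```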
As of April 1, Docker Hub rate limits tightened. With only 10 pulls/hr/IP, we're
likely to encounter test failures. Switch all Docker images pulled in this
repository's tests to use the HashiCorp-managed registry mirror.
Note that most of our tests in `drivers/docker` don't pull from the remote
registry but load a local image, while others will need to pull from the remote
and fetch different images depending on OS/arch. Refactor the definition of test
task configuration to make it clear which is which, and de-factor some false
sharing of setup functions.
Updates the E2E tests to use that registry by configuring the Docker
daemon. This required changing out a few container images that we don't have in
the registry, but these new images are all smaller. There are a couple of tests
that still use explicitly-tagged `docker.io` images or other third-party
registries, which have been left in place.
Ref: https://hashicorp.atlassian.net/browse/NET-12233
update E2E images to those in the registry mirror
fix windows and docklog test build
fix stopsignal test
mop-up
more mop-up
The `ui.enabled` parameter is a non-pointer bool, which means the
merge function is unable to differentiate between false and not
set. When e2e introduced the `ui.show_cli_hints` configuration
parameter, the way we merge meant the UI became disabled.
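The underlying issue, as a small sketch (field names simplified; the real
agent config structs are larger):
```
// sketch: with a plain bool, a merge cannot tell "explicitly false" from
// "not set", so merging a partial override (one that only sets
// ShowCLIHints) silently forces Enabled back to false. A pointer keeps
// the distinction.
type uiConfig struct {
	Enabled      bool  // false and "unset" look identical during a merge
	ShowCLIHints *bool // nil means "not set", so the merge can skip it
}

func merge(base, other uiConfig) uiConfig {
	result := base
	// there is no way to ask "was Enabled set?", so an override that
	// never mentions it still overwrites the base value with false
	result.Enabled = other.Enabled
	if other.ShowCLIHints != nil {
		result.ShowCLIHints = other.ShowCLIHints
	}
	return result
}
```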
I couldn't find any reason the exec2 HTTP jobs were not being run
with a generated cleanup function, so I added one.
The deletion of the DHV ACL policy does not seem like it would
have any negative impact.