This change removes any blocking calls to `destroyAllocRunner`, which
caused Nomad clients to block when running allocations in certain
scenarios. In addition, this change consolidates client GC by removing
the `MakeRoomFor` method, which is redundant with `keepUsageBelowThreshold`.
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
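As a rough illustration of the pattern described above (a sketch only: `AllocGarbageCollector`, `allocRunner`, and the field names are stand-ins, not Nomad's actual client GC code), destruction is only triggered, never waited on, and the threshold check becomes the single capacity-driven GC path:

```go
package client

import "sync"

type allocRunner interface {
	Destroy() // stops the alloc and releases its resources
}

type AllocGarbageCollector struct {
	mu      sync.Mutex
	runners map[string]allocRunner
	usageMB func() int // reports current allocation-dir disk usage
	limitMB int
}

// destroyAllocRunner only triggers destruction; it never waits for the
// runner to finish, so callers can no longer block here.
func (gc *AllocGarbageCollector) destroyAllocRunner(id string) {
	gc.mu.Lock()
	ar, ok := gc.runners[id]
	delete(gc.runners, id)
	gc.mu.Unlock()
	if ok {
		go ar.Destroy()
	}
}

// keepUsageBelowThreshold is the single capacity-driven GC path once
// MakeRoomFor is gone: evict runners until usage drops below the limit.
func (gc *AllocGarbageCollector) keepUsageBelowThreshold() {
	gc.mu.Lock()
	defer gc.mu.Unlock()
	for id, ar := range gc.runners {
		if gc.usageMB() <= gc.limitMB {
			return
		}
		delete(gc.runners, id)
		go ar.Destroy()
	}
}
```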
Docker driver's TestDockerDriver_OOMKilled should run on cgroups v2 now, since
we're running the Docker v27 client library and our runners run Docker v26, which
contains the containerd fix containerd/containerd#6323.
* Custom watchQuery equivalent on the storage index
* Tests for live updates to the storage page
* Making pagination on the storage page unconditional, and fixing a bug where I was looking at filtered but not paginated dynamic host volumes (DHV)
* Test for pagination with live-updates
We can't delete a CSI plugin when it has volumes in use. When periodic GC runs,
we send the RPC unconditionally and then let the state store return an error. We
accidentally fixed the excess logging this causes (#17025) in #20555, but we can
also check if the plugin is empty first before sending the RPC to save a
request and subsequent Raft write.
Fixes: https://github.com/hashicorp/nomad/issues/17025
Ref: https://github.com/hashicorp/nomad/pull/20555
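A minimal sketch of the guard this describes, with hypothetical types, an assumed `IsEmpty` helper, and an illustrative RPC name rather than Nomad's real API:

```go
package gc

// CSIPlugin is a stand-in for the plugin summary the GC job iterates over.
type CSIPlugin struct {
	ID                       string
	Controllers, Nodes, Vols int
}

// IsEmpty reports whether nothing is still registered against the plugin.
func (p *CSIPlugin) IsEmpty() bool {
	return p.Controllers == 0 && p.Nodes == 0 && p.Vols == 0
}

type rpcFn func(method string, args, reply any) error

// csiPluginGC only sends a delete for plugins that are already empty; an
// in-use plugin would be rejected by the state store anyway, so skipping it
// here saves the RPC round trip and the Raft write behind it.
func csiPluginGC(plugins []*CSIPlugin, rpc rpcFn) error {
	for _, plugin := range plugins {
		if !plugin.IsEmpty() {
			continue
		}
		if err := rpc("CSIPlugin.Delete", plugin.ID, nil); err != nil {
			return err
		}
	}
	return nil
}
```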
When configuring Consul to use Nomad workload identities, you create the Consul
auth method in the default namespace. If you're using Consul Enterprise
namespaces, there are two available approaches: one is to create the tokens in
the default namespace and give them policies that define cross-namespace access,
and the other is to use binding rules that map the login to a particular
namespace. The latter is what we show in our docs, but this was missing a note
that any roles (and their associated policies) targeted by `-bind-type role`
need to exist in the Consul namespace we're logging into.
Also, in Nomad CE, the `consul.namespace` field is always treated as having been set to
`"default"`. That is, we ignore it and don't return an error even though it's a
Nomad ENT-only feature. Clarify this in the documentation for the field the same
way we've done for the `cluster` field.
Co-authored-by: Aimee Ukasick <aimee.ukasick@hashicorp.com>
The agent retry joiner implementation had separate parameters to control
its execution for agents running in server mode and in client mode. Since
the agent sets up an individual joiner for each mode anyway, this per-mode
configuration on the joiner object was unnecessary overhead.
This change removes the excess configuration options from the
joiner, reducing code complexity slightly and hopefully making
future modifications in this area easier to make.
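A rough sketch of the simplified shape, with entirely hypothetical names (the real joiner carries more state and logging): each joiner gets one small config, and nothing in it is server- or client-specific.

```go
package agent

import "time"

// retryJoinConfig is the single set of knobs a joiner needs; the old
// server-vs-client variants of these fields are gone.
type retryJoinConfig struct {
	addrs       []string
	maxAttempts int // 0 means retry until success
	interval    time.Duration
}

type joinFn func(addrs []string) (int, error)

// retryJoin retries the join until it succeeds or attempts are exhausted.
// The agent builds one of these per mode it runs in, so the joiner itself
// no longer needs to know whether it is joining as a server or a client.
func retryJoin(cfg retryJoinConfig, join joinFn) error {
	var err error
	for attempt := 0; cfg.maxAttempts == 0 || attempt < cfg.maxAttempts; attempt++ {
		if _, err = join(cfg.addrs); err == nil {
			return nil
		}
		time.Sleep(cfg.interval)
	}
	return err
}
```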
* chore(deps): bump google.golang.org/grpc from 1.69.4 to 1.71.0
* chore(deps): bump github.com/hashicorp/go-memdb from 1.3.4 to 1.3.5
* chore(deps): bump github.com/prometheus/common from 0.62.0 to 0.63.0
* chore(deps): bump github.com/hashicorp/go-kms-wrapping/wrappers/gcpckms/v2
* First batch of x-icon to hds::icons
* Bunch more icons and a note for jobrow
* Fixes for tests that depended on specific action names
* Icon-bumped-down specified to solo-icons in table cells
* Class-basing the icon bump and deferring icon svg load in env
* Exec window sidebar icons were looking a little off
* An option to select, and a column to view, Sentinel policy scope
* Flake potential: Seed(1) had a couple jobs with the same ModifyIndex
* More de-flaking
If multiple dynamic host volumes are created in quick succession, it's possible
for the server to attempt placement on a host where another volume has been
placed but not yet fingerprinted as ready. Once a `VolumeCreate` RPC returns a
response, we've already invoked the plugin successfully and written to state, so
we're just waiting on the fingerprint for scheduling purposes. Change the
placement selection so that we skip a node if it has a volume, regardless of
whether that volume is ready yet.
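A minimal sketch of the new selection rule, using stand-in structs rather than Nomad's real node and volume types: a node is skipped as soon as a volume with that name exists on it, whether or not the fingerprint has marked it ready.

```go
package placement

// hostVolume is a stand-in for a fingerprinted dynamic host volume.
type hostVolume struct {
	Name  string
	Ready bool // no longer consulted during placement
}

// node is a stand-in for a candidate client node.
type node struct {
	ID      string
	Volumes map[string]hostVolume
}

// selectNodeForVolume skips any node that already has a volume with the same
// name, even if that volume has not yet been fingerprinted as ready. Before
// this change only ready volumes were considered, so two volumes created in
// quick succession could be placed on the same host.
func selectNodeForVolume(nodes []*node, volName string) *node {
	for _, n := range nodes {
		if _, exists := n.Volumes[volName]; exists {
			continue // placed but possibly not yet ready: still skip it
		}
		return n
	}
	return nil
}
```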
Prerelease builds are in a different Artifactory repository than release
builds. Make this a variable option so we can test prerelease builds in the
nightly/weekly runs.
If the auth-url API is getting DoS'd, we do not expect it to still function; we
only protect the rest of the system. Users will need to use a break-glass ACL
token if they need Nomad UI/API access during such a denial of service.
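One way to picture the trade-off is a plain token-bucket limit in front of the auth-URL handler; the handler name, package, and limit values below are hypothetical rather than Nomad's actual code, and `golang.org/x/time/rate` supplies the limiter.

```go
package auth

import (
	"net/http"

	"golang.org/x/time/rate"
)

// authURLLimiter caps how fast auth URLs are issued. Under a denial of
// service the endpoint itself stops answering (429s), which is the accepted
// trade-off: the limit keeps the rest of the agent responsive rather than
// keeping SSO logins working during an attack.
var authURLLimiter = rate.NewLimiter(rate.Limit(10), 20) // ~10 req/s, burst 20

func getAuthURL(w http.ResponseWriter, r *http.Request) {
	if !authURLLimiter.Allow() {
		http.Error(w, "auth URL rate limit exceeded", http.StatusTooManyRequests)
		return
	}
	// ... build and return the OIDC provider auth URL as usual ...
}
```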
* docs: fix missing api version on acl auth method path
* docs: fix missing api version on acl binding rules path
* docs: fix missing api version on acl policies path
* docs: fix missing api version on acl roles path
* docs: fix missing api version on acl tokens path