In #12458 we added an in-memory connection buffer so that template runners that
want access to the Nomad API for Service Registration and Variables can
communicate with Nomad without having to create a real HTTP client. The size of
this buffer (1 MiB) was taken directly from its usage in Vault, and each
connection creates two such buffers (send and receive). Because each template
runner has its own connection, this adds up to significant memory usage when
there are large numbers of allocations.
The largest Nomad Variable payload is 64 KiB, plus a small amount of
metadata. Service Registration responses are much smaller, and we don't include
check results in them (as Consul does), so their size is relatively bounded. We
should be able to safely reduce the size of the buffer by a factor of 10 or more
without forcing the template runner to make multiple read calls over the buffer.
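A minimal sketch of the sizing, assuming the connection is built on grpc's `bufconn` package (the wrapper name and the new constant are illustrative; only the size is the actual change):
```go
package inmem // illustrative package name

import (
	"context"
	"net"

	"google.golang.org/grpc/test/bufconn"
)

// bufSize is the size of each in-memory pipe buffer. Every connection
// allocates two (send and receive), and every template runner holds its
// own connection, so this constant dominates per-allocation overhead.
// Anything comfortably above the 64 KiB Variable ceiling works.
const bufSize = 128 * 1024 // illustrative reduction from 1024 * 1024

// BufConnWrapper exposes a net.Listener plus a matching DialContext so
// the template runner's HTTP client can reach the Nomad API in-process.
type BufConnWrapper struct {
	listener *bufconn.Listener
}

func New() (*BufConnWrapper, net.Listener) {
	ln := bufconn.Listen(bufSize)
	return &BufConnWrapper{listener: ln}, ln
}

// DialContext matches http.Transport.DialContext; network and address
// are ignored because all traffic flows over the in-memory listener.
func (b *BufConnWrapper) DialContext(ctx context.Context, _, _ string) (net.Conn, error) {
	return b.listener.DialContext(ctx)
}
```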
Fixes: #18508
In Nomad Enterprise, namespace rules can control access to Vault and Consul
clusters. Add job endpoint mutating and validating hooks for both Vault and
Consul so that ENT can enforce these namespace rules. This changeset includes
the stub behaviors for CE.
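A sketch of what the CE stub side could look like, assuming the job endpoint's mutator/validator hook shape (the interface signatures and names here are assumptions, not the verbatim code):
```go
package nomad

import (
	"fmt"

	"github.com/hashicorp/nomad/nomad/structs"
)

// jobVaultHook sketches the CE stub; Nomad Enterprise swaps in bodies
// that enforce the namespace rules.
type jobVaultHook struct{}

func (jobVaultHook) Name() string { return "vault" }

// Mutate is a no-op in CE: jobs always use the default Vault cluster.
func (jobVaultHook) Mutate(job *structs.Job) (*structs.Job, []error, error) {
	return job, nil, nil
}

// Validate rejects jobs that name a non-default cluster, since CE
// cannot serve them; ENT replaces this with the namespace-rule check.
func (jobVaultHook) Validate(job *structs.Job) ([]error, error) {
	for _, tg := range job.TaskGroups {
		for _, task := range tg.Tasks {
			if task.Vault != nil && task.Vault.Cluster != "default" {
				return nil, fmt.Errorf("non-default Vault cluster %q requires Nomad Enterprise", task.Vault.Cluster)
			}
		}
	}
	return nil, nil
}
```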
Ref: https://github.com/hashicorp/nomad-enterprise/pull/1234
This feature helps operators remove a failed or left node from the Serf layer
immediately, without waiting 24 hours for the node to be reaped.
* Update the CLI with a `-prune` flag
* Update the `/v1/agent/force-leave` API with a `prune` query string parameter (see the sketch below)
* Update the CLI and API docs
* Add unit tests
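A hedged example of hitting the updated endpoint, assuming the existing `node` query parameter and the agent's usual PUT handling for mutations (node name is made up):
```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// Build /v1/agent/force-leave?node=<name>&prune=true against a local
	// agent. prune=true removes the member from Serf immediately instead
	// of leaving it to be reaped after 24 hours.
	q := url.Values{}
	q.Set("node", "client-ab2e23dc") // hypothetical node name
	q.Set("prune", "true")

	req, err := http.NewRequest(http.MethodPut,
		"http://127.0.0.1:4646/v1/agent/force-leave?"+q.Encode(), nil)
	if err != nil {
		panic(err)
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```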
The first half of volume expansion: this allows a user to update the requested
capacity (`capacity_min` and `capacity_max`) in a volume specification file and
re-issue either the Register or Create volume commands (or API calls). The
requested capacity will now be "reconciled" with the current real capacity of
the volume, issuing a ControllerExpandVolume RPC call to a running controller
plugin if the requested `capacity_min` is higher than the volume's current
capacity in state.
CSI spec: https://github.com/container-storage-interface/spec/blob/c918b7f/spec.md#controllerexpandvolume
Note: this does not yet cover NodeExpandVolume.
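A sketch of the re-issue flow via the Go `api` package, assuming its `CSIVolume` fields mirror the spec-file attribute names (volume ID and sizes are made up):
```go
package main

import (
	"fmt"

	"github.com/hashicorp/nomad/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		panic(err)
	}

	vols := client.CSIVolumes()
	vol, _, err := vols.Info("mysql-data", nil) // hypothetical volume ID
	if err != nil {
		panic(err)
	}

	// Raise the requested capacity. On Register, the server reconciles
	// these values against the volume's current capacity and issues
	// ControllerExpandVolume only if capacity_min exceeds it.
	vol.RequestedCapacityMin = 20 << 30 // 20 GiB
	vol.RequestedCapacityMax = 40 << 30 // 40 GiB

	if _, err := vols.Register(vol, nil); err != nil {
		panic(err)
	}
	fmt.Println("register submitted; expansion reconciled if needed")
}
```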
Add support for identity token TTL in agent configuration fields such as
Consul `service_identity` and `template_identity`.
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>
* client: refactor cpuset partitioning
This PR updates the way the Nomad client manages the split between tasks
that use `resources.cpus` vs. `resources.cores`.
Previously, each task was explicitly assigned the CPU cores it was able
to run on. Every time a task was started or destroyed, all other tasks'
cpusets would need to be updated. This was inefficient and would crush
the Linux kernel when a client tried to run ~400 or so tasks.
Now, we make use of the cgroup hierarchy and cpuset inheritance to manage
cpusets efficiently (see the sketch after this list).
* cr: tweaks for feedback
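A rough illustration of the idea on cgroups v2 (slice names and layout are illustrative, not the client's actual code):
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

const root = "/sys/fs/cgroup/nomad.slice" // illustrative parent slice

// reserveCores moves cores out of the shared pool and into a per-task
// partition. Only two files change; sibling tasks inherit the shared
// pool automatically through the cgroup hierarchy, so starting or
// stopping a task no longer touches every other task's cpuset.
func reserveCores(taskSlice, cores, remainingShared string) error {
	if err := os.WriteFile(
		filepath.Join(root, "share.slice", "cpuset.cpus"),
		[]byte(remainingShared), 0o644); err != nil {
		return err
	}
	return os.WriteFile(
		filepath.Join(root, taskSlice, "cpuset.cpus"),
		[]byte(cores), 0o644)
}

func main() {
	// Give cores 2-3 to one task; everything else stays shared.
	if err := reserveCores("reserve.slice/task-abc123.scope", "2-3", "0-1,4-15"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```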
This changeset is the documentation for supporting multiple Vault and Consul
clusters in Nomad Enterprise. It includes documentation changes for the agent
configuration (#18255), the namespace specification (#18425), and the vault,
consul, and service blocks of the jobspec (#18409).
This change deduplicates the ACL policy list generated from ACL
roles referenced by an ACL token on the client.
Previously the list could contain duplicates, which would cause
erroneous permission-denied errors when calling client-related
RPC/HTTP API endpoints. This is because the client calls the ACL
get-policies endpoint, which subsequently ensures the caller has
permission to view the ACL policies. This check is performed by
comparing the requested list args with the policies referenced by
the caller's ACL token; when a duplicate is present the check
fails, as it must ensure the slices match exactly.
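A minimal sketch of the deduplication, assuming the policy names are gathered into a plain string slice (helper name is illustrative):
```go
package main

import (
	"fmt"
	"sort"
)

// dedupPolicyNames returns a sorted, duplicate-free copy of the policy
// names gathered from the token and its ACL roles, so the exact-match
// comparison against the requested list args cannot be tripped up by a
// policy attached both directly and via a role.
func dedupPolicyNames(names []string) []string {
	seen := make(map[string]struct{}, len(names))
	out := make([]string, 0, len(names))
	for _, n := range names {
		if _, ok := seen[n]; ok {
			continue
		}
		seen[n] = struct{}{}
		out = append(out, n)
	}
	sort.Strings(out)
	return out
}

func main() {
	fmt.Println(dedupPolicyNames([]string{"read-default", "submit-jobs", "read-default"}))
	// Output: [read-default submit-jobs]
}
```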
In Nomad Enterprise, when multiple Vault/Consul clusters are configured, cluster admins can control jobs' access to clusters via namespace ACLs, similar to what we've done for node pools. This changeset updates the ACL configuration structs but doesn't wire them up.
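A sketch of the struct shape, assuming it mirrors the node pool governance pattern (field names are assumptions):
```go
package structs

// NamespaceVaultConfiguration sketches the per-namespace access rules
// for Vault clusters; Consul gets an equivalent struct. In CE the
// fields are parsed but not enforced.
type NamespaceVaultConfiguration struct {
	// Default is the cluster used when a job doesn't name one.
	Default string

	// Allowed and Denied restrict which clusters jobs in the
	// namespace may reference; Allowed takes precedence when set.
	Allowed []string
	Denied  []string
}
```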
In the original design of Consul fingerprinting, we would poll every period so
that we could change the client's fingerprint if Consul became unavailable. As
of 1.4.0 (ref #14673) we no longer update the fingerprint in order to avoid
excessive `Node.Register` RPCs when someone's Consul cluster is flapping.
This allows us to safely back off Consul fingerprinting on success, just as we
have with Vault.
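A sketch of the backoff shape, with illustrative constants:
```go
package main

import (
	"fmt"
	"time"
)

const (
	base = 15 * time.Second
	max  = 2 * time.Minute
)

// nextPeriod backs off the poll interval while Consul stays healthy and
// snaps back to the base period on failure. Because the fingerprint
// itself never changes on failure (avoiding Node.Register RPCs for a
// flapping cluster), a healthy Consul can simply be polled less often.
func nextPeriod(current time.Duration, healthy bool) time.Duration {
	if !healthy {
		return base
	}
	next := current * 2
	if next > max {
		next = max
	}
	return next
}

func main() {
	p := base
	for i := 0; i < 5; i++ {
		fmt.Println(p) // 15s, 30s, 1m, 2m, 2m
		p = nextPeriod(p, true)
	}
}
```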
fingerprint: add support for fingerprinting multiple Consul clusters
Add the fingerprinting we'll need to support multiple Consul clusters in
upcoming Nomad Enterprise features. The fingerprinter will create a map of
Consul clients keyed by cluster name. In Nomad CE, all but the default cluster
will be ignored, so there will be no visible behavior change.
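A sketch of the per-cluster client map, using the official Consul `api` package (the CE filtering shown is illustrative):
```go
package main

import (
	"fmt"

	consulapi "github.com/hashicorp/consul/api"
)

// newConsulClients builds one Consul client per configured cluster,
// keyed by cluster name. In CE everything but "default" is dropped, so
// behavior is unchanged; ENT consumes the full map.
func newConsulClients(configs map[string]*consulapi.Config, enterprise bool) (map[string]*consulapi.Client, error) {
	clients := make(map[string]*consulapi.Client, len(configs))
	for name, cfg := range configs {
		if !enterprise && name != "default" {
			continue // CE: ignore non-default clusters
		}
		c, err := consulapi.NewClient(cfg)
		if err != nil {
			return nil, fmt.Errorf("cluster %q: %w", name, err)
		}
		clients[name] = c
	}
	return clients, nil
}

func main() {
	clients, err := newConsulClients(map[string]*consulapi.Config{
		"default": consulapi.DefaultConfig(),
	}, false)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(clients), "Consul cluster(s) to fingerprint")
}
```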
Ref: https://github.com/hashicorp/team-nomad/issues/404
When restoring an allocation, `WIDMgr` was not being set in the alloc
runner config, resulting in a nil panic when the task runner attempted
to start.
Since we often require the same configuration values when creating
or restoring an allocation, this commit moves the logic into a shared
function to ensure that `addAlloc` and `restoreState` configure alloc
runners with the same values.
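A toy sketch of the pattern (all names hypothetical, not the client's actual types):
```go
package main

import "fmt"

// Toy model of the fix: both the create and restore paths build their
// runner config through one shared constructor, so a dependency added
// later (like WIDMgr) cannot be forgotten on one path.

type WIDMgr struct{} // stand-in for the workload identity manager

type allocRunnerConfig struct {
	allocID string
	widmgr  *WIDMgr
}

type client struct{ widmgr *WIDMgr }

// newAllocRunnerConfig is the single place that wires client-wide
// dependencies into an alloc runner config.
func (c *client) newAllocRunnerConfig(allocID string) *allocRunnerConfig {
	return &allocRunnerConfig{allocID: allocID, widmgr: c.widmgr}
}

func (c *client) addAlloc(allocID string) *allocRunnerConfig { return c.newAllocRunnerConfig(allocID) }

func (c *client) restoreState(allocID string) *allocRunnerConfig {
	return c.newAllocRunnerConfig(allocID)
}

func main() {
	c := &client{widmgr: &WIDMgr{}}
	// Both true: the restore path can no longer yield a nil WIDMgr.
	fmt.Println(c.addAlloc("a1").widmgr != nil, c.restoreState("a1").widmgr != nil)
}
```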