This adds artifact inspection after download to detect any issues
with the content fetched. Currently this means checking for any
symlinks within the artifact that resolve outside the task or
allocation directories. On platforms where landlock is available
(some Linux systems), this inspection is not performed, since the
sandbox already constrains filesystem access.
The inspection can be disabled with the DisableArtifactInspection
option. Keeping this as a dedicated option means the
DisableFilesystemIsolation option can be enabled while artifacts are
still inspected after download.
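A sketch of how this might look in the client configuration; the HCL key name is an assumption based on the usual snake_case mapping of the option name, not confirmed by this change:
```hcl
client {
  artifact {
    # Assumed HCL spelling of the DisableArtifactInspection option.
    # Skips the post-download symlink inspection; filesystem isolation
    # (where available) is controlled separately.
    disable_artifact_inspection = true
  }
}
```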
The docs for the `template` block accurately describe the default
function denylist in the body, but the defaults listed for the
parameters are missing values. The equivalent docs in the `client`
configuration are also missing `executeTemplate`.
* Move commands from docs to their own root-level directory
* temporarily use modified dev-portal branch with nomad ia changes
* explicitly clone nomad ia exp branch
* retrigger build, fixed dev-portal broken build
* architecture, concepts and get started individual pages
* fix get started section destinations
* reference section
* update repo comment in website-build.sh to show branch
* docs nav file update capitalization
* update capitalization to force deploy
* remove nomad-vs-kubernetes dir; move content to what is nomad pg
* job section
* Nomad operations category, deploy section
* operations category, govern section
* operations - manage
* operations/scale; concepts scheduling fix
* networking
* monitor
* secure section
* remove auth-methods folder and move up pages to sso; linkcheck
* Fix install2deploy redirects
* fix architecture redirects
* Job section: Add missing section index pages
* Add section index pages so breadcrumbs build correctly
* concepts/index fix front matter indentation
* move task driver plugin config to new deploy section
* Finish adding full URL to tutorials links in nav
* change SSO to Authentication in nav and file system
* Docs NomadIA: Move tutorials into NomadIA branch (#26132)
* Move governance and policy from tutorials to docs
* Move tutorials content to job-declare section
* run jobs section
* stateful workloads
* advanced job scheduling
* deploy section
* manage section
* monitor section
* secure/acl and secure/authorization
* fix example that contains an unseal key in real format
* remove images from sso-vault
* secure/traffic
* secure/workload-identities
* vault-acl change unseal key and root token in command output sample
* remove lines from sample output
* fix front matter
* move nomad pack tutorials to tools
* search/replace /nomad/tutorials links
* update acl overview with content from deleted architecture/acl
* fix spelling mistake
* linkcheck - fix broken links
* fix link to Nomad variables tutorial
* fix link to Prometheus tutorial
* move who uses Nomad to use cases page; move spec/config shortcuts; add dividers
* Move Consul out of Integrations; move namespaces to govern
* move integrations/vault to secure/vault; delete integrations
* move ref arch to docs; rename Deploy Nomad back to Install Nomad
* address feedback
* linkcheck fixes
* Fixed raw_exec redirect
* add info from /nomad/tutorials/manage-jobs/jobs
* update page content with newer tutorial
* link updates for architecture sub-folders
* Add redirects for removed section index pages. Fix links.
* fix broken links from linkcheck
* Revert to use dev-portal main branch instead of nomadIA branch
* build workaround: add intro-nav-data.json with single entry
* fix content-check error
* add intro directory to get around Vercel build error
* workaround for empty directory
* remove mdx from /intro/ to fix content-check and git snafu
* Add intro index.mdx so Vercel build should work
---------
Co-authored-by: Tu Nguyen <im2nguyen@gmail.com>
When a node is garbage collected, any dynamic host volumes on the node are
orphaned in the state store. We generally don't want to collect these volumes
automatically and risk data loss, so #25902 added a `-force` CLI flag to remove
them. But for clusters running on ephemeral cloud instances (ex. AWS EC2 in an
autoscaling group), having to delete host volumes by hand adds excessive
friction. Add a knob to the client configuration to remove host volumes from
the state store on node GC.
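For illustration, a sketch of what the client configuration knob could look like; the key name below is an assumption, not confirmed by this change:
```hcl
client {
  # Assumed option name: when enabled, any dynamic host volumes on this
  # node are removed from the state store when the node itself is
  # garbage collected, instead of being left orphaned.
  gc_volumes_on_node_gc = true
}
```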
Ref: https://github.com/hashicorp/nomad/pull/25902
Ref: https://github.com/hashicorp/nomad/issues/25762
Ref: https://hashicorp.atlassian.net/browse/NMD-705
* Set MaxAllocations in client config
* Add NodeAllocationTracker struct to Node struct
* Evaluate MaxAllocations in AllocsFit function
* Set up CLI config parsing
* Integrate maxAllocs into AllocatedResources view
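A minimal sketch of the resulting client configuration; the exact HCL key is an assumption inferred from the commit titles:
```hcl
client {
  # Assumed HCL key for MaxAllocations: caps how many allocations may
  # be placed on this node. AllocsFit rejects further placements once
  # the node reaches this count.
  node_max_allocs = 50
}
```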
Co-authored-by: Tim Gross <tgross@hashicorp.com>
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
Before the fixes in #20165, the wait feature was disabled by
default. After these changes, it's always enabled, which - at
least on some platforms - leads to a significant increase in
load (5-7x).
This patch allows disabling the wait feature in the client
stanza of the configuration file by setting min and max to 0:
```hcl
wait {
  min = "0"
  max = "0"
}
```
Per-template wait blocks in the task description still work as
expected.
Expand env.denylist to be more comprehensive: it now covers more token,
token file, and license variables.
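As a hypothetical illustration, the list can also be overridden per client via the `options` map; the values shown are examples, not the actual default list:
```hcl
client {
  options = {
    # Example override only; the shipped default denylist is defined in
    # the client code and is broader than what is shown here.
    "env.denylist" = "VAULT_TOKEN,CONSUL_HTTP_TOKEN,NOMAD_LICENSE"
  }
}
```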
---------
Co-authored-by: Daniel Bennett <dbennett@hashicorp.com>
When the `client.servers` block is parsed, we split the port from the
address. This does not correctly handle IPv6 addresses when they are in URL
format (wrapped in brackets), which we require to disambiguate the port and
address.
Fix the parser to correctly split out the port and handle a missing port value
for IPv6. Update the documentation to make the URL format requirement clear.
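For example, IPv6 server addresses must be wrapped in brackets so the port can be disambiguated from the address (addresses below are illustrative):
```hcl
client {
  servers = [
    "[2001:db8::10]:4647", # URL format with an explicit port
    "[2001:db8::11]",      # port omitted; the default applies
  ]
}
```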
Fixes: https://github.com/hashicorp/nomad/issues/20310
This PR changes the example of the client config option "fingerprint.denylist"
to include all the cloud environment fingerprinters. Each one contains a
2 second HTTP timeout to a metadata endpoint that does not exist if you are not
in that particular cloud. When run in serial on startup, this results in
an 8 second wait where nothing useful is happening.
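A sketch of the updated example, assuming the standard cloud environment fingerprinter names:
```hcl
client {
  options = {
    # Skip the cloud metadata probes when not running in these clouds;
    # each probe otherwise waits up to 2 seconds, serially, on startup.
    "fingerprint.denylist" = "env_aws,env_gce,env_azure,env_digitalocean"
  }
}
```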
Closes #16727
When registering a node with a new node pool in a non-authoritative
region we can't create the node pool because this new pool will not be
replicated to other regions.
This commit modifies the node registration logic to only allow automatic
node pool creation in the authoritative region.
In non-authoritative regions, the client is registered, but the node
pool is not created. The client is kept in the `initializing` status until
its node pool is created in the authoritative region and replicated to
the client's region.
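For context, a client opts into a node pool in its configuration; if the pool is new and the client's region is non-authoritative, the node now waits in `initializing` until the pool replicates:
```hcl
client {
  # If "edge" does not exist yet, it is auto-created only in the
  # authoritative region; elsewhere the node stays initializing until
  # the pool has replicated to this region.
  node_pool = "edge"
}
```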
Adds a new configuration to clients to optionally allow them to drain their
workloads on shutdown. The client sends the `Node.UpdateDrain` RPC targeting
itself and then monitors the drain state as seen by the server until the drain
is complete or the deadline expires. If it loses connection with the server, it
will monitor local client status instead to ensure allocations are stopped
before exiting.
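A sketch of what this could look like in the client configuration; the block and field names here are assumptions for illustration:
```hcl
client {
  drain_on_shutdown {
    # Drain this node's workloads before the agent exits, waiting up to
    # the deadline for allocations to stop.
    deadline           = "5m"
    force              = false # don't cut the drain short at the deadline
    ignore_system_jobs = false # drain system jobs as well
  }
}
```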
The configuration docs for `client.template.vault_retry`, `consul_retry`, and
`nomad_retry` incorrectly document the default number of attempts to be
unlimited (0). When we added these config blocks, we defaulted the fields to
`nil` for backwards compatibility, which causes them to fall back to the default
consul-template configuration values.
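For reference, a hedged example of pinning one of these blocks explicitly; the values shown are what I understand the consul-template defaults (which the `nil` fields fall back to) to be:
```hcl
client {
  template {
    vault_retry {
      # consul-template's defaults (assumed): 12 attempts, not unlimited.
      attempts    = 12
      backoff     = "250ms"
      max_backoff = "1m"
    }
  }
}
```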
* artifact: protect against unbounded artifact decompression
Starting with 1.5.0, set default values for artifact decompression limits.
* `artifact.decompression_size_limit` (default "100GB") - the maximum amount of
data that will be decompressed before triggering an error and cancelling
the operation.
* `artifact.decompression_file_count_limit` (default 4096) - the maximum number
of files that will be decompressed before triggering an error and
cancelling the operation.
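In client configuration form, the new defaults look roughly like this:
```hcl
client {
  artifact {
    # Defaults as of 1.5.0; exceeding either limit aborts the
    # decompression and fails the download.
    decompression_size_limit       = "100GB"
    decompression_file_count_limit = 4096
  }
}
```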
* artifact: assert limits cannot be nil in validation
* Add `bridge_network_hairpin_mode` client config setting (see the sketch after this list)
* Add node attribute: `nomad.bridge.hairpin_mode`
* Changed format string to use `%q` to escape user provided data
* Add test to validate template JSON for developer safety
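A minimal sketch of enabling the new setting:
```hcl
client {
  # Allow workloads on the bridge network to reach themselves through
  # their own published ports (hairpin NAT); surfaced on the node as
  # the nomad.bridge.hairpin_mode attribute.
  bridge_network_hairpin_mode = true
}
```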
Co-authored-by: Daniel Bennett <dbennett@hashicorp.com>
* artifact: enable inheriting environment variables from client
This PR adds client configuration for specifying environment variables that
should be inherited by the artifact sandbox process from the Nomad Client agent.
Most users should not need to set these values, but the configuration is
provided to ensure backwards compatibility. Configuration of go-getter should
ideally be done through the artifact block in a jobspec task.
e.g.
```hcl
client {
  artifact {
    set_environment_variables = "TMPDIR,GIT_SSH_OPTS"
  }
}
```
Closes #15498
* website: update set_environment_variables text to mention PATH
This PR adds the client config option for turning off filesystem isolation,
applicable on Linux systems where filesystem isolation is possible and
enabled by default.
```hcl
client {
  artifact {
    disable_filesystem_isolation = <bool:false>
  }
}
```
Closes #15496
This reverts PR #12416 and commit 6668ce022a.
While the driver options are well and truly deprecated, this documentation also
covers features like `fingerprint.denylist` that are not available any other
way. Let's revert this until #12420 is ready.
Fixes #13505
This fixes#13505 by treating reserved_ports like we treat a lot of jobspec settings: merging settings from more global stanzas (client.reserved.reserved_ports) "down" into more specific stanzas (client.host_networks[].reserved_ports).
As discussed in #13505 there are other options, and since it's totally broken right now we have some flexibility:
* Treat overlapping reserved_ports on addresses as invalid and refuse to start agents. However, I'm not sure there's a cohesive model we want to publish right now since so much 0.9-0.12 compat code still exists! We would have to explain to folks that if their -network-interface and host_network addresses overlapped, they could only specify reserved_ports in one place or the other?! It gets ugly.
* Use the global client.reserved.reserved_ports value as the default and treat host_network[].reserved_ports as overrides. My first suggestion in the issue, but @groggemans made me realize the addresses on the agent's interface (as configured by -network-interface) may overlap with host_networks, so you'd need to remove the global reserved_ports from addresses shared with a shared network?! This seemed really confusing and subtle for users to me.
So I think "merging down" creates the most expressive yet understandable approach. I've played around with it a bit, and it doesn't seem too surprising. The only frustrating part is how difficult it is to observe the available addresses and ports on a node! However that's a job for another PR.
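A sketch of the merge-down behavior under this approach (interface names and ports are illustrative):
```hcl
client {
  reserved {
    # Global reservation: applies to every address on the node.
    reserved_ports = "22,80"
  }

  host_network "public" {
    interface = "eth1"
    # Merged "down" with the global block above, so this network
    # effectively reserves 22, 80, and 8080.
    reserved_ports = "8080"
  }
}
```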