Docs: SEO job spec section (#25612)

* action page

* change all page_title fields

* update title

* constraint through migrate pages

* update page title and heading to use sentence case

* fix front matter description

* Apply suggestions from code review

Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>

---------

Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
This commit is contained in:
Aimee Ukasick, 2025-05-19 09:02:07 -05:00, committed by GitHub
parent 77c8acb422, commit 986f3c727a
55 changed files with 538 additions and 574 deletions

@@ -1,10 +1,11 @@
---
layout: docs
page_title: action Block - Job Specification
description: The "action" block allows for executable commands that job authors can predefine for operators to subsequently run against their job.
page_title: action block in the job specification
description: |-
Configure custom commands in the `action` block of the Nomad job specification. Operators may execute these commands on a running allocation as a controlled way to interact with tasks. Review examples that use actions with arguments and templates.
---
# `action` Block
# `action` block in the job specification
<Placement groups={['job', 'group', 'task', 'action']} />
@@ -84,13 +85,13 @@ the use of templating with embedded environment variables and a multi-line scrip
```hcl
action "fetch-latest-nomad-changelog" {
command = "/bin/sh"
args = ["-c",
args = ["-c",
<<EOT
curl -s https://raw.githubusercontent.com/hashicorp/nomad/main/CHANGELOG.md |
curl -s https://raw.githubusercontent.com/hashicorp/nomad/main/CHANGELOG.md |
awk 'BEGIN{
# Setting record and field separators
RS="## "; FS="\n";
RS="## "; FS="\n";
section=""; count=0
}
{

@@ -1,13 +1,11 @@
---
layout: docs
page_title: affinity Block - Job Specification
page_title: affinity block in the job specification
description: |-
The "affinity" block allows restricting the set of eligible nodes.
Affinities may filter on attributes or metadata. Additionally affinities may
be specified at the job, group, or task levels for ultimate flexibility.
Express placement preference for a set of nodes in the `affinity` block of the Nomad job specification. Configure attribute, comparison operator, comparison value, and a scoring weight. Review kernel data, operating system, and metadata examples.
---
# `affinity` Block
# `affinity` block in the job specification
<Placement
groups={[
@@ -66,7 +64,7 @@ Updating the `affinity` block is non-destructive. Updating a job specification
with only non-destructive updates will not migrate or replace existing
allocations.
## `affinity` Parameters
## Parameters
- `attribute` `(string: "")` - Specifies the name or reference of the attribute
to examine for the affinity. This can be any of the [Nomad interpolated
@@ -101,7 +99,7 @@ allocations.
anti-affinities, causing nodes that match them to be scored lower. Weights can be used
when there is more than one affinity to express relative preference across them.
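The parameters above can be sketched in a single block; the attribute, pattern, and weight below are illustrative, not from the original:

```hcl
affinity {
  # Prefer nodes whose hostname matches this pattern (illustrative values)
  attribute = "${attr.unique.hostname}"
  operator  = "regexp"
  value     = "web-.*"
  weight    = 50
}
```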
### `operator` Values
### `operator` values
This section details the specific values for the "operator" parameter in the
Nomad job specification for affinities. The operator is always specified as a
@@ -172,12 +170,12 @@ affinity {
}
```
## `affinity` Examples
## Examples
The following examples only show the `affinity` blocks. Remember that the
`affinity` block is only valid in the placements listed above.
### Kernel Data
### Kernel data
This example adds a preference for running on nodes which have a kernel version
higher than "3.19".
@@ -191,7 +189,7 @@ affinity {
}
```
### Operating Systems
### Operating systems
This example adds a preference for running on nodes that are running Ubuntu
14.04.
@@ -210,7 +208,7 @@ affinity {
}
```
### Meta Data
### Metadata
The following example adds a preference for running on nodes with specific rack metadata.
@@ -232,7 +230,7 @@ affinity {
}
```
### Cloud Metadata
### Cloud metadata
When possible, Nomad populates node attributes from the cloud environment. These
values are accessible as filters in affinities. This example adds a preference to run this
@@ -246,21 +244,11 @@ affinity {
}
```
[job]: /nomad/docs/job-specification/job 'Nomad job Job Specification'
[group]: /nomad/docs/job-specification/group 'Nomad group Job Specification'
[client-meta]: /nomad/docs/configuration/client#meta 'Nomad meta Job Specification'
[task]: /nomad/docs/job-specification/task 'Nomad task Job Specification'
[interpolation]: /nomad/docs/runtime/interpolation 'Nomad interpolation'
[node-variables]: /nomad/docs/runtime/interpolation#node-variables- 'Nomad interpolation-Node variables'
[constraint]: /nomad/docs/job-specification/constraint 'Nomad Constraint job Specification'
### Placement Details
## Placement details
Operators can run `nomad alloc status -verbose` to get more detailed information on various
factors, including affinities that affect the final placement.
#### Example Placement Metadata
The following is a snippet from the CLI output of `nomad alloc status -verbose <alloc-id>` showing scoring metadata.
```text
@@ -272,7 +260,7 @@ f2aa8b59-96b8-202f-2258-d98c93e360ab 0.225 -0.6 0
7d6c2e9e-b080-5995-8b9d-ef1695458b52 0.0806 0 0 0 0.0806
```
The placement score is affected by the following factors.
The placement score is affected by the following factors:
- `bin-packing` - Scores nodes according to how well they fit requirements. Optimizes for using a minimal number of nodes.
- `job-anti-affinity` - A penalty added for additional instances of the same job on a node, used to avoid having too many instances
@@ -280,3 +268,12 @@ The placement score is affected by the following factors.
- `node-reschedule-penalty` - Used when the job is being rescheduled. Nomad adds a penalty to avoid placing the job on a node where
it has failed to run before.
- `node-affinity` - Used when the criteria specified in the `affinity` block matches the node.
[job]: /nomad/docs/job-specification/job
[group]: /nomad/docs/job-specification/group
[client-meta]: /nomad/docs/configuration/client#meta
[task]: /nomad/docs/job-specification/task
[interpolation]: /nomad/docs/runtime/interpolation
[node-variables]: /nomad/docs/runtime/interpolation#node-variables
[constraint]: /nomad/docs/job-specification/constraint

@@ -1,13 +1,11 @@
---
layout: docs
page_title: artifact Block - Job Specification
page_title: artifact block in the job specification
description: |-
The "artifact" block instructs Nomad to fetch and unpack a remote resource,
such as a file, tarball, or binary, and permits downloading artifacts from a
variety of locations using a URL as the input source.
Configure fetching a remote resource in the `artifact` block of the Nomad job specification. Set the artifact destination, the source URL, HTTP headers, fetch options, mode, and whether Nomad should recursively chown the downloaded artifact. Review examples for downloading a file, fetching from a Git repository, unarchiving, verifying checksums, and downloading from an AWS S3-compatible bucket.
---
# `artifact` Block
# `artifact` block in the job specification
<Placement groups={['job', 'group', 'task', 'artifact']} />
@@ -36,7 +34,7 @@ Nomad supports downloading `http`, `https`, `git`, `hg` and `S3` artifacts. If
these artifacts are archived (`zip`, `tgz`, `bz2`, `xz`), they are
automatically unarchived before starting the task.
## `artifact` Parameters
## Parameters
- `destination` `(string: "local/")` - Specifies the directory path to
download the artifact, relative to the root of the [task's working
@@ -73,7 +71,7 @@ variables set for the Nomad client. Manage inheritance of environment variables
with the [`artifact.set_environment_variables`][client_artifact] client
configuration.
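As a minimal sketch of the `destination` parameter described above (the source URL is a placeholder):

```hcl
artifact {
  source      = "https://example.com/app.tar.gz"  # placeholder URL
  destination = "local/app"                       # relative to the task's working directory
}
```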
## Operation Limits
## Operation limits
The client [`artifact`][client_artifact] configuration can set limits to
specific artifact operations to prevent excessive data download or operation
@@ -82,12 +80,12 @@ time.
If a task's `artifact` retrieval exceeds one of those limits, the task will be
interrupted and fail to start. Refer to the task events for more information.
## `artifact` Examples
## Examples
The following examples only show the `artifact` blocks. Remember that the
`artifact` block is only valid in the placements listed above.
### Download File
### Download file
This example downloads the artifact from the provided URL and places it in
`local/file.txt`. The `local/` path is relative to the [task's working
@@ -171,7 +169,7 @@ artifact {
}
```
### Download and Unarchive
### Download and unarchive
This example downloads and unarchives the result in `local/file`. Because the
source URL ends with an archive extension, Nomad will automatically decompress it:
@@ -193,7 +191,7 @@ artifact {
}
```
### Download and Verify Checksums
### Download and verify checksums
This example downloads an artifact and verifies the resulting artifact's
checksum before proceeding. If the checksum is invalid, an error will be
@@ -209,7 +207,7 @@ artifact {
}
```
### Download from an S3-compatible Bucket
### Download from an S3-compatible bucket
These examples download artifacts from Amazon S3. There are several different
types of [S3 bucket addressing][s3-bucket-addr] and [S3 region-specific

@@ -1,10 +1,11 @@
---
layout: docs
page_title: change_script Block - Job Specification
description: The "change_script" block configures a script to be run on template re-render.
page_title: change_script block in the job specification
description: |-
Configure template change scripts in the `change_script` block of the Nomad job specification. Nomad executes these scripts when a template changes. Configure the command, arguments, and timeout. Enable fail on error. Review an example of how to embed a script in the data block of a template block.
---
# `change_script` Block
# `change_script` block in the job specification
<Placement groups={['job', 'group', 'task', 'template', 'change_script']} />
@@ -50,7 +51,7 @@ job "docs" {
script execution fails. If `false`, script failure will be logged but the task
will continue uninterrupted.
### Template as a script example
### Example
Below is an example of how a script can be embedded in a `data` block of another
`template` block:
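The embedded-script example is cut off in this hunk; a minimal sketch, assuming a hypothetical `myapp-reload` command and Consul KV path:

```hcl
template {
  data        = "{{ key \"myapp/config\" }}"   # hypothetical Consul KV path
  destination = "local/myapp.conf"
  change_mode = "script"

  change_script {
    command       = "/bin/sh"
    args          = ["-c", "myapp-reload"]     # hypothetical reload command
    timeout       = "20s"
    fail_on_error = false
  }
}
```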

@@ -1,11 +1,11 @@
---
layout: docs
page_title: check Block - Job Specification
page_title: check block in the job specification
description: |-
The "check" block declares service check definition for a service registered into the Nomad or Consul service provider.
Configure Nomad to register a service health check in the `check` block of the Nomad job specification. Set the command to use, health check interval, and network values. Configure success and failure behavior. Review HTTP, gRPC, and scripted health checks.
---
# `check` Block
# `check` block in the job specification
<Placement
groups={[
@@ -49,7 +49,7 @@ job "example" {
}
```
### `check` Parameters
## Parameters
- `address_mode` `(string: "host")` - Same as `address_mode` on `service`.
Unlike services, checks do not have an `auto` address mode as there's no way
@@ -205,7 +205,7 @@ job "example" {
`check_restart` can however specify `ignore_warnings = true` with `on_update = "require_healthy"`. If `on_update` is set to `ignore`, `check_restart` must
be omitted entirely.
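A minimal sketch of the `on_update`/`check_restart` interplay described above (the check name, path, and port label are illustrative):

```hcl
check {
  name      = "app-health"      # illustrative
  type      = "http"
  port      = "http"            # illustrative labeled port
  path      = "/health"
  interval  = "10s"
  timeout   = "2s"
  on_update = "require_healthy"

  check_restart {
    limit           = 3
    grace           = "30s"
    ignore_warnings = true      # allowed together with on_update = "require_healthy"
  }
}
```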
#### `header` Block
### `header` block
HTTP checks may include a `header` block to set HTTP headers. The `header`
block parameters have lists of strings as values. Multiple values will cause
@@ -227,7 +227,9 @@ service {
}
```
### HTTP Health Check
## Examples
### HTTP health check
This example shows a service with an HTTP health check. This will query the
service on the IP and port registered with Nomad at `/_healthz` every 5 seconds,
@@ -249,7 +251,7 @@ service {
}
```
### Multiple Health Checks
### Multiple health checks
This example shows a service with multiple health checks defined. All health
checks must be passing in order for the service to register as healthy.
@@ -288,7 +290,7 @@ service {
}
```
### gRPC Health Check
### gRPC health check
gRPC health checks use the same host and port behavior as `http` and `tcp`
checks, but gRPC checks also have an optional gRPC service to health check. Not
@@ -311,7 +313,7 @@ service {
In this example, Consul would health check the `example.Service` service on the
`rpc` port defined in the task's [network resources][network] block.
### Script Checks with Shells
### Script checks with shells
Note that script checks run inside the task. If your task is a Docker container,
the script will run inside the Docker container. If your task is running in a
@@ -350,7 +352,7 @@ check {
}
```
### Healthiness versus Readiness Checks
### Healthiness versus readiness checks
Multiple checks for a service can be composed to create healthiness and readiness
checks by configuring [`on_update`][on_update] for the check.
@@ -385,7 +387,7 @@ For checks registered into the Nomad service provider, the status information wi
indicate `Mode = readiness` for readiness checks and `Mode = healthiness` for health
checks.
### Check status on CLI
## Check status on CLI
For checks registered into the Nomad service provider, the status information of
checks can be viewed per-allocation. The `alloc status` command now includes

View File

@@ -1,12 +1,11 @@
---
layout: docs
page_title: check_restart Block - Job Specification
page_title: check_restart block in the job specification
description: |-
The "check_restart" block instructs Nomad when to restart tasks with
unhealthy service checks.
Configure Nomad to restart tasks with unhealthy service checks in the `check_restart` block of the Nomad job specification. Set the grace period and health check failure limit. Enable ignore warnings. Review an example and how Nomad behaves based on that example.
---
# `check_restart` Block
# `check_restart` block in the job specification
<Placement
groups={[
@@ -81,7 +80,7 @@ job "mysql" {
treats a `warning` status like `passing` and will not trigger a restart. Only
available in the Consul service provider.
## Example Behavior
## Example behavior
Using the example `mysql` above would have the following behavior:

@@ -1,10 +1,11 @@
---
layout: docs
page_title: connect Block - Job Specification
description: The "connect" block allows specifying options for Consul Connect integration
page_title: connect block in the job specification
description: |-
Configure the `connect` block of the Nomad job specification for Consul service mesh native application integration. Configure sidecar service, sidecar task, and gateway.
---
# `connect` Block
# `connect` block in the job specification
<Placement groups={['job', 'group', 'service', 'connect']} />
@@ -44,13 +45,13 @@ job "countdash" {
}
```
## `connect` Parameters
## Parameters
Used to configure a connect service. Only one of `native`, `sidecar_service`,
or `gateway` may be realized per `connect` block.
- `native` - `(bool: false)` - This is used to configure the service as supporting
[Connect Native](/consul/docs/connect/native) applications.
[Consul service mesh native](/consul/docs/connect/native) applications.
- `sidecar_service` - <code>([sidecar_service][]: nil)</code> - This is used to
configure the sidecar service created by Nomad for Consul Connect.
@@ -61,12 +62,12 @@ or `gateway` may be realized per `connect` block.
- `gateway` - <code>([gateway][]:nil)</code> - This is used to configure the
gateway service created by Nomad for Consul Connect.
## `connect` Examples
## Examples
### Using Connect Native
### Using Consul service mesh native
The following example is a minimal service block for a
[Consul Connect Native](/consul/docs/connect/native)
[Consul service mesh native](/consul/docs/connect/native)
application implemented by a task named `generate`.
```hcl
@@ -81,7 +82,7 @@ service {
}
```
### Using Sidecar Service
### Using sidecar service
The following example is a minimal connect block with defaults and is
sufficient to start an Envoy proxy sidecar for allowing incoming connections
@@ -189,7 +190,7 @@ job "countdash" {
}
```
### Using a Gateway
### Using a gateway
The following is an example service block for creating and using a connect ingress
gateway. It includes a gateway service definition and an api service fronted by
@@ -237,7 +238,7 @@ job "ingress-demo" {
}
```
### Limitations
[gateway]: /nomad/docs/job-specification/gateway

@@ -1,13 +1,11 @@
---
layout: docs
page_title: constraint Block - Job Specification
page_title: constraint block in the job specification
description: |-
The "constraint" block allows restricting the set of eligible nodes.
Constraints may filter on attributes or metadata. Additionally constraints may
be specified at the job, group, or task levels for ultimate flexibility.
Restrict the set of eligible nodes in the `constraint` block of the Nomad job specification. Configure attribute, comparison operator, and comparison value. Review kernel data, distinct property, operating system, and metadata examples.
---
# `constraint` Block
# `constraint` block in the job specification
<Placement
groups={[
@@ -29,6 +27,7 @@ Additionally constraints may be specified at the [job][job], [group][group], or
For example, specifying different [`${attr.unique.hostname}`][node-variables]
constraints at the task level will cause a job to be unplaceable because all
[tasks within a group are scheduled on the same client node][group].
</Warning>
```hcl
@@ -66,7 +65,7 @@ Updating the `constraint` block is non-destructive. Updating a job specification
with only non-destructive updates will not migrate or replace existing
allocations.
## `constraint` Parameters
## Parameters
- `attribute` `(string: "")` - Specifies the name or reference of the attribute
to examine for the constraint. This can be any of the [Nomad interpolated
@@ -102,7 +101,7 @@ allocations.
or any [Nomad interpolated
values](/nomad/docs/runtime/interpolation#interpreted_node_vars).
### `operator` Values
### `operator` values
This section details the specific values for the "operator" parameter in the
Nomad job specification for constraints. The operator is always specified as a
@@ -239,12 +238,12 @@ constraint {
- `"is_not_set"` - Specifies that a given attribute must not be present.
## `constraint` Examples
## Examples
The following examples only show the `constraint` blocks. Remember that the
`constraint` block is only valid in the placements listed above.
### Kernel Data
### Kernel data
This example restricts the task to running on nodes which have a kernel version
higher than "3.19".
@@ -257,7 +256,7 @@ constraint {
}
```
### Distinct Property
### Distinct property
A potential use case of the `distinct_property` constraint is to spread a
service with `count > 1` across racks to minimize correlated failure. Nodes can
@@ -273,7 +272,7 @@ constraint {
}
```
### Operating Systems
### Operating systems
This example restricts the task to running on nodes that are running Ubuntu
14.04.
@@ -290,7 +289,7 @@ constraint {
}
```
### Cloud Metadata
### Cloud metadata
When possible, Nomad populates node attributes from the cloud environment. These
values are accessible as filters in constraints. This example constrains this
@@ -303,7 +302,7 @@ constraint {
}
```
### User-Specified Metadata
### User-specified metadata
This example restricts the task to running on nodes where the binaries for
redis, cypress, and nginx are all cached locally. This particular example is

@@ -1,13 +1,12 @@
---
layout: docs
page_title: consul Block - Job Specification
page_title: consul block in the job specification
description: |-
The "consul" block allows the group or task to specify that it requires a token
from a HashiCorp Consul server. Nomad will automatically retrieve a Consul token
for the group or task.
Configure Consul options in the `consul` block of the Nomad job specification to register them in the Consul catalog. Specify that the group or task requires a Consul token. Configure the Consul cluster, namespace, and partition. Review template, group services, namespace, and admin partition examples.
---
# `consul` Block
# `consul` block in the job specification
<Placement
groups={[
@@ -35,7 +34,7 @@ job "docs" {
}
```
## Workload Identity
## Workload identity
Starting in Nomad 1.7, Nomad clients will use a task or service's [Workload
Identity][] to authenticate to Consul and obtain a Consul token specific to the
@@ -71,7 +70,7 @@ workflow for Consul and Vault before upgrading to Nomad 1.10. Refer to
</Warning>
### Access to Token
### Access to token
The Nomad client will make the Consul token available to the task by writing it
to the secret directory at `secrets/consul_token` and by injecting a
@@ -83,7 +82,7 @@ is set.
The [`template`][template] block can use the Consul token as well.
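A minimal sketch of a task consuming the token file path named above; the driver, script, and `run-app` binary are illustrative:

```hcl
task "web" {
  driver = "exec"

  consul {}  # request a Consul token via the task's Workload Identity

  config {
    command = "/bin/sh"
    # Read the token from the secrets path described above (illustrative usage)
    args    = ["-c", "CONSUL_HTTP_TOKEN=$(cat secrets/consul_token) exec ./run-app"]
  }
}
```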
### `consul` Parameters
### Parameters
- `cluster` `(string: "default")` <EnterpriseAlert inline/> - Specifies the
Consul cluster to use. The Nomad client will retrieve a Consul token from the
@@ -107,13 +106,13 @@ The [`template`][template] block can use the Consul token as well.
with a legacy Consul token. Refer to [Nomad Workload Identities] for more
information on integrating Nomad's workload identity with Consul authentication.
## `consul` Examples
## Examples
The following examples only show the `consul` blocks or other relevant
blocks. Remember that the `consul` block is only valid in the placements listed
above.
### Consul Token for Templates
### Consul token for templates
This example tells the Nomad client to obtain a Consul token using the task's
Workload Identity. The token is available to the task via the canonical
@@ -157,7 +156,7 @@ job "example" {
}
```
### Consul Token for Group Services
### Consul token for group services
This example tells the Nomad client to obtain a Consul Service Identity token
using the service's Workload Identity. The Consul token will be used to register
@@ -189,13 +188,11 @@ job "example" {
}
```
### Consul Namespace
### Consul namespace <EnterpriseAlert product="nomad" inline />
This example shows specifying a particular Consul namespace or Consul cluster
for different tasks within a group.
<EnterpriseAlert />
The `template` block in the `web` task will use the default Consul cluster, and
will obtain a token that allows it access to the `engineering/frontend` Consul
namespace. The `template` block in the `app` task will use the Consul cluster
@@ -235,15 +232,13 @@ job "docs" {
}
```
### Consul Admin Partition
### Consul admin partition <EnterpriseAlert product="nomad" inline />
This example demonstrates how to configure Consul admin partitions for different
tasks within a group. The Consul client agent must separately specify the admin
partition in the agent configuration. Refer to the Consul documentation's
[agent configuration reference][] for more information.
<EnterpriseAlert />
In the following example, the `web` and `app` tasks use the default Consul cluster
and obtain a token that allows access to the `prod` admin partition in Consul. The
Consul configuration occurs at the `group` level because tasks are placed together

@@ -1,12 +1,11 @@
---
layout: docs
page_title: csi_plugin Block - Job Specification
page_title: csi_plugin block in the job specification
description: >-
The "csi_plugin" block allows the task to specify it provides a
Container Storage Interface plugin to the cluster.
Specify that the task provides a Container Storage Interface plugin to the cluster in the `csi_plugin` block of the Nomad job specification. Configure plugin ID, type, mount directory, stage publish base directory, and health timeout. Review recommendations for deploying CSI plugins.
---
# `csi_plugin` Block
# `csi_plugin` block in the job specification
<Placement groups={['job', 'group', 'task', 'csi_plugin']} />
@@ -25,7 +24,7 @@ csi_plugin {
}
```
## `csi_plugin` Parameters
## Parameters
- `id` `(string: <required>)` - This is the ID for the plugin. Some
plugins will require both controller and node plugin types (see
@@ -63,7 +62,7 @@ host. With the Docker task driver, you can use the `privileged = true`
configuration, but no other default task drivers currently have this
option.
## Recommendations for Deploying CSI Plugins
## Recommendations for deploying CSI plugins
CSI plugins run as Nomad tasks, but after mounting the volume are not in the
data path for the volume. Tasks that mount volumes write and read directly to
@@ -88,7 +87,7 @@ volume. This has implications on how to deploy CSI plugins:
your plugin task as recommended by the plugin's documentation to use
the [`topology_request`] field in your volume specification.
## `csi_plugin` Examples
## Examples
```hcl
job "plugin-efs" {

@@ -1,12 +1,11 @@
---
layout: docs
page_title: device Block - Job Specification
page_title: device block in the job specification
description: |-
The "device" block is used to require a certain device be made available
to the task.
Configure task access to devices in the `device` block of the Nomad job specification.
---
# `device` Block
# `device` block in the job specification
<Placement groups={['job', 'group', 'task', 'resources', 'device']} />

@@ -1,13 +1,11 @@
---
layout: docs
page_title: disconnect Block - Job Specification
page_title: disconnect block in the job specification
description: |-
The "disconnect" block describes the behavior of both the Nomad server and
client in case of a network partition, as well as how to reconcile the workloads
in case of a reconnection.
Describe system behavior for disconnected allocations in the `disconnect` block of the Nomad job specification. Specify whether Nomad should replace a disconnected allocation. Configure node reconciliation behavior. Set a duration for Nomad to attempt allocation reconnection and a duration after which a disconnected Nomad client stops its allocations. Review examples for stop after and lost after configuration.
---
# `disconnect` Block
# `disconnect` block in the job specification
<Placement groups={['job', 'group', 'disconnect']} />
@@ -46,7 +44,7 @@ job "docs" {
~> Note that you cannot use both `lost_after` and `stop_on_client_after` in the
same `disconnect` block.
## `disconnect` Parameters
## Parameters
- `lost_after` `(string: "")` - Specifies a duration during which a Nomad client
will attempt to reconnect allocations after it fails to heartbeat in the
@@ -100,7 +98,7 @@ same `disconnect` block.
continuously for the longest time.
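A minimal sketch combining the parameters above (values are illustrative; remember that `lost_after` and `stop_on_client_after` cannot appear in the same block):

```hcl
disconnect {
  lost_after = "6h"           # attempt to reconnect allocations for up to 6 hours
  replace    = false          # do not schedule a replacement while disconnected
  reconcile  = "best_score"   # on reconnect, keep the allocation with the best score
}
```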
## `disconnect` Examples
## Examples
The following examples only show the `disconnect` blocks. Remember that the
`disconnect` block is only valid in the placements listed previously.

@@ -1,11 +1,11 @@
---
layout: docs
page_title: dispatch_payload Block - Job Specification
page_title: dispatch_payload block in the job specification
description: |-
The "dispatch_payload" block allows a task to access dispatch payloads.
Configure a parameterized job task payload in the `dispatch_payload` block of the Nomad job specification.
---
# `dispatch_payload` Block
# `dispatch_payload` block in the job specification
<Placement groups={['job', 'group', 'task', 'dispatch_payload']} />
@@ -27,13 +27,13 @@ job "docs" {
}
```
## `dispatch_payload` Parameters
## Parameters
- `file` `(string: "")` - Specifies the file name to write the content of
dispatch payload to. The file is written relative to the [task's local
directory][localdir].
## `dispatch_payload` Example
## Example
This example shows a `dispatch_payload` block in a parameterized job that writes
the payload to a `config.json` file.
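The described example can be sketched as follows; the job, group, and task names and the driver config are illustrative:

```hcl
job "docs" {
  parameterized {
    payload = "required"
  }

  group "example" {
    task "server" {
      driver = "exec"

      config {
        command = "/bin/cat"
        args    = ["local/config.json"]  # illustrative use of the payload
      }

      dispatch_payload {
        file = "config.json"  # payload is written to local/config.json
      }
    }
  }
}
```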

@@ -1,12 +1,11 @@
---
layout: docs
page_title: env Block - Job Specification
page_title: env block in the job specification
description: |-
The "env" block configures a list of environment variables to populate the
task's environment before starting.
Configure environment variables in the `env` block of the Nomad job specification.
---
# `env` Block
# `env` block in the job specification
<Placement groups={['job', 'group', 'task', 'env']} />
@@ -25,14 +24,14 @@ job "docs" {
}
```
## `env` Parameters
## Parameters
The "parameters" for the `env` block can be any key-value pair. The keys and values
are both of type `string`, but they can be specified as other types. They will
automatically be converted to strings. Invalid characters such as dashes (`-`)
will be converted to underscores.
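A minimal sketch of the conversion rules above (keys and values are illustrative):

```hcl
env {
  LOG_LEVEL = "debug"
  PORT      = 8080      # non-string values are converted to strings, here "8080"
  "my-key"  = "value"   # the dash is converted to an underscore: my_key
}
```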
## `env` Examples
## Examples
The following examples only show the `env` blocks. Remember that the
`env` block is only valid in the placements listed above.
@@ -75,7 +74,7 @@ env = {
}
```
### Dynamic Environment Variables
### Dynamic environment variables
Nomad also supports populating dynamic environment variables from data stored in
HashiCorp Consul and Vault. To use this feature please see the documentation on

@@ -1,13 +1,11 @@
---
layout: docs
page_title: ephemeral_disk Block - Job Specification
page_title: ephemeral_disk block in the job specification
description: |-
The "ephemeral_disk" block describes the ephemeral disk requirements of the
group. Ephemeral disks can be marked as sticky and support live data
migrations.
Describe a group's ephemeral disk requirements in the `ephemeral_disk` block of the Nomad job specification. Turn on data migration and allocation placement. Specify disk size.
---
# `ephemeral_disk` Block
# `ephemeral_disk` block in the job specification
<Placement groups={['job', 'group', 'ephemeral_disk']} />
@@ -33,7 +31,7 @@ be found in the [filesystem internals][].
Each job's logs will be written to ephemeral disk space. See the [logs
documentation][] for more information.
## `ephemeral_disk` Parameters
## Parameters
- `migrate` `(bool: false)` - This specifies that the Nomad client should make a
best-effort attempt to migrate the data from the previous allocation, even if

@@ -1,12 +1,12 @@
---
layout: docs
page_title: expose Block - Job Specification
page_title: expose block in the job specification
description: |-
The "expose" block allows specifying options for configuring Envoy expose
paths used in Consul Connect integration
---
# `expose` Block
# `expose` block in the job specification
<Placement
groups={[
@@ -131,12 +131,12 @@ job "expose-example" {
}
```
## `expose` Parameters
## Parameters
- `path` <code>([Path]: nil)</code> - A list of [Envoy Expose Path Configurations][expose_path]
- `path` <code>([Path]: nil)</code> - A list of Envoy expose path configurations
to expose through Envoy.
### `path` Parameters
### `path` parameters
- `path` `(string: required)` - The HTTP or gRPC path to expose. The path must be prefixed
with a slash.
@@ -153,9 +153,9 @@ job "expose-example" {
for the exposed listener. The port should be configured to [map inside][network-to]
the task's network namespace.
## `expose` Examples
## Examples
The following example is configured to expose the `/metrics` endpoint of the
The following example exposes the `/metrics` endpoint of the
Connect-enabled `count-dashboard` service, using the `HTTP` protocol.
`count-dashboard` is expected to listen inside its namespace to port `9001`, and
external services will be able to reach its `/metrics` endpoint by connecting to
@@ -184,9 +184,7 @@ service {
}
```
## `path` Examples
The following example is an expose configuration that exposes a `/metrics`
The following example exposes a `/metrics`
endpoint using the `http2` protocol (typical for gRPC), and an HTTP `/v2/health`
endpoint.
@@ -209,7 +207,7 @@ proxy {
}
```
### Exposing Service Checks
## Exposing service checks
A common use case for `expose` is for exposing endpoints used in Consul service
check definitions. For these cases the [expose][] parameter in the service check

@@ -1,17 +1,15 @@
---
layout: docs
page_title: gateway Block - Job Specification
page_title: gateway block in the job specification
description: |-
The "gateway" block allows specifying options for configuring Consul Gateways
used in the Consul Connect integration
Configure a Consul mesh, ingress, or terminating gateway in the `gateway` block of the Nomad job specification.
---
# `gateway` Block
# `gateway` block in the job specification
<Placement groups={['job', 'group', 'service', 'connect', 'gateway']} />
The `gateway` block allows configuration of [Consul Connect
Gateways](/consul/docs/connect/gateways). Nomad will automatically create the
The `gateway` block allows configuration of [Consul service mesh gateways](/consul/docs/connect/gateways). Nomad will automatically create the
necessary Gateway [Configuration Entry](/consul/docs/agent/config-entries) as
well as inject an Envoy proxy task into the Nomad job to serve as the Gateway.
@@ -34,7 +32,7 @@ service {
}
```
## `gateway` Parameters
## Parameters
Exactly one of `ingress`, `terminating`, or `mesh` must be configured.
@@ -47,7 +45,7 @@ Exactly one of `ingress`, `terminating`, or `mesh` must be configured.
- `mesh` <code>([mesh]: nil)</code> - Indicates a mesh gateway will be associated
with the service.
### `proxy` Parameters
### `proxy` parameters
- `connect_timeout` `(string: "5s")` - The amount of time to allow when making
upstream connections before timing out. Defaults to 5 seconds. If the upstream
@@ -67,12 +65,12 @@ Exactly one of `ingress`, `terminating`, or `mesh` must be configured.
this map is automatically populated with additional listeners enabling the
Envoy proxy to work from inside the network namespace.
```
envoy_gateway_bind_addresses "<service>" {
address = "0.0.0.0"
port = <port>
}
```
```hcl
envoy_gateway_bind_addresses "<service>" {
address = "0.0.0.0"
port = <port>
}
```
- `envoy_gateway_no_default_bind` `(bool: false)` - Prevents binding to the default
address of the gateway service. This should be used with one of the other options
@@ -87,12 +85,12 @@ envoy_gateway_bind_addresses "<service>" {
- `config` `(map: nil)` - Escape hatch for [Advanced Configuration] of Envoy.
Keys and values support [runtime variable interpolation][interpolation].
#### `address` Parameters
#### `address` parameters
- `address` `(string: required)` - The address to bind to when combined with `port`.
- `port` `(int: required)` - The port to listen to.
### `ingress` Parameters
### `ingress` parameters
- `listener` <code>(array<[listener]> : required)</code> - One or more listeners
that the ingress gateway should setup, uniquely identified by their port
@@ -100,7 +98,7 @@ envoy_gateway_bind_addresses "<service>" {
- `tls` <code>([tls]: nil)</code> - TLS configuration for this gateway.
#### `listener` Parameters
#### `listener` parameters
- `port` `(int: required)` - The port that the listener should receive traffic on.
@@ -115,7 +113,7 @@ envoy_gateway_bind_addresses "<service>" {
- `service` <code>(array<[listener-service]>: required)</code> - One or more services to be
exposed via this listener. For `tcp` listeners, only a single service is allowed.
#### Listener `service` Parameters
#### Listener `service` parameters
The `service` blocks for a listener under an `ingress` gateway accept the
following parameters. Note these are different than the `service` blocks under a
@@ -183,7 +181,7 @@ documentation][response-headers].
removes only headers containing exact matches. Header names are not
case-sensitive.
#### `tls` Parameters
#### `tls` parameters
- `enabled` `(bool: false)` - Set this configuration to enable TLS for every
listener on the gateway. If TLS is enabled, then each host defined in the
@@ -214,7 +212,7 @@ documentation][response-headers].
[`TLSMinVersion`](/consul/docs/connect/config-entries/ingress-gateway#tlsminversion)
in the Consul documentation for supported versions.
### `terminating` Parameters
### `terminating` parameters
- `service` <code>(array<[linked-service]>: required)</code> - One or more services to be
linked with the gateway. The gateway will proxy traffic to these services. These
@@ -222,7 +220,7 @@ documentation][response-headers].
addresses. They must also be registered in the same Consul datacenter as the
terminating gateway.
#### linked `service` Parameters
#### linked `service` parameters
The `service` blocks for a `terminating` gateway accept the following
parameters. Note these are different than the `service` blocks for listeners
@@ -252,7 +250,7 @@ under an `ingress` gateway.
- `sni` `(string: <optional>)` - An optional hostname or domain name to specify
during the TLS handshake.
### `mesh` Parameters
### `mesh` parameters
The `mesh` block currently does not have any configurable parameters.
@@ -260,7 +258,7 @@ The `mesh` block currently does not have any configurable parameters.
the additional piece of service metadata `{"consul-wan-federation":"1"}` must
be applied. This can be done with the service [`meta`][meta] parameter.
### Gateway with host networking
## Gateway with host networking
Nomad supports running gateways using host networking. A static port must be allocated
for use by the [Envoy admin interface](https://www.envoyproxy.io/docs/envoy/latest/operations/admin)
@@ -271,7 +269,7 @@ accessible to any workload running on the same Nomad client. The admin interface
information about the proxy, including a Consul Service Identity token if Consul ACLs
are enabled.
### Specify Envoy image
## Specify Envoy image
The Docker image used for Connect gateway tasks defaults to the official [Envoy
Docker] image, `docker.io/envoyproxy/envoy:v${NOMAD_envoy_version}`, where `${NOMAD_envoy_version}`
@@ -283,12 +281,12 @@ make use of the envoy version interpolation, e.g.
meta.connect.gateway_image = custom/envoy-${NOMAD_envoy_version}:latest
```
### Custom gateway task
## Custom gateway task
The task created for the gateway can be configured manually using the
[`sidecar_task`][sidecar_task] block.
```
```hcl
connect {
gateway {
# ...
@@ -300,9 +298,9 @@ connect {
}
```
### Examples
## Examples
#### ingress gateway
### ingress gateway
```hcl
job "ingress-demo" {
@@ -404,7 +402,7 @@ job "ingress-demo" {
}
```
#### terminating gateway
### terminating gateway
```hcl
job "countdash-terminating" {
@@ -527,7 +525,7 @@ job "countdash-terminating" {
}
```
#### mesh gateway
### mesh gateway
Mesh gateways are useful when Connect services need to make cross-datacenter
requests where not all nodes in each datacenter have full connectivity. This example

View File

@@ -1,12 +1,11 @@
---
layout: docs
page_title: group Block - Job Specification
page_title: group block in the job specification
description: |-
The "group" block defines a series of tasks that should be co-located on the
same Nomad client. Any task within a group will be placed on the same client.
Define a series of co-located tasks in the `group` block of the Nomad job specification. Specify constraints, affinities, spreads, number of instances, Consul settings, ephemeral disks, disconnect strategy, metadata, migration strategy, network requirements, rescheduling strategy, restart policy, service discovery, shutdown delay, task update strategy, Vault policies, and required volumes. Review count, constraints, metadata, network, and service discovery examples.
---
# `group` Block
# `group` block in the job specification
<Placement groups={['job', 'group']} />
@@ -22,7 +21,7 @@ job "docs" {
}
```
## `group` Parameters
## Parameters
- `constraint` <code>([Constraint][]: nil)</code> -
This can be provided multiple times to define additional constraints.
@@ -103,7 +102,7 @@ job "docs" {
- `volume` <code>([Volume][]: nil)</code> - Specifies the volumes that are
required by tasks within the group.
## `group` Examples
## Examples
The following examples only show the `group` blocks. Remember that the
`group` block is only valid in the placements listed above.
@@ -119,7 +118,7 @@ group "example" {
}
```
### Tasks with Constraint
### Tasks with constraint
This example shows two abbreviated tasks with a constraint on the group. This
will restrict the tasks to 64-bit operating systems.
@@ -172,7 +171,7 @@ group "example" {
}
```
### Service Discovery
### Service discovery
This example creates a service in Consul. To read more about service discovery
in Nomad, please see the [Nomad service discovery documentation][service_discovery].

View File

@@ -1,14 +1,14 @@
---
layout: docs
page_title: Expressions - Configuration Language
page_title: HCL expressions reference
description: |-
HCL allows the use of expressions to access data exported
by sources and to transform and combine that data to produce other values.
---
# Expressions
# HCL expressions reference
_Expressions_ are used to refer to or compute values within a configuration.
Use HCL expressions to refer to or compute values within a configuration.
The simplest expressions are just literal values, like `"hello"` or `5`, but
HCL also allows more complex expressions such as arithmetic, conditional
evaluation, and a number of built-in functions.
@@ -21,7 +21,7 @@ feature's documentation describes any restrictions it places on expressions.
The rest of this page describes all of the features of Nomad's
expression syntax.
## Types and Values
## Types and values
The result of an expression is a _value_. All values have a _type_, which
dictates where that value can be used and what transformations can be
@@ -54,14 +54,14 @@ Finally, there is one special value that has _no_ type:
conditional expressions, so you can dynamically omit an argument if a
condition isn't met.
### Advanced Type Details
### Advanced type details
In most situations, lists and tuples behave identically, as do maps and objects.
Whenever the distinction isn't relevant, the Nomad documentation uses each
pair of terms interchangeably (with a historical preference for "list" and
"map").
### Type Conversion
### Type conversion
Expressions are most often used to set values for arguments. In these cases,
the argument has an expected type and the given expression must produce a value
@@ -80,7 +80,7 @@ valid representation of a number or bool value.
- `false` converts to `"false"`, and vice-versa
- `15` converts to `"15"`, and vice-versa
## Literal Expressions
## Literal expressions
A _literal expression_ is an expression that directly represents a particular
constant value. Nomad has a literal expression syntax for each of the value
@@ -121,12 +121,12 @@ types described above:
otherwise. You can use a non-literal expression as a key by wrapping it in
parentheses, like `(var.business_unit_tag_name) = "SRE"`.
### Available Functions
### Available functions
For a full list of available functions, see [the function
reference](/nomad/docs/job-specification/hcl2/functions).
## `for` Expressions
## `for` expressions
A _`for` expression_ creates a complex type value by transforming
another complex type value. Each element in the input value
@@ -181,67 +181,6 @@ together results that have a common key:
{for s in var.list : substr(s, 0, 1) => s... if s != ""}
```
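As a concrete illustration of the grouping form above (the input list here is assumed for the example), keys are the first character of each non-empty string and values are grouped into lists:

```hcl
# Assuming var.list = ["apple", "avocado", "banana", ""]
{for s in var.list : substr(s, 0, 1) => s... if s != ""}

# produces:
#   {
#     a = ["apple", "avocado"]
#     b = ["banana"]
#   }
```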
<!---
## TODO: revamp this section
## Splat Expressions
A _splat expression_ provides a more concise way to express a common operation
that could otherwise be performed with a `for` expression.
If `var.list` is a list of objects that all have an attribute `id`, then a list
of the ids could be produced with the following `for` expression:
```hcl
[for o in var.list : o.id]
```
This is equivalent to the following _splat expression:_
```hcl
var.list[*].id
```
The special `[*]` symbol iterates over all of the elements of the list given to
its left and accesses from each one the attribute name given on its right. A
splat expression can also be used to access attributes and indexes from lists
of complex types by extending the sequence of operations to the right of the
symbol:
```hcl
var.list[*].interfaces[0].name
```
The above expression is equivalent to the following `for` expression:
```hcl
[for o in var.list : o.interfaces[0].name]
```
Splat expressions are for lists only (and thus cannot be used [to reference
resources created with
`for_each`](https://www.terraform.io/docs/configuration/resources.html#referring-to-instances), which
are represented as maps). However, if a splat expression is applied to a value
that is _not_ a list or tuple then the value is automatically wrapped in a
single-element list before processing.
For example, `var.single_object[*].id` is equivalent to
`[var.single_object][*].id`, or effectively `[var.single_object.id]`. This
behavior is not interesting in most cases, but it is particularly useful when
referring to resources that may or may not have `count` set, and thus may or
may not produce a tuple value:
```hcl
aws_instance.example[*].id
```
The above will produce a list of ids whether `aws_instance.example` has `count`
set or not, avoiding the need to revise various other expressions in the
configuration when a particular resource switches to and from having `count`
set.
--->
## `dynamic` blocks
Within top-level block constructs like sources, expressions can usually be used
@@ -296,7 +235,7 @@ job "example" {
...
```
~> **Caveat:** Dynamic blocks are not supported inside blocks that are opaque
**Caveat:** Dynamic blocks are not supported inside blocks that are opaque
to Nomad, such as the `config` attributes in [`task`][task_config],
[`sidecar_task`][sidecar_task_config], [`proxy`][proxy_config], and
[`gateway`][gateway_config], and the group scaling
@@ -436,3 +375,66 @@ literal, to dynamically construct strings from other values.
[proxy_config]: /nomad/docs/job-specification/proxy#config
[sidecar_task_config]: /nomad/docs/job-specification/sidecar_task#config
[task_config]: /nomad/docs/job-specification/task#config
<!---
## TODO: revamp this section
## Splat Expressions
A _splat expression_ provides a more concise way to express a common operation
that could otherwise be performed with a `for` expression.
If `var.list` is a list of objects that all have an attribute `id`, then a list
of the ids could be produced with the following `for` expression:
```hcl
[for o in var.list : o.id]
```
This is equivalent to the following _splat expression:_
```hcl
var.list[*].id
```
The special `[*]` symbol iterates over all of the elements of the list given to
its left and accesses from each one the attribute name given on its right. A
splat expression can also be used to access attributes and indexes from lists
of complex types by extending the sequence of operations to the right of the
symbol:
```hcl
var.list[*].interfaces[0].name
```
The above expression is equivalent to the following `for` expression:
```hcl
[for o in var.list : o.interfaces[0].name]
```
Splat expressions are for lists only (and thus cannot be used [to reference
resources created with
`for_each`](https://www.terraform.io/docs/configuration/resources.html#referring-to-instances), which
are represented as maps). However, if a splat expression is applied to a value
that is _not_ a list or tuple then the value is automatically wrapped in a
single-element list before processing.
For example, `var.single_object[*].id` is equivalent to
`[var.single_object][*].id`, or effectively `[var.single_object.id]`. This
behavior is not interesting in most cases, but it is particularly useful when
referring to resources that may or may not have `count` set, and thus may or
may not produce a tuple value:
```hcl
aws_instance.example[*].id
```
The above will produce a list of ids whether `aws_instance.example` has `count`
set or not, avoiding the need to revise various other expressions in the
configuration when a particular resource switches to and from having `count`
set.
--->

View File

@@ -1,24 +1,18 @@
---
layout: docs
page_title: Configuration Language
page_title: HashiCorp Configuration Language (HCL) reference
description: |-
Noamd uses text files to describe infrastructure and to set variables.
These text files are called Nomad job specifications and are
written in the HCL language.
This section contains reference material for the HashiCorp Configuration Language (HCL) as it relates to defining a Nomad job specification. Learn about HCL syntax elements such as arguments, blocks, and expressions. Review heredoc string support and how to format decimals.
---
# HCL2
# HashiCorp Configuration Language (HCL) reference
Nomad uses the Hashicorp Configuration Language - HCL - designed to allow
concise descriptions of the required steps to get to a job file.
Nomad 1.0 adopts
[HCL2](https://github.com/hashicorp/hcl/blob/hcl2/README.md), the second
generation of HashiCorp Configuration Language. HCL2 extends the HCL language by
adding expressions and input variables support to improve job spec
reusability and readability. Also, the new HCL2 parser improves the error
messages for invalid jobs.
Define your job specification with the HashiCorp Configuration Language (HCL),
which is a syntax specifically designed for building structured configuration
formats. Refer to the [HCL GitHub repository](https://github.com/hashicorp/hcl)
to learn more about HCL.
## HCL Parsing Context
## Parsing context
The [Nomad API uses JSON][jobs-api], not HCL, to represent Nomad jobs.
When running commands like `nomad job run` and `nomad job plan`, the Nomad CLI
@@ -31,7 +25,7 @@ context from the local environment of the CLI (e.g., files, environment variable
[jobs-api]: /nomad/api-docs/jobs
## JSON Jobs
## JSON jobs
Since HCL is a superset of JSON, `nomad job run example.json` will attempt to
parse a JSON job using the HCL parser. However, the JSON format accepted by
@@ -54,7 +48,7 @@ $ nomad job run -json example.json
[json-jobs-api]: /nomad/api-docs/json-jobs
[run-json]: /nomad/docs/commands/job/run#json
## Arguments, Blocks, and Expressions
## Arguments, blocks, and expressions
The syntax of the HCL language consists of only a few basic elements:
@@ -94,7 +88,7 @@ meta {
}
```
Additionally, block attributes must be [HCL2 valid identifiers](https://github.com/hashicorp/hcl/blob/v2.8.0/hclsyntax/spec.md#identifiers).
Additionally, block attributes must be [HCL valid identifiers](https://github.com/hashicorp/hcl/blob/v2.8.0/hclsyntax/spec.md#identifiers).
Generally, identifiers may only contain letters, numbers, underscore `_`,
or a dash `-`, and start with a letter. Notably,
[`meta`](/nomad/docs/job-specification/meta), and
@@ -126,7 +120,7 @@ mounts = [
]
```
Here, the `tmpfs_options` block declaration is invalid HCL2 syntax, and must be an assignment instead:
Here, the `tmpfs_options` block declaration is invalid HCL syntax, and must be an assignment instead:
```hcl
# VALID in Nomad 1.0
@@ -151,7 +145,7 @@ mount {
}
```
### Multiline "here doc" string
### Multiline "heredoc" string
Nomad supports multi-line string literals in the so-called "heredoc" style, inspired by Unix shell languages:
@@ -164,9 +158,9 @@ hello
}
```
HCL2 trims the whitespace preceding the delimiter in the last line. So in the
above example, `data` is read as `"hello\n world\n "` in HCL1, but `"hello\n world\n"` (note lack of trailing whitespace) in HCL2.
HCL trims the whitespace preceding the delimiter in the last line. So in the
above example, `data` is read as `"hello\n world\n "` in HCL1, but as `"hello\n world\n"` (note the lack of trailing whitespace) in HCL2.
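HCL2 also supports an indented heredoc form, `<<-`, which additionally strips the common leading whitespace from every line. A minimal sketch (the `template` block and destination are illustrative):

```hcl
template {
  data        = <<-EOT
    hello
      world
    EOT
  destination = "local/out.txt"
}
```

Here the four spaces of common indentation are trimmed, so `data` is read as `"hello\n  world\n"`.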
### Decimals
HCL2 requires a leading zero for decimal values lower than 1 (e.g. `0.3`, `0.59`, `0.9`).
HCL requires a leading zero for decimal values lower than 1 (e.g. `0.3`, `0.59`, `0.9`).
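A minimal sketch using a hypothetical variable to show the required leading zero:

```hcl
variable "sample_ratio" {
  # VALID: the leading zero is present
  default = 0.3
}

# INVALID: writing `.3` without the leading zero fails to parse
```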

View File

@@ -1,12 +1,12 @@
---
layout: docs
page_title: Local Values - HCL Configuration Language
description: >-
page_title: HCL locals reference
description: |-
Local values assign a name to an expression that can then be used multiple
times within a folder.
---
# Local Values
# HCL locals reference
Local values assign a name to an expression that can then be used multiple
times within a folder.
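For example, a minimal sketch (names and values are illustrative), referencing the local with `local.<name>`:

```hcl
locals {
  app_version = "1.2.3"
}

job "docs" {
  group "example" {
    task "server" {
      driver = "docker"

      config {
        # Reuse the named expression anywhere in the job
        image = "example/app:${local.app_version}"
      }
    }
  }
}
```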

View File

@@ -1,13 +1,13 @@
---
layout: docs
page_title: Syntax - Configuration Language
page_title: HCL syntax reference
description: |-
HCL has its own syntax, intended to combine declarative
structure with expressions in a way that is easy for humans to read and
understand.
---
# HCL Configuration Syntax
# HCL syntax reference
Other pages in this section have described various configuration constructs
that can appear in HCL. This page describes the lower-level syntax of the

View File

@@ -1,12 +1,12 @@
---
layout: docs
page_title: Input Variables - HCL Configuration Language
page_title: HCL input variables reference
description: |-
Input variables are parameters for Nomad jobs.
This page covers configuration syntax for variables.
---
# Input Variables
# HCL input variables reference
Input variables serve as parameters for a Nomad job, allowing aspects of the
job to be customized without altering the job's own source code.
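A minimal sketch declaring a variable and referencing it with `var.<name>` (the variable and image names are illustrative):

```hcl
variable "image_tag" {
  type    = string
  default = "latest"
}

job "docs" {
  group "example" {
    task "server" {
      driver = "docker"

      config {
        image = "example/app:${var.image_tag}"
      }
    }
  }
}
```

The default can then be overridden at submission time, for example with `nomad job run -var 'image_tag=1.2.3' example.nomad.hcl`.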

View File

@@ -1,12 +1,11 @@
---
layout: docs
page_title: identity Block - Job Specification
page_title: identity block in the job specification
description: |-
The "identity" block allows tasks to use their Nomad Workload Identity via an
environment variable or file.
Configure workload identity in the `identity` block of the Nomad job specification. Review how to configure workload identities for Consul and Vault.
---
# `identity` Block
# `identity` block in the job specification
<Placement
groups={[
@@ -57,7 +56,7 @@ job "docs" {
}
```
## `identity` Parameters
## Parameters
- `name` `(string: "default")` - The name of the workload identity, which must
be unique per task. Only one `identity` block in a task can omit the `name`
@@ -95,7 +94,7 @@ job "docs" {
It can be convenient to combine workload identity with Nomad's [Task API]
[taskapi] for enabling tasks to access the Nomad API.
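As a brief sketch, an `identity` block that exposes the workload identity both as an environment variable and as a file in the task's secrets directory:

```hcl
task "example" {
  identity {
    # Expose the identity as the NOMAD_TOKEN environment variable
    env  = true

    # Also write the token to ${NOMAD_SECRETS_DIR}/nomad_token
    file = true
  }
}
```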
## Workload Identities for Consul
## Workload identities for Consul
Jobs that need access to Consul can use Nomad workload identities for
authentication. These identities are specified as additional `identity` blocks
@@ -121,13 +120,14 @@ tasks must have a [`name`](#name) that follows the pattern
In Nomad Community Edition, `<cluster_name>` is always `default`, so the task
identity name should be `consul_default`.
<EnterpriseAlert>
Nomad Enterprise supports multiple Consul clusters. The value of <code>
&lt;cluster_name&gt;</code> must be the same as the <a href="/nomad/docs/job-specification/consul#cluster">
<code>consul.cluster</code></a> value for the task.
</EnterpriseAlert>
<Warning>
Refer to [Nomad Workload Identities][int_consul_wid] section of the Consul
Nomad Enterprise supports multiple Consul clusters. The value of
`cluster_name` must be the same as the task's [`consul.cluster` parameter value](/nomad/docs/job-specification/consul#cluster).
</Warning>
Refer to the [Nomad workload identities][int_consul_wid] section of the Consul
integration documentation for more information.
<Tabs>
@@ -177,7 +177,7 @@ job "httpd" {
</CodeBlockConfig>
</Tab>
<Tab heading="Nomad Enterprise" group="ent">
<CodeBlockConfig highlight="3-5,16-19,34-38">
<CodeBlockConfig highlight="3-5,16-19,32-36">
```hcl
job "httpd" {
@@ -225,7 +225,7 @@ job "httpd" {
</Tab>
</Tabs>
## Workload Identities for Vault
## Workload identities for Vault
Jobs that need access to Vault can use Nomad workload identities for
authentication. These identities are specified as additional `identity` blocks
@@ -243,11 +243,12 @@ override the default identity configured in the Nomad servers. The identity
In Nomad Community Edition, `<cluster_name>` is always `default`, so the
identity name should be `vault_default`.
<EnterpriseAlert>
Nomad Enterprise supports multiple Vault clusters. The value of <code>
&lt;cluster_name&gt;</code> must be the same as the <a href="/nomad/docs/job-specification/vault#cluster">
<code>vault.cluster</code></a> value for the task.
</EnterpriseAlert>
<Warning>
Nomad Enterprise supports multiple Vault clusters. The cluster name
must be the same as the task's [`vault.cluster` parameter value](/nomad/docs/job-specification/vault#cluster).
</Warning>
Refer to the [Nomad workload identities][int_vault_wid] section of the Vault
integration documentation for more information.

View File

@@ -1,10 +1,11 @@
---
layout: docs
page_title: Job Specification
description: Learn about the Job specification used to submit jobs to Nomad.
page_title: Nomad job specification
description: |-
This section contains reference information for configuring a Nomad job with HCL. Learn what a job specification is and its format. Review an example job specification.
---
# Job Specification
# Nomad job specification
The Nomad job specification (or "jobspec" for short) defines the schema for
Nomad jobs. Nomad jobs are specified in [HCL][], which aims to strike a balance
@@ -133,7 +134,7 @@ job "docs" {
```
The `service` block can also be specified at the group level. This allows job specification authors
to create and register services with Consul Service Mesh. A service block specified at the group
to create and register services with Consul Service Mesh. A service block specified at the group
level must include a [`connect`][] block, like the following snippet.
```hcl

View File

@@ -1,13 +1,12 @@
---
layout: docs
page_title: job Block - Job Specification
page_title: job block in the job specification
description: |-
The "job" block is the top-most configuration option in the job
specification. A job is a declarative specification of tasks that Nomad
should run.
Define your workload in the `job` block of the Nomad job specification. Specify constraints, affinities, spreads, datacenter, node pool, group, metadata, job name, node migration strategy, namespace, priority, region, rescheduling strategy, job type, task update strategy, and Vault policies.
Configure parameterized and periodic jobs. Review examples that use Docker, batch jobs, and secrets.
---
# `job` Block
# `job` block in the job specification
<Placement groups={['job']} />
@@ -28,7 +27,7 @@ job "docs" {
group "example" {
# ...
task "docs" {
# ...
}
@@ -56,7 +55,7 @@ job "docs" {
}
```
## `job` Parameters
## Parameters
- `all_at_once` `(bool: false)` - Controls whether the scheduler can make
partial placements if optimistic scheduling resulted in an oversubscribed
@@ -151,12 +150,12 @@ job "docs" {
accidentally. Users should set the `CONSUL_HTTP_TOKEN` environment variable when
running the job instead.
## `job` Examples
## Examples
The following examples only show the `job` blocks. Remember that the
`job` block is only valid in the placements listed above.
### Docker Container
### Docker container
This example job starts a Docker container which runs as a service. Even though
the type is not specified as "service", that is the default job type.
@@ -179,7 +178,7 @@ job "docs" {
}
```
### Batch Job
### Batch job
This example job executes the `uptime` command on 10 Nomad clients in the fleet,
restricting the eligible nodes to Linux machines.
@@ -205,7 +204,7 @@ job "docs" {
}
```
### Consuming Secrets
### Consuming secrets
This example shows a job which retrieves secrets from Vault and writes those
secrets to a file on disk, which the application then consumes. Nomad handles

View File

@@ -1,12 +1,11 @@
---
layout: docs
page_title: lifecycle Block - Job Specification
page_title: lifecycle block in the job specification
description: |-
The "lifecycle" block configures when a task is run within the lifecycle of a
task group
Configure when Nomad runs a task in the `lifecycle` block of the Nomad job configuration. Specify a lifecycle hook of prestart, poststart, or poststop. Define a task as ephemeral or as a long-lived sidecar. Review examples of initialization, cleanup, and auxiliary tasks.
---
# `lifecycle` Block
# `lifecycle` block in the job specification
<Placement groups={['job', 'group', 'task', 'lifecycle']} />
@@ -28,7 +27,7 @@ and should not be restarted if it completes successfully.
Learn more about [Nomad's task dependencies][learn-taskdeps].
## `lifecycle` Parameters
## Parameters
- `hook` `(string: <required>)` - Specifies when a task should be run within
the lifecycle of a group. The following hooks are available:
@@ -47,11 +46,11 @@ Learn more about [Nomad's task dependencies][learn-taskdeps].
[learn-taskdeps]: /nomad/tutorials/task-deps
## Lifecycle Examples
## Examples
The following include examples of archetypal lifecycle patterns.
### Init Task Pattern
### Init task pattern
Init tasks are useful for performing initialization steps that can't be more easily
accomplished using [`template`](/nomad/docs/job-specification/template) or
@@ -80,7 +79,7 @@ until the upstream database service is listening on the expected port:
}
```
### Companion Sidecar Pattern
### Companion sidecar pattern
Companion or sidecar tasks run alongside the main task to perform an auxiliary
task. Common examples include proxies and log shippers. These tasks benefit from
@@ -110,7 +109,7 @@ coupling.
}
```
### Cleanup Task Pattern
### Cleanup task pattern
Poststop tasks run after the main tasks have stopped. They are useful for performing
post-processing that isn't available in the main tasks or for recovering from

View File

@@ -1,13 +1,11 @@
---
layout: docs
page_title: logs Block - Job Specification
page_title: logs block in the job specification
description: |-
The "logs" block configures the log rotation policy for a task's stdout and
stderr. Logging is enabled by default with reasonable defaults. The "logs" block
allows for finer-grained control over how Nomad handles log files.
Configure a task's log rotation policy in the `logs` block of the Nomad job specification. Set the maximum number of rotated files and the maximum size of each rotated file. Turn off log collection for a task. Review default log rotation values.
---
# `logs` Block
# `logs` block in the job specification
<Placement groups={['job', 'group', 'task', 'logs']} />
@@ -45,7 +43,7 @@ the [ephemeral disk documentation][] for more information.
For information on how to interact with logs after they have been configured,
please see the [`nomad alloc logs`][logs-command] command.
## `logs` Parameters
## Parameters
- `max_files` `(int: 10)` - Specifies the maximum number of rotated files Nomad
will retain for `stdout` and `stderr`. Each stream is tracked individually, so
@@ -67,12 +65,12 @@ please see the [`nomad alloc logs`][logs-command] command.
option. If the task driver's `disable_log_collection` option is set to `true`,
it will override `disabled=false` in the task's `logs` block.
## `logs` Examples
## Examples
The following examples only show the `logs` blocks. Remember that the
`logs` block is only valid in the placements listed above.
### Configure Defaults
### Configure defaults
This example shows a default logging configuration. Yes, it is empty on purpose.
Nomad automatically enables logging with reasonable defaults as described in the

View File

@@ -1,10 +1,11 @@
---
layout: docs
page_title: meta Block - Job Specification
description: The "meta" block allows for user-defined arbitrary key-value pairs.
page_title: meta block in the job specification
description: |-
Define arbitrary metadata in the `meta` block of the Nomad job specification. Configure runtime environment variables for tasks to consume. Review examples that use Nomad interpolation and templates.
---
# `meta` Block
# `meta` block in the job specification
<Placement
groups={[
@@ -43,14 +44,14 @@ group layer applies to all tasks within that group.
Meta values are made available inside tasks as [runtime environment variables][env_meta].
## `meta` Parameters
## Parameters
The "parameters" for the `meta` block can be any key-value pair. The keys and values
are both of type `string`, but they can be specified as other types. They will
automatically be converted to strings. Any character in a key other than
`[A-Za-z0-9_.]` will be converted to `_`.
## `meta` Examples
## Examples
The following examples only show the `meta` blocks. Remember that the
`meta` block is only valid in the placements listed above.
@@ -92,9 +93,7 @@ meta = {
}
```
## `meta` Usage Examples
### Templates
## Template usage example
To make use of a `meta` value in a template, refer to its environment variable
form.
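For example, a `meta` value named `my_key` is available to a `template` block through its `NOMAD_META_` environment variable form:

```hcl
meta {
  my_key = "my-value"
}

template {
  data        = <<EOT
the value is {{ env "NOMAD_META_my_key" }}
EOT
  destination = "local/output.txt"
}
```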

View File

@@ -1,13 +1,12 @@
---
layout: docs
page_title: migrate Block - Job Specification
page_title: migrate block in the job specification
description: |-
The "migrate" block specifies the group's migrate strategy. The migrate
strategy is used to control the job's behavior when it is being migrated off
Define the group's allocation migration strategy in the `migrate` block of the Nomad job specification. The migration strategy is used to control the job's behavior when it is being migrated off
of a draining node.
---
# `migrate` Block
# `migrate` block in the job specification
<Placement
groups={[
@@ -53,7 +52,7 @@ for system operators to put hard limits on how long a drain may take.
See the [Workload Migration Guide](/nomad/tutorials/manage-clusters/node-drain) for details
on node draining.
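As a rough sketch of the shape of this block (values are illustrative, not recommendations):

```hcl
migrate {
  # Migrate one allocation at a time off a draining node.
  max_parallel = 1

  # Consider a migrated allocation healthy once its checks pass.
  health_check     = "checks"
  min_healthy_time = "10s"
  healthy_deadline = "5m"
}
```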
## `migrate` Parameters
## Parameters
- `max_parallel` `(int: 1)` - Specifies the number of allocations that can be
migrated at the same time. This number must be less than the total

View File

@@ -1,25 +1,24 @@
---
layout: docs
page_title: multiregion Block - Job Specification
page_title: multiregion block in the job specification
description: |-
The "multiregion" block specifies that a job will be deployed to multiple federated
regions.
Deploy a job to multiple federated regions in the `multiregion` block of the Nomad job specification. Learn about multi-region deployment states, parameterized dispatch, and periodic time zones. Configure region count, datacenters, metadata, and node pool. Specify region rollout strategy parameters. Review examples of max parallel, rollback regions, override counts, and merging metadata.
---
# `multiregion` Block
# `multiregion` block in the job specification
<Placement groups={[['job', 'multiregion']]} />
<EnterpriseAlert />
<Placement groups={[['job', 'multiregion']]} />
The `multiregion` block specifies that a job will be deployed to multiple
[federated regions]. If omitted, the job will be deployed to a single region—the
one specified by the `region` field or the `-region` command line flag to
`nomad job run`.
<EnterpriseAlert product="nomad"/>
Federated Nomad clusters are members of the same gossip cluster but not the
same raft cluster; they don't share their data stores. Each region in a
multiregion deployment gets an independent copy of the job, parameterized with
multi-region deployment gets an independent copy of the job, parameterized with
the values of the `region` block. Nomad regions coordinate to roll out each
region's deployment using rules determined by the `strategy` block.
@@ -51,7 +50,7 @@ job "docs" {
}
```
## Multiregion Deployment States
## Multi-region deployment states
A single region deployment using one of the various [upgrade strategies]
begins in the `running` state, and ends in the `successful` state, the
@@ -60,14 +59,14 @@ complete), or the `failed` state. A failed single region deployment may
automatically revert to the previous version of the job if its `update`
block has the [`auto_revert`][update-auto-revert] setting.
In a multiregion deployment, regions begin in the `pending` state. This allows
In a multi-region deployment, regions begin in the `pending` state. This allows
Nomad to determine that all regions have accepted the job before
continuing. At this point up to `max_parallel` regions will enter `running` at
a time. When each region completes its local deployment, it enters a `blocked`
state where it waits until the last region has completed the deployment. The
final region will unblock the regions to mark them as `successful`.
## Parameterized Dispatch
## Parameterized dispatch
Job dispatching is region specific. While a [parameterized job] can be
registered in multiple [federated regions] like any other job, a parameterized
@@ -76,33 +75,33 @@ Operators are expected to invoke the job by invoking [`job dispatch`]
from the CLI or the [HTTP API] and provide the appropriate dispatch options
for that region.
## Periodic Time Zones
## Periodic time zones
Multiregion periodic jobs share [time zone] configuration, with UTC being the
default. Operators should be mindful of this when registering multiregion jobs.
Multi-region periodic jobs share [time zone] configuration, with UTC being the
default. Operators should be mindful of this when registering multi-region jobs.
For example, a periodic configuration that specifies the job should run every
night at midnight New York time may result in an undesirable execution time
if one of the target regions is set to Tokyo time.
## `multiregion` Parameters
## Parameters
- `strategy` <code>([Strategy](#strategy-parameters): nil)</code> - Specifies
a rollout strategy for the regions.
- `region` <code>([Region](#region-parameters): nil)</code> - Specifies the
parameters for a specific region. This can be specified multiple times to
define the set of regions for the multiregion deployment. Regions are
define the set of regions for the multi-region deployment. Regions are
ordered; depending on the rollout strategy Nomad may roll out to each region
in order or to several at a time.
~> **Note:** Regions can be added, but regions that are removed will not be
stopped and will be ignored by the deployment. This behavior may change before
multiregion deployments are considered GA.
multi-region deployments are considered GA.
### `strategy` Parameters
### `strategy` parameters
- `max_parallel` `(int: <optional>)` - Specifies the maximum number
of region deployments that a multiregion will have in a running state at a
of region deployments that a multi-region deployment will have in a running state at a
time. By default, Nomad will deploy all regions simultaneously.
- `on_failure` `(string: <optional>)` - Specifies the behavior when a region
@@ -110,8 +109,8 @@ multiregion deployments are considered GA.
the default (empty `""`). This field and its interactions with the job's
[`update` block] is described in the [examples] below.
Each region within a multiregion deployment follows the `auto_revert`
strategy of its own `update` block (if any). The multiregion `on_failure`
Each region within a multi-region deployment follows the `auto_revert`
strategy of its own `update` block (if any). The multi-region `on_failure`
field tells Nomad how many other regions should be marked as failed when one
region's deployment fails:
@@ -133,7 +132,7 @@ multiregion deployments are considered GA.
`system` scheduler will be updated to support `on_failure` when the
[`update` block] is fully supported for system jobs in a future release.
### `region` Parameters
### `region` parameters
The name of a region must match the name of one of the [federated regions].
@@ -157,12 +156,12 @@ The name of a region must match the name of one of the [federated regions].
As described above, the parameters for each region replace the default values
for the field with the same name for each region.
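As an illustrative sketch, overriding defaults for a single region might look like this (region and datacenter names are hypothetical):

```hcl
multiregion {
  region "west" {
    # Overrides the job-level count and datacenters
    # for allocations placed in this region.
    count       = 2
    datacenters = ["us-west-1"]

    meta {
      tier = "primary"
    }
  }
}
```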
## `multiregion` Examples
## Examples
The following examples only show the `multiregion` block and the other
The following examples only show the `multiregion` block and the other
blocks it might be interacting with.
### Max Parallel
### Max parallel
This example shows the use of `max_parallel`. This job will deploy first to
the "north" and "south" regions. If either "north" finishes and enters the
@@ -170,7 +169,7 @@ the "north" and "south" regions. If either "north" finishes and enters the
`running` state at any given time.
```hcl
multiregion {
multiregion {
strategy {
max_parallel = 2
@@ -183,7 +182,7 @@ multiregion {
}
```
### Rollback Regions
### Rollback regions
This example shows the default value of `on_failure`. Because `max_parallel = 1`,
the "north" region will deploy first, followed by "south", and so on. But
@@ -212,7 +211,7 @@ update {
}
```
### Override Counts
### Override counts
This example shows how the `count` field overrides the default `count` of the
task group. The job then deploys 2 "worker" and 1 "controller" allocations to
@@ -241,7 +240,7 @@ group "controller" {
}
```
### Merging Meta
### Merging meta
This example shows how the `meta` block is merged with the `meta` field of the job,
group, and task. A task in "west" will have the values
@@ -281,7 +280,7 @@ group "worker" {
[federated regions]: /nomad/tutorials/manage-clusters/federation
[`update` block]: /nomad/docs/job-specification/update
[update-auto-revert]: /nomad/docs/job-specification/update#auto_revert
[examples]: #multiregion-examples
[examples]: #examples
[upgrade strategies]: /nomad/tutorials/job-updates
[`nomad deployment unblock`]: /nomad/docs/commands/deployment/unblock
[parameterized job]: /nomad/docs/job-specification/parameterized

View File

@@ -1,12 +1,11 @@
---
layout: docs
page_title: network Block - Job Specification
page_title: network block in the job specification
description: |-
The "network" block specifies the networking requirements for the task group,
including networking mode and port allocations.
Configure task group networking requirements in the `network` block of the Nomad job specification. Configure network mode, hostname, port, DNS, and Container Network Interface (CNI) arguments. Review configuration examples of dynamic ports, static ports, mapped ports, bridge mode, host networks, DNS, and CNI networks.
---
# `network` Block
# `network` block in the job specification
<Placement groups={[['job', 'group', 'network']]} />
@@ -59,7 +58,7 @@ All other operating systems use the `host` networking mode.
only. Refer to the [Bridge networking][docs_networking_bridge] documentation
for more information.
## `network` Parameters
## Parameters
- `mbits` <code>([_deprecated_](/nomad/docs/upgrade/upgrade-specific#nomad-0-12-0) int: 10)</code> - Specifies the bandwidth required in MBits.
@@ -92,7 +91,7 @@ All other operating systems use the `host` networking mode.
- `cni` <code>([CNIConfig](#cni-parameters): nil)</code> - Sets the custom CNI
arguments for a network configuration per allocation, for use with `mode="cni/*`.
### `port` Parameters
### `port` parameters
- `static` `(int: nil)` - Specifies the static TCP/UDP port to allocate. If omitted, a
dynamic port is chosen. We **do not recommend** using static ports, except
@@ -128,7 +127,7 @@ When the task starts, it will be passed the following environment variables:
The label of the port is just text - it has no special meaning to Nomad.
## `dns` Parameters
## `dns` parameters
- `servers` `(array<string>: nil)` - Sets the DNS nameservers the allocation uses for name resolution.
- `searches` `(array<string>: nil)` - Sets the search list for hostname lookup
@@ -136,7 +135,7 @@ The label of the port is just text - it has no special meaning to Nomad.
These parameters support [interpolation](/nomad/docs/runtime/interpolation).
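A minimal sketch of these parameters used together (addresses and domains are placeholders):

```hcl
network {
  dns {
    # Nameservers the allocation uses for resolution.
    servers = ["10.0.0.2", "10.0.0.3"]

    # Appended to unqualified hostnames during lookup.
    searches = ["internal.example.com"]
  }
}
```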
## `cni` Parameters
## `cni` parameters
- `args` `(map<string><string>: nil)` - Sets CNI arguments for network configuration.
These get turned into `CNI_ARGS` per the
@@ -144,12 +143,12 @@ These parameters support [interpolation](/nomad/docs/runtime/interpolation).
These parameters support [interpolation](/nomad/docs/runtime/interpolation).
## `network` Examples
## Examples
The following examples only show the `network` blocks. Remember that the
`network` block is only valid in the placements listed above.
### Dynamic Ports
### Dynamic ports
This example specifies a dynamic port allocation for the port labeled "http".
Dynamic ports are allocated in a range from `20000` to `32000`.
@@ -173,7 +172,7 @@ network {
}
```
### Static Ports
### Static ports
Static ports place your job on a host where the port is not already reserved
by another job with the same port.
@@ -191,7 +190,7 @@ network {
For programs that support the `SO_REUSEPORT` unix socket option,
you may set `ignore_collision = true` to place multiple copies on a single node.
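For example, a sketch of a static port that tolerates collisions (assuming the workload itself sets `SO_REUSEPORT`):

```hcl
network {
  port "http" {
    static = 8080

    # Allow multiple allocations to bind this port on one node;
    # only safe for programs that use SO_REUSEPORT.
    ignore_collision = true
  }
}
```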
### Mapped Ports
### Mapped ports
Some drivers (such as [Docker][docker-driver] and [QEMU][qemu-driver]) allow you
to map ports. A mapped port means that your application can listen on a fixed
@@ -224,7 +223,7 @@ When the task is started, it is passed an additional environment variable named
`NOMAD_HOST_PORT_http` which indicates the host port that the HTTP service is
bound to.
### Bridge Mode
### Bridge mode
Bridge mode allows compatible tasks to share a networking stack and interfaces. Nomad
can then do port mapping without relying on individual task drivers to implement port
@@ -313,7 +312,7 @@ network {
The Nomad client will build the correct [capabilities arguments](https://github.com/containernetworking/cni/blob/v0.8.0/CONVENTIONS.md#well-known-capabilities) for the portmap plugin based on the defined port blocks.
### CNI Args
### CNI args
The following example specifies CNI args for the custom CNI plugin specified above.
@@ -331,7 +330,7 @@ network {
}
```
### Host Networks
### Host networks
In some cases a port should only be allocated to a specific interface or address on the host.
The `host_network` field of a port will constrain port allocation to a single named host network.
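A hedged sketch, assuming a host network named "public" is defined in the client agent configuration:

```hcl
network {
  port "https" {
    static = 443

    # Only allocate this port on the named host network.
    host_network = "public"
  }
}
```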

View File

@@ -1,11 +1,11 @@
---
layout: docs
page_title: numa Block - Job Specification
page_title: numa block in the job specification
description: |-
The "numa" block is used configure NUMA aware scheduling strategy for a task.
Define a NUMA-aware task scheduling strategy in the `numa` block of the Nomad job specification. Configure the affinity strategy and specify a list of devices that must be colocated on the same NUMA node.
---
# `numa` Block
# `numa` block in the job specification
<Placement groups={['job', 'group', 'task', 'resources', 'numa']} />
@@ -14,16 +14,9 @@ while taking the [NUMA hardware topology][numa_wiki] of a node into consideratio
Workloads that are sensitive to memory latency can perform significantly better
when pinned to CPU cores on the same NUMA node.
<EnterpriseAlert>
This functionality only exists in Nomad Enterprise. This is not
present in the source available version of Nomad.
</EnterpriseAlert>
<Note>
NUMA aware scheduling is currently limited to Linux.
</Note>
<EnterpriseAlert product="nomad"/>
```hcl
job "example" {
@@ -48,7 +41,7 @@ in a compatible configuration.
Configuring the `numa` block requires the task specifies CPU resources using
the [`cores`][cores] parameter.
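For instance, a minimal pairing of `cores` with the `numa` block might look like this sketch:

```hcl
resources {
  # NUMA-aware scheduling requires cores, not cpu (MHz).
  cores = 4

  numa {
    affinity = "require"
  }
}
```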
# `numa` Parameters
## Parameters
- `affinity` `(string: "none")` - Specifies the strategy Nomad will use when
selecting CPU cores to assign to a task. Possible values are `"none"`,
@@ -79,7 +72,7 @@ the [`cores`][cores] parameter.
efficient use of available resources.
</Note>
## `numa` Examples
## Examples
This example will allocate a `1080ti` GPU and ensure it is on the same NUMA node
as the 4 CPU cores reserved for the task.

View File

@@ -1,6 +1,6 @@
---
layout: docs
page_title: parameterized Block - Job Specification
page_title: parameterized block in the job specification
description: |-
A parameterized job is used to encapsulate a set of work that can be carried
out on various inputs much like a function definition. When the
@@ -8,7 +8,7 @@ description: |-
cluster as a whole.
---
# `parameterized` Block
# `parameterized` block in the job specification
<Placement groups={['job', 'parameterized']} />
@@ -56,11 +56,11 @@ job "docs" {
See the [multiregion] documentation for additional considerations when
dispatching parameterized jobs.
## `parameterized` Requirements
## Requirements
- The job's [scheduler type][batch-type] must be `batch` or `sysbatch`.
## `parameterized` Parameters
## Parameters
- `meta_optional` `(array<string>: nil)` - Specifies the set of metadata keys that
may be provided when dispatching against the job.
@@ -79,11 +79,11 @@ dispatching parameterized jobs.
- `"forbidden"` - A payload is forbidden when dispatching against the job.
## `parameterized` Examples
## Examples
The following examples show non-runnable parameterized jobs:
### Required Inputs
### Required inputs
This example shows a parameterized job that requires both a payload and
metadata:
@@ -121,7 +121,7 @@ job "video-encode" {
}
```
### Metadata Interpolation
### Metadata interpolation
```hcl
job "email-blast" {
@@ -166,9 +166,9 @@ Nomad processes a periodic with parameterized job in the following order:
2. After Nomad dispatches the parameterized job and gives it parameters, Nomad uses the periodic configuration.
3. Nomad dispatches new jobs according to the periodic configuration that uses the parameters from the triggering parameterized job.
In this example, the periodic job does not trigger any new jobs
until the operator dispatches the parameterized job at least once. After that, the
dispatched child periodically triggers more children with the given parameters.
```hcl
periodic {
@@ -188,7 +188,7 @@ dispatched child periodically triggers more children with the given parameters.
There are three columns plus comments in this example output, which is for the preceding periodic, parameterized example job. Scroll to the last column to review the comments.
```
$ nomad job status
ID Type Submit Date
sync batch/periodic/parameterized 2024-11-07T10:43:30+01:00 // Original submitted job
sync/dispatch-1730972650-247c6e97 batch/periodic 2024-11-07T10:44:10+01:00 // First dispatched job with parameters A
@@ -209,7 +209,7 @@ If you need to force the periodic job, force the corresponding parameterized
This example forces the first dispatched job with parameters A from the preceding example.
```
$ nomad job periodic force sync/dispatch-1730972650-247c6e97
```
[batch-type]: /nomad/docs/job-specification/job#type 'Batch scheduler type'

View File

@@ -1,13 +1,13 @@
---
layout: docs
page_title: periodic Block - Job Specification
page_title: periodic block in the job specification
description: |-
The "periodic" block allows a job to run at fixed times, dates, or intervals.
The easiest way to think about the periodic scheduler is "Nomad cron" or
"distributed cron".
---
# `periodic` Block
# `periodic` block in the job specification
<Placement groups={['job', 'periodic']} />
@@ -27,14 +27,14 @@ job "docs" {
The periodic expression by default evaluates in the **UTC timezone** to ensure
consistent evaluation when Nomad spans multiple time zones.
## `periodic` Requirements
## Requirements
- The job's [scheduler type][batch-type] must be `batch` or `sysbatch`.
- A job cannot be updated to become periodic. To transition an existing job to be periodic, you must first run `nomad stop -purge «job name»`. This is expected behavior and ensures that an operator makes this change intentionally.
Refer to the [parameterized] documentation for how to use parameters with a periodic job.
## `periodic` Parameters
## Parameters
- `cron` (_deprecated_: Replaced by `crons` in 1.6.2) `(string)` - Specifies a cron expression configuring the
interval to launch the job. In addition to [cron-specific formats][cron], this
@@ -61,12 +61,12 @@ Refer to the [parameterized] documentation for how to use parameters with a peri
prevents this job from running on the `cron` schedule but prevents force
launches.
## `periodic` Examples
## Examples
The following examples only show the `periodic` blocks. Remember that the
`periodic` block is only valid in the placements listed above.
### Run Daily
### Run daily
This example shows running a periodic job daily:
@@ -76,7 +76,7 @@ periodic {
}
```
### Set Time Zone
### Set time zone
This example shows setting a time zone for the periodic job to evaluate in:
@@ -99,7 +99,7 @@ periodic {
}
```
## Daylight Saving Time
## Daylight saving time
Though Nomad supports configuring `time_zone`, we strongly recommend that periodic
jobs are specified with respect to UTC `time_zone`. Only customize `time_zone`

View File

@@ -1,12 +1,12 @@
---
layout: docs
page_title: proxy Block - Job Specification
page_title: proxy block in the job specification
description: |-
The "proxy" block allows specifying options for configuring
sidecar proxies used in Consul Connect integration
---
# `proxy` Block
# `proxy` block in the job specification
<Placement
groups={['job', 'group', 'service', 'connect', 'sidecar_service', 'proxy']}
@@ -47,7 +47,7 @@ job "countdash" {
}
```
## `proxy` Parameters
## Parameters
- `config` `(map: nil)` - Proxy configuration that is opaque to Nomad and passed
directly to Consul. See [Consul Connect documentation][envoy_dynamic_config]
@@ -68,7 +68,7 @@ job "countdash" {
- `upstreams` <code>([upstreams][]: nil)</code> - Used to configure details of
each upstream service that this sidecar proxy communicates with.
## `proxy` Examples
## Examples
The following example is a proxy specification that includes upstreams
configuration.

View File

@@ -1,6 +1,6 @@
---
layout: docs
page_title: reschedule Block - Job Specification
page_title: reschedule block in the job specification
description: >-
The "reschedule" block specifies the group's rescheduling strategy upon
@@ -13,7 +13,7 @@ description: >-
have been exceeded.
---
# `reschedule` Block
# `reschedule` block in the job specification
<Placement
groups={[
@@ -56,7 +56,7 @@ job "docs" {
~> The reschedule block does not apply to `system` or `sysbatch` jobs because
they run on every node.
## `reschedule` Parameters
## Parameters
- `attempts` `(int: <varies>)` - Specifies the number of reschedule attempts
allowed in the configured interval. Defaults vary by job type, see below
@@ -93,7 +93,7 @@ Information about reschedule attempts are displayed in the CLI and API for
allocations. Rescheduling is enabled by default for service and batch jobs
with the options shown below.
### `reschedule` Parameter Defaults
### Parameter defaults
The values for the `reschedule` parameters vary by job type. Below are the
defaults by job type:

View File

@@ -1,12 +1,12 @@
---
layout: docs
page_title: resources Block - Job Specification
page_title: resources block in the job specification
description: |-
The "resources" block describes the requirements a task needs to execute.
Resource requirements include memory, cpu, and more.
---
# `resources` Block
# `resources` block in the job specification
<Placement groups={['job', 'group', 'task', 'resources']} />
@@ -30,7 +30,7 @@ job "docs" {
}
```
## `resources` Parameters
## Parameters
- `cpu` `(int: 100)` - Specifies the CPU required to run this task in MHz.
@@ -61,7 +61,7 @@ job "docs" {
tmpfs is unsupported, because it will still be counted for scheduling
purposes.
## `resources` Examples
## Examples
The following examples only show the `resources` blocks. Remember that the
`resources` block is only valid in the placements listed above.
@@ -103,7 +103,7 @@ resources {
}
}
```
## Memory Oversubscription
## Memory oversubscription
Setting task memory limits requires balancing the risk of interrupting tasks
against the risk of wasting resources. If a task memory limit is set too low,

View File

@@ -1,10 +1,10 @@
---
layout: docs
page_title: restart Block - Job Specification
page_title: restart block in the job specification
description: The "restart" block configures a group's behavior on task failure.
---
# `restart` Block
# `restart` block in the job specification
<Placement
groups={[
@@ -69,7 +69,7 @@ Because sidecar tasks don't accept a `restart` block, it's recommended
that you set the `restart` for jobs with sidecar tasks at the task
level, so that the Connect sidecar can inherit the default `restart`.
## `restart` Parameters
## Parameters
- `attempts` `(int: <varies>)` - Specifies the number of restarts allowed in the
configured interval. Defaults vary by job type, see below for more
@@ -94,7 +94,7 @@ templates when a task is restarted. If set to `true`, all templates will be re-r
when the task restarts. This can be useful for re-fetching Vault secrets, even if the
lease on the existing secrets has not yet expired.
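A sketch of a restart block that opts into re-rendering (the other values are illustrative):

```hcl
restart {
  attempts = 3
  interval = "10m"
  delay    = "15s"
  mode     = "fail"

  # Re-render all templates, re-fetching Vault secrets,
  # whenever the task restarts.
  render_templates = true
}
```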
### `restart` Parameter Defaults
### Parameter defaults
The values for many of the `restart` parameters vary by job type. Here are the
defaults by job type:
@@ -138,7 +138,7 @@ job "docs" {
}
```
### `mode` Values
### `mode` values
This section details the specific values for the "mode" parameter of the `restart`
block in the Nomad job specification. The mode is always specified as a string:
@@ -160,7 +160,7 @@ restart {
allocation according to the
[`reschedule`] block.
### `restart` Examples
### Examples
With the following `restart` block, a failing task will restart 3
times with 15 seconds between attempts, and then wait 10 minutes

View File

@@ -1,10 +1,10 @@
---
layout: docs
page_title: scaling Block - Job Specification
description: The "scaling" block allows specifying scaling policy for a task group
page_title: scaling block in the job specification
description: The `scaling` block allows specifying scaling policy for a task group
---
# `scaling` Block
# `scaling` block in the job specification
<Placement
groups={[
@@ -89,7 +89,7 @@ job "example" {
}
```
## `scaling` Parameters
## Parameters
- `min` - <code>(int: nil)</code> - The minimum acceptable count for the task group.
This should be honored by the external autoscaler. It will also be honored by Nomad

View File

@@ -1,23 +1,23 @@
---
layout: docs
page_title: schedule Block - Job Specification
page_title: schedule block in the job specification
description: |-
Time based task execution is enabled by using the "schedule" task block.
Time based task execution is enabled by using the `schedule` task block.
---
# `schedule` Block
<EnterpriseAlert />
# `schedule` block in the job specification
<Placement groups={['job', 'group', 'task', 'periodic']} />
~> **Note:** Time based task execution is an experimental feature and subject
to change. This feature is supported for select customers. Please refer to the
[Upgrade Guide][upgrade] to find breaking changes.
Time based task execution is enabled by using the `schedule` block. The
`schedule` block controls when a task is allowed to be running.
<EnterpriseAlert product="nomad"/>
Unlike [`periodic`][periodic] jobs, the `schedule` block applies to individual
tasks. The Nomad Client starts and stops tasks at the specified time without
interaction with the Nomad Servers. This means time based task execution works
@@ -40,7 +40,7 @@ job "docs" {
}
```
## `schedule` Parameters
## Parameters
The `schedule` block must have a `cron` block containing:

View File

@@ -1,12 +1,12 @@
---
layout: docs
page_title: service Block - Job Specification
page_title: service block in the job specification
description: |-
The "service" block instructs Nomad to register the task as a service using
The `service` block instructs Nomad to register the task as a service using
the Nomad or Consul service discovery integration.
---
# `service` Block
# `service` block in the job specification
<Placement
groups={[
@@ -79,7 +79,7 @@ The `service` block can also be at task group level.
This enables services in the same task group to opt into [Consul
Service Mesh][connect] integration.
## `service` Parameters
## Parameters
- `provider` `(string: "consul")` - Specifies the service registration provider
to use for service registrations. Valid options are either `consul` or
@@ -238,7 +238,7 @@ Service Mesh][connect] integration.
be omitted entirely.
## `service` Lifecycle
## Lifecycle
Nomad manages registering, updating, and deregistering services with the
service provider. It is important to understand when each of these steps
@@ -273,12 +273,12 @@ following order:
4. If the task has not exited after the [`kill_timeout`][killtimeout], Nomad
will force kill the application.
## `service` Examples
## Examples
The following examples only show the `service` blocks. Remember that the
`service` block is only valid in the placements listed above.
### Basic Service
### Basic service
This example registers a service named "load-balancer" with no health checks
using the Nomad provider:
@@ -311,7 +311,7 @@ network {
```
### Using Driver Address Mode
### Using driver address mode
The [Docker](/nomad/docs/drivers/docker#network_mode) driver supports the `driver`
setting for the `address_mode` parameter in both `service` and `check` blocks.

View File

@@ -1,12 +1,12 @@
---
layout: docs
page_title: sidecar_service Block - Job Specification
page_title: sidecar_service block in the job specification
description: |-
The "sidecar_service" block allows specifying options for configuring
The `sidecar_service` block allows specifying options for configuring
sidecar proxies used in Consul Connect integration
---
# `sidecar_service` Block
# `sidecar_service` block in the job specification
<Placement groups={['job', 'group', 'service', 'connect', 'sidecar_service']} />

View File

@@ -1,12 +1,12 @@
---
layout: docs
page_title: sidecar_task Block - Job Specification
page_title: sidecar_task block in the job specification
description: |-
The "sidecar_task" block allows specifying options for configuring
The `sidecar_task` block allows specifying options for configuring
the task of the sidecar proxies used in Consul Connect integration
---
# `sidecar_task` Block
# `sidecar_task` block in the job specification
<Placement groups={['job', 'group', 'service', 'connect', 'sidecar_task']} />
@@ -123,7 +123,7 @@ Nomad's version interpolation, e.g.
meta.connect.sidecar_image = custom/envoy-${NOMAD_envoy_version}:latest
```
## `sidecar_task` Parameters
## Parameters
- `name` `(string: "connect-[proxy|gateway]-<service>")` - Name of the task. Defaults to
including the name of the service the proxy or gateway is providing.
@@ -156,7 +156,7 @@ meta.connect.sidecar_image = custom/envoy-${NOMAD_envoy_version}:latest
- `volume_mount` <code>([VolumeMount][]: nil)</code> - Specifies where a group
volume should be mounted.
## `sidecar_task` Examples
## Examples
The following example configures resources for the sidecar task and other configuration.

View File

@@ -1,8 +1,8 @@
---
layout: docs
page_title: spread Block - Job Specification
page_title: spread block in the job specification
description: >-
The "spread" block is used to spread placements across a certain node
The `spread` block is used to spread placements across certain node
attributes such as datacenter.
Spread may be specified at the job or group levels for ultimate flexibility.
@@ -11,7 +11,7 @@ description: >-
each.
---
# `spread` Block
# `spread` block in the job specification
<Placement
groups={[
@@ -74,7 +74,7 @@ Updating the `spread` block is non-destructive. Updating a job specification
with only non-destructive updates will not migrate or replace existing
allocations.
## `spread` Parameters
## Parameters
- `attribute` `(string: "")` - Specifies the name or reference of the attribute
to use. This can be any of the [Nomad interpolated
@@ -88,13 +88,13 @@ allocations.
during scoring and must be an integer between 0 and 100. Weights can be used
when there is more than one spread or affinity block to express relative preference across them.
## `target` Parameters
## `target` parameters
- `value` `(string:"")` - Specifies a target value of the attribute from a `spread` block.
- `percent` `(integer:0)` - Specifies the percentage associated with the target value.
## Comparison to `spread` Scheduling Algorithm
## Comparison to `spread` scheduling algorithm
The `spread` block is not the same concept as setting the [scheduler
algorithm][] to `"spread"` instead of `"binpack"`. Setting the scheduler
@@ -103,7 +103,7 @@ of the scheduler to place workloads from different jobs on the same set of nodes
or not. The `spread` block impacts how the scheduler places allocations for a
given job.
## Scheduling Performance
## Scheduling performance
Using the `spread` block can have a significant impact on scheduling
performance. For each allocation in a `service` and `batch` job, the scheduler
@@ -124,11 +124,11 @@ default distribution of a job across multiple nodes. If this is not possible,
you may consider reducing the size of the node pool or datacenter to reduce the
number of nodes available for the scheduler to consider.
## `spread` Examples
## Examples
The following examples show different ways to use the `spread` block.
### Even Spread Across Data Center
### Even spread across data center
This example shows a spread block across the node's `datacenter` attribute. If we have
two datacenters `us-east1` and `us-west1`, and a task group of `count = 10`,
@@ -141,7 +141,7 @@ spread {
}
```
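The even-split case described above can be sketched in full as follows (group name and count are illustrative):

```hcl
job "docs" {
  group "example" {
    count = 10

    # With no targets, allocations are balanced evenly across all
    # observed values of the attribute: five per datacenter here.
    spread {
      attribute = "${node.datacenter}"
      weight    = 100
    }
  }
}
```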
### Spread With Target Percentages
### Spread with target percentages
This example shows a spread block that specifies one target percentage. If we
have three datacenters `us-east1`, `us-east2`, and `us-west1`, and a task group
@@ -179,7 +179,7 @@ spread {
}
```
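Based on the `target` parameters documented above, the single-percentage case can be sketched as (values illustrative):

```hcl
spread {
  attribute = "${node.datacenter}"
  weight    = 100

  # Roughly half of the allocations go to us-east1; the remainder is
  # spread across the other eligible datacenters.
  target "us-east1" {
    percent = 50
  }
}
```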
### Spread Across Multiple Attributes
### Spread across multiple attributes
This example shows spread blocks with multiple attributes. Consider a Nomad cluster
where there are two datacenters `us-east1` and `us-west1`, and each datacenter has nodes

View File

@@ -1,12 +1,12 @@
---
layout: docs
page_title: task Block - Job Specification
page_title: task block in the job specification
description: |-
The "task" block creates an individual unit of work, such as a Docker
The `task` block creates an individual unit of work, such as a Docker
container, web application, or batch processing.
---
# `task` Block
# `task` block in the job specification
<Placement groups={['job', 'group', 'task']} />
@@ -23,7 +23,7 @@ job "docs" {
}
```
## `task` Parameters
## Parameters
- `artifact` <code>([Artifact][]: nil)</code> - Defines an artifact to download
before running the task. This may be specified multiple times to download
@@ -129,12 +129,12 @@ job "docs" {
- `kind` `(string: <varies>)` - Used internally to manage tasks according to
the value of this field. Initial use case is for Consul Connect.
## `task` Examples
## Examples
The following examples only show the `task` blocks. Remember that the
`task` block is only valid in the placements listed above.
### Docker Container
### Docker container
This example defines a task that starts a Docker container as a service. Docker
is just one of many drivers supported by Nomad. Read more about drivers in the
@@ -154,7 +154,7 @@ task "server" {
}
```
### Metadata and Environment Variables
### Metadata and environment variables
This example uses custom metadata and environment variables to pass information
to the task.
@@ -180,7 +180,7 @@ task "server" {
}
```
### Service Discovery
### Service discovery
This example creates a service in Consul. To read more about service discovery
in Nomad, please see the [Nomad service discovery documentation][service_discovery].
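A minimal sketch of such a task, assuming an illustrative image, service name, and a port label named `http` defined in the group's network block:

```hcl
task "server" {
  driver = "docker"

  config {
    image = "hashicorp/http-echo" # illustrative image
  }

  # Registers this task as a Consul service, discoverable by name.
  service {
    name     = "docs-server"
    port     = "http"
    provider = "consul"
  }
}
```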

View File

@@ -1,14 +1,14 @@
---
layout: docs
page_title: template Block - Job Specification
page_title: template block in the job specification
description: |-
The "template" block instantiates an instance of a template renderer. This
The `template` block instantiates an instance of a template renderer. This
creates a convenient way to ship configuration files that are populated from
environment variables, Consul data, Vault secrets, or just general
configurations within a Nomad task.
---
# `template` Block
# `template` block in the job specification
<Placement groups={['job', 'group', 'task', 'template']} />
@@ -43,7 +43,7 @@ For a full list of the API template functions, please refer to the [Consul
Template documentation][ct_api]. For an introduction to Go templates, please
refer to the [Learn Go Template Syntax][gt_learn] guide.
## `template` Parameters
## Parameters
- `change_mode` `(string: "restart")` - Specifies the behavior Nomad should take
if the rendered template changes. Nomad will always write the new contents of
@@ -154,12 +154,12 @@ refer to the [Learn Go Template Syntax][gt_learn] guide.
- `vault_grace` `(string: "15s")` - [Deprecated](https://github.com/hashicorp/consul-template/issues/1268)
## `template` Examples
## Examples
The following examples only show the `template` blocks. Remember that the
`template` block is only valid in the placements listed above.
### Inline Template
### Inline template
This example uses an inline template to render a file to disk. This file watches
various keys in Consul for changes:
@@ -187,7 +187,7 @@ template {
}
```
### Remote Template
### Remote template
This example uses an [`artifact`][artifact] block to download an input template
before passing it to the template engine:
@@ -221,7 +221,7 @@ EOH
}
```
### Node Variables
### Node variables
Use the `env` function to access the Node's attributes and metadata inside a
template. Note the `meta.` syntax here applies only to node meta fields.
@@ -239,7 +239,7 @@ template {
}
```
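A sketch of a template reading a node attribute and a node meta field through `env` (the meta key `example_class` is hypothetical):

```hcl
template {
  data        = <<EOH
datacenter = "{{ env "node.datacenter" }}"
node_class = "{{ env "meta.example_class" }}"
EOH
  destination = "local/node.conf"
}
```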
### Environment Variables
### Environment variables
Templates may be used to create environment variables for tasks. These templates
work exactly like other templates except once the templates are written, they
@@ -303,7 +303,7 @@ DB_PASSWD={{ .Data.data.DB_PASSWD | toJSON }}
For more details, see [go-envparse's README][go-envparse].
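A sketch of an environment-variable template using the `env` parameter (the Consul key is illustrative):

```hcl
template {
  data        = "LOG_LEVEL={{ key \"service/config/log-level\" }}"
  destination = "secrets/app.env"

  # With env = true, the rendered key=value pairs are also injected
  # into the task's environment, not just written to disk.
  env = true
}
```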
### Template Destinations
### Template destinations
Templates are rendered into the task working directory. Drivers without
filesystem isolation (such as `raw_exec`) or drivers that build a chroot in
@@ -344,9 +344,9 @@ in the Nomad client logs which reports "watching this many dependencies could
DDoS your servers", referring to the Vault, Consul, or Nomad cluster being
queried.
## Nomad Integration
## Nomad integration
### Nomad Services
### Nomad services
Nomad service registrations can be queried using the `nomadService` and
`nomadServices` functions. The requests are tied to the same namespace as the
@@ -376,7 +376,7 @@ EOF
}
```
### Simple Load Balancing with Nomad Services
### Simple load balancing with Nomad services
The `nomadService` function now supports simple load balancing by selecting
instances of a service via [rendezvous hashing][rhash].
@@ -405,7 +405,7 @@ EOH
}
```
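A sketch of the `nomadService` load-balancing form, which takes an instance count and a hashing key ahead of the service name (service name and destination are illustrative):

```hcl
template {
  data        = <<EOH
{{- $allocID := env "NOMAD_ALLOC_ID" -}}
{{ range nomadService 1 $allocID "redis" }}
REDIS_ADDR={{ .Address }}:{{ .Port }}
{{ end -}}
EOH
  destination = "local/env.txt"
  env         = true
}
```

Each allocation consistently selects the same instance via rendezvous hashing keyed on its allocation ID, so instances are spread across allocations without central coordination.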
### Nomad Variables
### Nomad variables
<Warning>
@@ -584,7 +584,7 @@ EOH
}
```
## Consul Integration
## Consul integration
<Warning>
@@ -619,7 +619,7 @@ APP_NAME = "{{key "app/name"}}"
}
```
### Consul Services
### Consul services
The Consul service catalog can be queried using the [`service`][ct_api_service]
and [`services`][ct_api_services] functions. For Connect-capable services, use
@@ -648,7 +648,7 @@ upstream {{ .Name | toLower }} {
}
```
## Vault Integration
## Vault integration
<Warning>
@@ -659,7 +659,7 @@ for more details and to adjust the retry behaviour.
</Warning>
### PKI Certificate
### PKI certificate
Vault is a popular open source tool for managing secrets. In addition to acting
as an encrypted KV store, Vault can also generate dynamic secrets, like PKI/TLS
@@ -784,7 +784,7 @@ access it by index. This secret was set using
}
```
## Client Configuration
## Client configuration
The `template` block has the following [client configuration
options](/nomad/docs/configuration/client#options):

View File

@@ -1,12 +1,12 @@
---
layout: docs
page_title: transparent_proxy Block - Job Specification
page_title: transparent_proxy block in the job specification
description: |-
The "transparent_proxy" block allows specifying options for configuring Envoy
The `transparent_proxy` block allows specifying options for configuring Envoy
in Consul Connect transparent proxy mode.
---
# `transparent_proxy` Block
# `transparent_proxy` block in the job specification
<Placement
groups={[
@@ -43,7 +43,7 @@ Using transparent proxy has some important restrictions:
* The workload's task cannot use the same Unix user ID (UID) as the Envoy
sidecar proxy.
## `transparent_proxy` Parameters
## Parameters
* `exclude_inbound_ports` `([]string: nil)` - A list of inbound ports to exclude
from the inbound traffic redirection. This allows traffic on these ports to
@@ -83,7 +83,7 @@ Using transparent proxy has some important restrictions:
node via [client metadata](#client-metadata) (see below). Note that your
workload's task cannot use the same UID as the Envoy sidecar proxy.
## Client Metadata
## Client metadata
You can change the default [`outbound_port`](#outbound_port) and [`uid`](#uid)
for a given client node by updating the node metadata via the [`nomad node meta
@@ -107,7 +107,7 @@ Envoy images.
## Examples
### Minimal Example
### Minimal
The following example is a minimal transparent proxy specification. Note that
with transparent proxy, you will not need to configure an `upstreams` block.
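A sketch of that minimal form inside a Connect sidecar proxy definition (the service name is illustrative):

```hcl
service {
  name = "count-dashboard"

  connect {
    sidecar_service {
      proxy {
        # An empty block enables transparent proxy with defaults; no
        # upstreams block is needed because services are reached
        # through Consul DNS.
        transparent_proxy {}
      }
    }
  }
}
```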

View File

@@ -1,12 +1,12 @@
---
layout: docs
page_title: ui Block - Job Specification
page_title: ui block in the job specification
description: |-
The "ui" block allows lets users add description and helpful links to their
The `ui` block adds descriptions and helpful links to a
job page in the Nomad Web UI.
---
# `ui` Block
# `ui` block in the job specification
<Placement
groups={[
@@ -20,7 +20,7 @@ added to the top of the job page in question.
The following will provide the Web UI with a job description and a pair of links:
## `ui` Parameters
## Parameters
- `description` `(string: "")` - The markdown-enabled description of the job. We
support [GitHub Flavored Markdown](https://github.github.com/gfm/).
@@ -28,7 +28,7 @@ The following will provide the Web UI with a job description and a pair of links
of the job index page in the Web UI. A job can have any number of links, and
they must contain both a string `label` and `url`.
## `ui` Example
## Example
```hcl
job "docs" {

View File

@@ -1,13 +1,13 @@
---
layout: docs
page_title: update Block - Job Specification
page_title: update block in the job specification
description: |-
The "update" block specifies the group's update strategy. The update strategy
The `update` block specifies the group's update strategy. The update strategy
is used to control things like rolling upgrades and canary deployments. If
omitted, a default update strategy is applied.
---
# `update` Block
# `update` block in the job specification
<Placement
groups={[
@@ -45,7 +45,7 @@ job "docs" {
The `system` scheduler will be updated to support the new `update` block in
a future release.
## `update` Parameters
## Parameters
- `max_parallel` `(int: 1)` - Specifies the number of allocations within a task group that can be
updated at the same time. The task groups themselves are updated in parallel.
@@ -112,12 +112,12 @@ a future release.
setting doesn't apply to service jobs which use
[deployments][strategies] instead, with the equivalent parameter being [`min_healthy_time`](#min_healthy_time).
## `update` Examples
## Examples
The following examples only show the `update` blocks. Remember that the
`update` block is only valid in the placements listed above.
### Parallel Upgrades Based on Checks
### Parallel upgrades based on checks
This example performs 3 upgrades at a time and requires the allocations be
healthy for a minimum of 30 seconds before continuing the rolling upgrade. Each
@@ -132,7 +132,7 @@ update {
}
```
### Parallel Upgrades Based on Task State
### Parallel upgrades based on task state
This example is the same as the last but only requires the tasks to be healthy
and does not require registered service checks to be healthy.
@@ -146,7 +146,7 @@ update {
}
```
### Canary Upgrades
### Canary upgrades
This example creates a canary allocation when the job is updated. The canary is
created without stopping any previous allocations from the job and allows
@@ -169,7 +169,7 @@ version.
$ nomad job promote <job-id>
```
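The canary strategy described above can be sketched as (parameter values illustrative):

```hcl
update {
  # One canary allocation starts alongside the existing version; the
  # old allocations keep running until the deployment is promoted.
  canary           = 1
  max_parallel     = 3
  min_healthy_time = "30s"
}
```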
### Blue/Green Upgrades
### Blue/green upgrades
By setting the canary count equal to that of the task group, blue/green
deployments can be achieved. When a new version of the job is submitted, instead
@@ -200,7 +200,7 @@ old to new version.
$ nomad job promote <job-id>
```
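A sketch of the blue/green pattern, with the canary count matching the group count (the count value is illustrative):

```hcl
group "api" {
  count = 5

  update {
    # A full second set of 5 allocations ("green") starts next to the
    # current set ("blue"); promoting the deployment stops the old set.
    canary       = 5
    max_parallel = 1
  }
}
```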
### Serial Upgrades
### Serial upgrades
This example uses a serial upgrade strategy, meaning exactly one task group will
be updated at a time. The allocation must be healthy for the default
@@ -212,7 +212,7 @@ update {
}
```
### Update Block Inheritance
### Update block inheritance
This example shows how inheritance can simplify the job when there are multiple
task groups.

View File

@@ -1,12 +1,12 @@
---
layout: docs
page_title: upstreams Block - Job Specification
page_title: upstreams block in the job specification
description: |-
The "upstreams" block allows specifying options for configuring
The `upstreams` block allows specifying options for configuring
upstream services
---
# `upstreams` Block
# `upstreams` block in the job specification
<Placement
groups={[
@@ -79,7 +79,7 @@ job "countdash" {
```
## `upstreams` Parameters
## Parameters
- `config` `(map: nil)` - Upstream configuration that is opaque to Nomad and passed
directly to Consul. See [Consul Connect documentation][consul_expose_path_ref]
@@ -123,7 +123,7 @@ Applications are encouraged to connect to `127.0.0.1` and a well defined port
can be deployed with the Redis upstream's `local_bind_port = 6379` and require
no explicit configuration.
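The Redis case described above can be sketched as:

```hcl
upstreams {
  # Traffic the application sends to 127.0.0.1:6379 is proxied
  # through the mesh to the "redis" destination service.
  destination_name = "redis"
  local_bind_port  = 6379
}
```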
## `upstreams` Examples
## Examples
The following example is an upstream config with the name of the destination service
and a local bind port.

View File

@@ -1,13 +1,13 @@
---
layout: docs
page_title: vault Block - Job Specification
page_title: vault block in the job specification
description: |-
The "vault" block allows the task to specify that it requires a token from a
The `vault` block allows the task to specify that it requires a token from a
HashiCorp Vault server. Nomad will automatically retrieve a Vault token for
the task and handle token renewal for the task.
---
# `vault` Block
# `vault` block in the job specification
<Placement
groups={[
@@ -56,7 +56,7 @@ according to the value set in the `change_mode` parameter.
If a `vault` block is specified, the [`template`][template] block can interact
with Vault as well.
## `vault` Parameters
## Parameters
- `allow_token_expiration` `(bool: false)` - Specifies that Nomad clients should
not attempt to renew a task's Vault token, allowing it to expire. This should
@@ -113,12 +113,12 @@ with Vault as well.
from Vault using JWT and workload identity. If not specified the client's
[`create_from_role`][] value is used.
## `vault` Examples
## Examples
The following examples only show the `vault` blocks. Remember that the
`vault` block is only valid in the placements listed above.
### Retrieve Token
### Retrieve token
This example tells the Nomad client to retrieve a Vault token. The token is
available to the task via the canonical environment variable `VAULT_TOKEN` and
@@ -131,7 +131,7 @@ vault {
}
```
### Signal Task
### Signal task
This example shows signaling the task instead of restarting it.
@@ -144,7 +144,7 @@ vault {
}
```
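The signal variant can be sketched as (the signal name is illustrative; use whatever your task handles):

```hcl
vault {
  # When the Vault token changes, send SIGUSR1 to the task instead
  # of restarting it.
  change_mode   = "signal"
  change_signal = "SIGUSR1"
}
```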
### Private Token and Change Modes
### Private token and change modes
This example retrieves a Vault token that is not shared with the task when using
a driver that provides `image` isolation like [Docker][docker].
@@ -196,7 +196,7 @@ the certificate is reissued, as indicated by `change_mode = "restart"`
(which is the default value for `change_mode`).
### Vault Namespace
### Vault namespace
This example shows specifying a particular Vault namespace for a given task.

View File

@@ -1,11 +1,11 @@
---
layout: docs
page_title: volume Block - Job Specification
page_title: volume block in the job specification
description: >-
Configure storage volumes in the "volume" block of a Nomad job specification. Specify dynamic host or Container Storage Interface (CSI) volume type, node access mode, filesystem or block device attachment mode, and mount options. Enable read only access and mounting a unique volume per node. Learn about volume interpolation.
Configure storage volumes in the `volume` block of a Nomad job specification. Specify dynamic host or Container Storage Interface (CSI) volume type, node access mode, filesystem or block device attachment mode, and mount options. Enable read only access and mounting a unique volume per node. Learn about volume interpolation.
---
# `volume` Block
# `volume` block in the job specification
<Placement groups={['job', 'group', 'volume']} />
@@ -58,7 +58,7 @@ plugins][csi_plugin].
The Nomad client makes the volumes available to tasks according to
the [volume_mount][volume_mount] block in the `task` configuration.
## `volume` Parameters
## Parameters
- `type` `(string: "")` - Specifies the type of a given volume. The valid volume
types are `"host"` and `"csi"`. Setting the `"host"` value can request either
@@ -147,7 +147,7 @@ The following fields are only valid for volumes with `type = "csi"`:
- `fs_type`: file system type (ex. `"ext4"`)
- `mount_flags`: the flags passed to `mount` (ex. `["ro", "noatime"]`)
## Volume Interpolation
## Volume interpolation
Because volumes represent state, many workloads with multiple allocations will
want to mount specific volumes to specific tasks. The `volume` block is used

View File

@@ -1,12 +1,12 @@
---
layout: docs
page_title: volume_mount Block - Job Specification
page_title: volume_mount block in the job specification
description: |-
The "volume_mount" block allows the task to specify where a group "volume"
The `volume_mount` block allows the task to specify where a group `volume`
should be mounted.
---
# `volume_mount` Block
# `volume_mount` block in the job specification
<Placement groups={['job', 'group', 'task', 'volume_mount']} />
@@ -37,7 +37,7 @@ The Nomad client will make the volumes available to tasks according to this
configuration, and it will fail the allocation if the client configuration
updates to remove a volume that it depends on.
## `volume_mount` Parameters
## Parameters
- `volume` `(string: "")` - Specifies the group volume that the mount is going
to access.
@@ -65,10 +65,10 @@ updates to remove a volume that it depends on.
but does not clean it up properly before exiting.
- `selinux_label``(string: "")` - Specifies the SELinux label for the mount.
This is only supported on Linux hosts and when supported by the task driver. Refer to the task driver documentation for more information. Possible
This is only supported on Linux hosts and when supported by the task driver. Refer to the task driver documentation for more information. Possible
values are:
- `Z` - Specifies that the volume content is private and unshared between
- `Z` - Specifies that the volume content is private and unshared between
containers.
- `z` - Specifies that the volume content is shared among containers.

View File

@@ -1358,7 +1358,7 @@
"path": "job-specification"
},
{
"title": "HCL2",
"title": "HCL guide",
"routes": [
{
"title": "Overview",
@@ -1782,14 +1782,14 @@
"title": "action",
"path": "job-specification/action"
},
{
"title": "artifact",
"path": "job-specification/artifact"
},
{
"title": "affinity",
"path": "job-specification/affinity"
},
{
"title": "artifact",
"path": "job-specification/artifact"
},
{
"title": "change_script",
"path": "job-specification/change_script"
@@ -1806,14 +1806,14 @@
"title": "connect",
"path": "job-specification/connect"
},
{
"title": "consul",
"path": "job-specification/consul"
},
{
"title": "constraint",
"path": "job-specification/constraint"
},
{
"title": "consul",
"path": "job-specification/consul"
},
{
"title": "csi_plugin",
"path": "job-specification/csi_plugin"