Small website updates (#9504)

* systemd should be downcased
* containerd should be downcased
* spellchecking, adjust list item spacing
* QEMU should be upcased
* spelling, it's->its
* Fewer exclamation points; drive-by list spacing
* Update website/pages/docs/internals/security.mdx
* Namespace is not ent only now.
Co-authored-by: Tim Gross <tgross@hashicorp.com>
Author: Charlie Voiselle
Date: 2020-12-02 19:02:03 -05:00 (committed by GitHub)
Parent: ee7c97fe91
Commit: e64d528664
19 changed files with 374 additions and 256 deletions


@@ -123,7 +123,7 @@ The table below shows this endpoint's support for
### Parameters
- `address` `(string: <required>)` - Specifies the list of addresses in the
format `ip:port`. This is specified as a query string!
format `ip:port`. This is specified as a query string.
### Sample Request


@@ -460,7 +460,7 @@ The `Task` object supports the following keys:
health check associated with the service. Nomad supports the `script`,
`http` and `tcp` Consul Checks. Script checks are not supported for the
qemu driver since the Nomad client doesn't have access to the file system
of a task using the Qemu driver.
of a task using the QEMU driver.
- `Type`: This indicates the check types supported by Nomad. Valid
options are currently `script`, `http` and `tcp`.


@@ -38,7 +38,7 @@ check {
...
```
- `target` `(float: <required>)` - Specifies the metric value the Autscaler
- `target` `(float: <required>)` - Specifies the metric value the Autoscaler
should try to meet.
- `threshold` `(float: 0.01)` - Specifies how significant a change in the input


@@ -12,7 +12,7 @@ description: >
The `node eligibility` command is used to toggle scheduling eligibility for a
given node. By default nodes are eligible for scheduling meaning they can
receive placements and run new allocations. Nodes that have their scheduling
elegibility disabled are ineligibile for new placements.
eligibility disabled are ineligible for new placements.
The [`node drain`][drain] command automatically disables eligibility. Disabling
a drain restores eligibility by default.
@@ -20,7 +20,7 @@ a drain restore eligibility by default.
Disabling scheduling eligibility is useful when draining a set of nodes: first
disable eligibility on each node that will be drained. Then drain each node.
If you just drain each node allocations may get rescheduled multiple times as
they get placed on nodes about to be drained!
they get placed on nodes about to be drained.
Disabling scheduling eligibility may also be useful when investigating poorly
behaved nodes. It allows operators to investigate the current state of a node


@@ -91,15 +91,15 @@ audit {
- `delivery_guarantee` `(string: "enforced", required)` - Specifies the
delivery guarantee that will be made for each audit log entry. Available
options are `"enforced"` and `"best-effort"`. `"enforced"` will
hault request execution if the audit log event fails to be written to it's sink.
`"best-effort"` will not hault request execution, meaning a request could
halt request execution if the audit log event fails to be written to its sink.
`"best-effort"` will not halt request execution, meaning a request could
potentially be un-audited.
- `format` `(string: "json", required)` - Specifies the output format to be
sent to a sink. Currently only `"json"` format is supported.
- `path` `(string: "[data_dir]/audit/audit.log")` - Specifies the path and file
name to use for the audit log. By default Nomad will use it's configured
name to use for the audit log. By default Nomad will use its configured
[`data_dir`](/docs/configuration#data_dir) for a combined path of
`/data_dir/audit/audit.log`. If `rotate_bytes` or `rotate_duration` are set
file rotation will occur. In this case the filename will be post-fixed with
@@ -113,8 +113,8 @@ audit {
audit log should be written to before it needs to be rotated. Must be a
duration value such as 30s.
- `rotate_max_files` `(int: 0)` - Specifies the maximum number of older audit log
file archives to keep. If 0 no files are ever deleted.
- `rotate_max_files` `(int: 0)` - Specifies the maximum number of older audit
log file archives to keep. If 0, no files are ever deleted.
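
As a hedged illustration of how these rotation options fit together, a minimal
`sink` block might look like the following (the label, path, and values are
only examples, not recommended settings):

```hcl
audit {
  enabled = true

  sink "audit" {
    type               = "file"
    delivery_guarantee = "enforced"
    format             = "json"
    path               = "/var/log/nomad/audit.log"

    # Rotate after roughly 10 MiB or 24 hours, keeping 10 archives.
    rotate_bytes     = 10485760
    rotate_duration  = "24h"
    rotate_max_files = 10
  }
}
```
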
### `filter` Stanza
@@ -124,7 +124,7 @@ audit log for all stages (OperationReceived and OperationComplete). Filters
are useful for operators who want to limit the performance impact of audit
logging as well as reducing the number of events generated.
`endpoints`, `stages`, and `operations` support [globbed pattern](https://github.com/ryanuber/go-glob/blob/master/README.md#example) matching.
`endpoints`, `stages`, and `operations` support [globbed pattern][glob] matching.
Query parameters are ignored when evaluating filters.
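
A minimal sketch of a `filter` block, assuming the glob syntax described above
(the endpoint patterns are illustrative):

```hcl
audit {
  enabled = true

  filter "default" {
    type       = "HTTPEvent"
    endpoints  = ["/ui/*", "/v1/metrics"]
    stages     = ["*"]
    operations = ["*"]
  }
}
```
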
@@ -176,9 +176,9 @@ audit {
## Audit Log Format
Below are two audit log entries for a request made to `/v1/job/web/summary`.
The first entry is for the `OperationReceived` stage. The second entry is for
the `OperationComplete` stage and includes the contents of the `OperationReceived`
Below are two audit log entries for a request made to `/v1/job/web/summary`. The
first entry is for the `OperationReceived` stage. The second entry is for the
`OperationComplete` stage and includes the contents of the `OperationReceived`
stage plus a `response` key.
```json
@@ -292,3 +292,5 @@ If the request returns an error the audit log will reflect the error message.
}
}
```
[glob]: https://github.com/ryanuber/go-glob/blob/master/README.md#example


@@ -81,20 +81,22 @@ The `docker` driver supports the following configuration in the job spec. Only
command = "my-command"
}
```
- `cpuset_cpus` <sup>Beta</sup> - (Optional) CPUs in which to allow execution (0-3, 0,1).
Limit the specific CPUs or cores a container can use. A comma-separated list
or hyphen-separated range of CPUs a container can use, if you have more than
one CPU. The first CPU is numbered 0. A valid value might be 0-3 (to use the
first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU).
- `cpuset_cpus` <sup>Beta</sup> - (Optional) CPUs in which to allow execution
(0-3, 0,1). Limit the specific CPUs or cores a container can use. A
comma-separated list or hyphen-separated range of CPUs a container can use, if
you have more than one CPU. The first CPU is numbered 0. A valid value might
be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the
second and fourth CPU).
Note: `cpuset_cpus` pins the workload to the CPUs but doesn't give the workload
exclusive access to those CPUs.
```hcl
config {
cpuset_cpus = "0-3"
}
```
```hcl
config {
cpuset_cpus = "0-3"
}
```
- `dns_search_domains` - (Optional) A list of DNS search domains for the container
to use.
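
For example, a hedged sketch (the search domain is illustrative):

```hcl
config {
  dns_search_domains = ["internal.example.com"]
}
```
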
@@ -631,7 +633,7 @@ group "example" {
If Nomad allocates port `23332` to your allocation, the Docker driver will
automatically setup the port mapping from `23332` on the host to `6379` in your
container, so it will just work!
container, so it will just work.
Note that by default this only works with `bridged` networking mode. It may
also work with custom networking plugins which implement the same API for
@@ -784,6 +786,7 @@ plugin "docker" {
JSON file which is in the dockercfg format containing authentication
information for a private registry, from either (in order) `auths`,
`credHelpers` or `credsStore`.
- `helper`<a id="plugin_auth_helper"></a> - Allows an operator to specify a
[credsStore](https://docs.docker.com/engine/reference/commandline/login/#credential-helper-protocol)
like script on `$PATH` to look up authentication information from external
@@ -799,9 +802,11 @@ plugin "docker" {
- `cert` - Path to the server's certificate file (`.pem`). Specify this
along with `key` and `ca` to use a TLS client to connect to the docker
daemon. `endpoint` must also be specified or this setting will be ignored.
- `key` - Path to the client's private key (`.pem`). Specify this along with
`cert` and `ca` to use a TLS client to connect to the docker daemon.
`endpoint` must also be specified or this setting will be ignored.
- `ca` - Path to the server's CA file (`.pem`). Specify this along with
`cert` and `key` to use a TLS client to connect to the docker daemon.
`endpoint` must also be specified or this setting will be ignored.
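
Putting the `auth` and `tls` options above together, a hedged sketch of the
client's plugin block might look like this (all paths and the helper name are
illustrative):

```hcl
plugin "docker" {
  config {
    endpoint = "unix:///var/run/docker.sock"

    auth {
      config = "/etc/docker-auth.json"
      helper = "ecr-login"
    }

    tls {
      cert = "/etc/nomad/docker.pub"
      key  = "/etc/nomad/docker.pem"
      ca   = "/etc/nomad/docker-ca.pem"
    }
  }
}
```
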
@@ -815,22 +820,28 @@ plugin "docker" {
- `image` - Defaults to `true`. Changing this to `false` will prevent Nomad
from removing images from stopped tasks.
- `image_delay` - A time duration, as [defined
here](https://golang.org/pkg/time/#ParseDuration), that defaults to `3m`.
The delay controls how long Nomad will wait between an image being unused
and deleting it. If a task is received that uses the same image within
the delay, the image will be reused.
- `container` - Defaults to `true`. This option can be used to disable Nomad
from removing a container when the task exits. Under a name conflict,
Nomad may still remove the dead container.
- `dangling_containers` stanza for controlling dangling container detection
and cleanup:
- `enabled` - Defaults to `true`. Enables dangling container handling.
- `dry_run` - Defaults to `false`. Only log dangling containers without
cleaning them up.
- `period` - Defaults to `"5m"`. A time duration that controls interval
between Nomad scans for dangling containers.
- `creation_grace` - Defaults to `"5m"`. Grace period after a container is
created during which the GC ignores it. Only used to prevent the GC from
removing newly created containers before they are registered with the
@@ -843,6 +854,7 @@ plugin "docker" {
(`volumes`) inside their container and use volume drivers
(`volume_driver`). Binding relative paths is always allowed and will be
resolved relative to the allocation's directory.
- `selinuxlabel` - Allows the operator to set a SELinux label to the
allocation and task local bind-mounts to containers. If used with
`docker.volumes.enabled` set to false, the labels will still be applied to
@@ -957,8 +969,10 @@ The `docker` driver will set the following client attributes:
- `driver.docker` - This will be set to "1", indicating the driver is
available.
- `driver.docker.bridge_ip` - The IP of the Docker bridge network if one
exists.
- `driver.docker.version` - This will be set to version of the docker server.
Here is an example of using these properties in a job file:
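
A minimal sketch of such a constraint, using only the `driver.docker` attribute
described above (this is a hedged illustration, not the full original example):

```hcl
job "example" {
  # Only place this job on clients where the Docker driver is available.
  constraint {
    attribute = "${driver.docker}"
    value     = "1"
  }
}
```
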


@@ -1,37 +1,42 @@
---
layout: docs
page_title: 'Drivers: nomad-driver-containerd'
sidebar_title: Containerd
sidebar_title: containerd
description: >-
The containerd driver is used
for launching containers using containerd.
---
# Containerd Task Driver
# containerd Task Driver
Name: `containerd-driver`
Homepage: https://github.com/Roblox/nomad-driver-containerd
Containerd ([`containerd.io`](https://containerd.io)) is a lightweight container daemon
for running and managing container lifecycle. Docker daemon also uses containerd.
containerd ([`containerd.io`](https://containerd.io)) is a lightweight container
daemon for running and managing container lifecycle. Docker daemon also uses
containerd.
```hcl
dockerd (docker daemon) --> containerd --> containerd-shim --> runc
```
`nomad-driver-containerd` enables Nomad clients to launch containers directly using containerd, without Docker!
The Docker daemon is therefore not required on the host system.
`nomad-driver-containerd` enables Nomad clients to launch containers directly
using containerd, without Docker. The Docker daemon is therefore not required on
the host system.
See the project's [`homepage`](https://github.com/Roblox/nomad-driver-containerd) for more details.
See the [project's homepage](https://github.com/Roblox/nomad-driver-containerd)
for more details.
## Client Requirements
The containerd task driver is not built into Nomad. It must be [`downloaded`](https://github.com/Roblox/nomad-driver-containerd/releases/)
onto the client host in the configured plugin directory.
The containerd task driver is not built into Nomad. It must be
[`downloaded`][releases] onto the client host in the configured plugin
directory.
- Linux (Ubuntu >=16.04) with [`containerd`](https://containerd.io/downloads/) (>=1.3) installed.
- [`containerd-driver`](https://github.com/Roblox/nomad-driver-containerd/releases/) binary in Nomad's [plugin_dir][plugin_dir].
- [`containerd-driver`][releases] binary in Nomad's [plugin_dir][].
## Capabilities
@@ -50,7 +55,9 @@ For exec'ing into the container, one can use `nomad alloc exec` command.
## Task Configuration
Since docker also relies on containerd for managing container lifecycle, the example job created by [`nomad init -short`][nomad-init] can easily be adapted to use `containerd-driver` instead:
Since Docker also relies on containerd for managing container lifecycle, the
example job created by [`nomad init -short`][nomad-init] can easily be adapted
to use `containerd-driver` instead:
```hcl
job "redis" {
@@ -78,7 +85,8 @@ job "redis" {
The containerd task driver supports the following parameters:
- `image` - (Required) OCI image (Docker is also OCI compatible) for your container.
- `image` - (Required) OCI image (Docker is also OCI compatible) for your
container.
```hcl
config {
@@ -105,8 +113,9 @@ config {
}
```
- `privileged` - (Optional) `true` or `false` (default) Run container in privileged mode.
Your container will have all linux capabilities when running in privileged mode.
- `privileged` - (Optional) `true` or `false` (default) Run container in
privileged mode. Your container will have all Linux capabilities when running
in privileged mode.
```hcl
config {
@@ -114,7 +123,8 @@ config {
}
```
- `readonly_rootfs` - (Optional) `true` or `false` (default) Container root filesystem will be read-only.
- `readonly_rootfs` - (Optional) `true` or `false` (default) Container root
filesystem will be read-only.
```hcl
config {
@@ -122,8 +132,8 @@ config {
}
```
- `host_network` ((#host_network)) - (Optional) `true` or `false` (default) Enable host network.
This is equivalent to `--net=host` in docker.
- `host_network` ((#host_network)) - (Optional) `true` or `false` (default)
Enable host network. This is equivalent to `--net=host` in docker.
```hcl
config {
@@ -166,12 +176,19 @@ config {
}
```
- `mounts` - (Optional) A list of mounts to be mounted in the container.
Volume, bind and tmpfs type mounts are supported. fstab style [`mount options`](https://github.com/containerd/containerd/blob/master/mount/mount_linux.go#L187-L211) are supported.
- `type` - (Optional) Supported values are `volume`, `bind` or `tmpfs`. **Default:** `volume`.
- `mounts` - (Optional) A list of mounts to be mounted in the container. Volume,
bind and tmpfs type mounts are supported. fstab style [`mount options`][] are
supported.
- `type` - (Optional) Supported values are `volume`, `bind` or `tmpfs`.
**Default:** `volume`.
- `target` - (Required) Target path in the container.
- `source` - (Optional) Source path on the host.
- `options` - (Optional) fstab style [`mount options`](https://github.com/containerd/containerd/blob/master/mount/mount_linux.go#L187-L211). **NOTE:** For bind mounts, atleast `rbind` and `ro` are required.
- `options` - (Optional) fstab style [`mount options`][]. **NOTE:** For bind
mounts, at least `rbind` and `ro` are required.
```hcl
config {
@@ -190,12 +207,15 @@ config {
`nomad-driver-containerd` supports **host** and **bridge** networks.
**NOTE:** `host` and `bridge` are mutually exclusive options, and only one of them should be used at a time.
**NOTE:** `host` and `bridge` are mutually exclusive options, and only one of
them should be used at a time.
1. **Host** network can be enabled by setting `host_network` to `true` in task config
of the job spec (see [host_network][host-network] under Task Configuration).
1. **Host** network can be enabled by setting `host_network` to `true` in task
config of the job spec (see [host_network][host-network] under Task
Configuration).
2. **Bridge** network can be enabled by setting the `network` stanza in the task group section of the job spec.
1. **Bridge** network can be enabled by setting the `network` stanza in the task
group section of the job spec.
```hcl
network {
@@ -203,7 +223,8 @@ network {
}
```
You need to install CNI plugins on Nomad client nodes under `/opt/cni/bin` before you can use `bridge` networks.
You need to install CNI plugins on Nomad client nodes under `/opt/cni/bin`
before you can use `bridge` networks.
**Instructions for installing CNI plugins.**
@@ -215,14 +236,17 @@ You need to install CNI plugins on Nomad client nodes under `/opt/cni/bin` befor
## Plugin Options ((#plugin_options))
- `enabled` - (Optional) The `containerd` driver may be disabled on hosts by setting this option to `false` (defaults to `true`).
- `enabled` - (Optional) The `containerd` driver may be disabled on hosts by
setting this option to `false` (defaults to `true`).
- `containerd_runtime` - (Required) Runtime for `containerd` e.g. `io.containerd.runc.v1` or `io.containerd.runc.v2`
- `containerd_runtime` - (Required) Runtime for `containerd` e.g.
`io.containerd.runc.v1` or `io.containerd.runc.v2`
- `stats_interval` - (Optional) This value defines how frequently you want to send `TaskStats` to nomad client. (defaults to `1 second`).
- `stats_interval` - (Optional) This value defines how frequently you want to
send `TaskStats` to nomad client. (defaults to `1 second`).
An example of using these plugin options with the new [plugin
syntax][plugin] is shown below:
An example of using these plugin options with the new [plugin syntax][plugin] is
shown below:
```hcl
plugin "containerd-driver" {
@@ -234,7 +258,8 @@ plugin "containerd-driver" {
}
```
Please note the plugin name should match whatever name you have specified for the external driver in the [plugin_dir][plugin_dir] directory.
Please note the plugin name should match whatever name you have specified for
the external driver in the [plugin_dir][plugin_dir] directory.
[nomad-driver-containerd]: https://github.com/Roblox/nomad-driver-containerd
[nomad-init]: /docs/commands/job/init
@@ -242,3 +267,5 @@ Please note the plugin name should match whatever name you have specified for th
[plugin_dir]: /docs/configuration#plugin_dir
[plugin-options]: #plugin_options
[host-network]: #host_network
[`mount options`]: https://github.com/containerd/containerd/blob/9561d9389d3dd87ff6030bf1da4e705bbc024130/mount/mount_linux.go#L198-L222
[releases]: https://github.com/Roblox/nomad-driver-containerd/releases/


@@ -28,9 +28,9 @@ Below is a list of community-supported task drivers you can use with Nomad:
- [Jail task driver][jail-task-driver]
- [Pot][pot]
- [Firecracker][firecracker-task-driver]
- [Systemd-Nspawn][nspawn-driver]
- [systemd-nspawn][nspawn-driver]
- [Windows IIS][nomad-driver-iis]
- [Containerd][nomad-driver-containerd]
- [containerd][nomad-driver-containerd]
[lxc]: /docs/drivers/external/lxc
[rkt]: /docs/drivers/external/rkt


@@ -1,16 +1,18 @@
---
layout: docs
page_title: 'Drivers: Systemd-Nspawn'
sidebar_title: Systemd-Nspawn
description: The Nspawn task driver is used to run application containers using Systemd-Nspawn.
page_title: 'Drivers: systemd-nspawn'
sidebar_title: systemd-nspawn
description: The nspawn task driver is used to run application containers using systemd-nspawn.
---
# Nspawn Driver
# nspawn Driver
Name: `nspawn`
The `nspawn` driver provides an interface for using Systemd-Nspawn for running application
containers. You can download the external Systemd-Nspawn driver [here][nspawn-driver]. For more detailed instructions on how to set up and use this driver, please refer to the [guide][nspawn-guide].
The `nspawn` driver provides an interface for using systemd-nspawn for running
application containers. You can download the external systemd-nspawn driver
[here][nspawn-driver]. For more detailed instructions on how to set up and use
this driver, please refer to the [guide][nspawn-guide].
## Task Configuration
@@ -30,17 +32,22 @@ The `nspawn` driver supports the following configuration in the job spec:
(Optional) `true` (default) or `false`. Search for an init program and invoke
it as PID 1. Arguments specified in `command` will be used as arguments for
the init program.
- [`ephemeral`](https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html#-x) -
(Optional) `true` or `false` (default). Make an ephemeral copy of the image
before starting the container.
- [`process_two`](https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html#-a) -
(Optional) `true` or `false` (default). Start the command specified with
`command` as PID 2, using a minimal stub init as PID 1.
- [`read_only`](https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html#--read-only) -
(Optional) `true` or `false` (default). Mount the used image as read only.
- [`user_namespacing`](https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html#-U) -
(Optional) `true` (default) or `false`. Enable user namespacing features
inside the container.
- `command` - (Optional) A list of strings to pass as the used command to the
container.
@@ -53,6 +60,7 @@ The `nspawn` driver supports the following configuration in the job spec:
- [`console`](https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html#--console=MODE) -
(Optional) Configures how to set up standard input, output and error output
for the container.
- `image` - The image to be used in the container. This can either be the path
to a
[directory](https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html#-D),
@@ -62,18 +70,25 @@ The `nspawn` driver supports the following configuration in the job spec:
[`systemd-machined`](https://www.freedesktop.org/software/systemd/man/systemd-machined.service.html).
A path can be specified as a relative path from the configured Nomad plugin
directory. **This option is mandatory**.
- `image_download` - (Optional) Download the used image according to the
settings defined in this block. Structure is documented below.
- [`pivot_root`](https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html#--pivot-root=) -
(Optional) Pivot the specified directory to be the container's root directory.
- [`resolv_conf`](https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html#--resolv-conf=) -
(Optional) Configure how `/etc/resolv.conf` is handled inside the container.
- [`user`](https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html#-u) -
(Optional) Change to the specified user in the container's user database.
- [`volatile`](https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html#--volatile) -
(Optional) Boot the container in volatile mode.
- [`working_directory`](https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html#--chdir=) -
(Optional) Set the working directory inside the container.
- [`bind`](https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html#--bind=) -
(Optional) Files or directories to bind mount inside the container.
@@ -109,10 +124,10 @@ The `nspawn` driver supports the following configuration in the job spec:
```
- `port_map` - (Optional) A key-value map of port labels. Works the same way as
in the [docker
driver][docker_driver].
**Note:** `systemd-nspawn` will not expose ports to the loopback interface of
your host.
in the [docker driver][docker_driver].
**Note:** `systemd-nspawn` will not expose ports to the loopback interface
of your host.
```hcl
config {
@@ -126,11 +141,14 @@ The `image_download` block supports the following arguments:
- `url` - The URL of the image to download. The URL must be of type `http://` or
`https://`. **This option is mandatory**.
- [`verify`](https://www.freedesktop.org/software/systemd/man/machinectl.html#pull-tar%20URL%20%5BNAME%5D) -
(Optional) `no` (default), `signature` or `checksum`. Whether to verify the
image before making it available.
- `force` - (Optional) `true` or `false` (default) If a local copy already
exists, delete it first and replace it by the newly downloaded image.
- `type` - (Optional) `tar` (default) or `raw`. The type of image to download.
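
Tying these arguments together, a hedged sketch of an `image_download` block
(the URL and image name are illustrative):

```hcl
config {
  image = "example.tar.xz"

  image_download {
    url    = "https://example.com/images/example.tar.xz"
    verify = "no"
    force  = false
    type   = "tar"
  }
}
```
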
## Networking
@@ -145,17 +163,19 @@ The `nspawn` driver requires the following:
- 64-bit Linux host
- The `linux_amd64` Nomad binary
- The Nspawn driver binary placed in the [plugin_dir][plugin_dir] directory.
- The nspawn driver binary placed in the [plugin_dir][plugin_dir] directory.
- `systemd-nspawn` to be installed
- Nomad running with root privileges
## Plugin Options
- `enabled` - The `nspawn` driver may be disabled on hosts by setting this option to `false` (defaults to `true`).
- `enabled` - The `nspawn` driver may be disabled on hosts by setting this
option to `false` (defaults to `true`).
- `volumes` - Enable support for Volumes in the driver (defaults to `true`).
An example of using these plugin options with the new [plugin
syntax][plugin] is shown below:
An example of using these plugin options with the new [plugin syntax][plugin] is
shown below:
```hcl
plugin "nspawn" {
@@ -170,8 +190,9 @@ plugin "nspawn" {
The `nspawn` driver will set the following client attributes:
- `driver.nspawn` - Set to `true` if Systemd-Nspawn is found and enabled on the
- `driver.nspawn` - Set to `true` if systemd-nspawn is found and enabled on the
host node and Nomad is running with root privileges.
- `driver.nspawn.version` - Version of `systemd-nspawn` e.g.: `244`.
[nspawn-driver]: https://github.com/JanMa/nomad-driver-nspawn/releases


@@ -1,15 +1,15 @@
---
layout: docs
page_title: 'Drivers: Qemu'
sidebar_title: Qemu
description: The Qemu task driver is used to run virtual machines using Qemu/KVM.
page_title: 'Drivers: QEMU'
sidebar_title: QEMU
description: The QEMU task driver is used to run virtual machines using QEMU/KVM.
---
# Qemu Driver
# QEMU Driver
Name: `qemu`
The `qemu` driver provides a generic virtual machine runner. Qemu can utilize
The `qemu` driver provides a generic virtual machine runner. QEMU can utilize
the KVM kernel module to take advantage of hardware virtualization features and provide
great performance. Currently the `qemu` driver can map a set of ports from the
host machine to the guest virtual machine, and provides configuration for
@@ -115,7 +115,7 @@ The `qemu` driver implements the following [capabilities](/docs/internals/plugin
## Client Requirements
The `qemu` driver requires Qemu to be installed and in your system's `$PATH`.
The `qemu` driver requires QEMU to be installed and in your system's `$PATH`.
The task must also specify at least one artifact to download, as this is the only
way to retrieve the image being run.
@@ -123,7 +123,7 @@ way to retrieve the image being run.
The `qemu` driver will set the following client attributes:
- `driver.qemu` - Set to `1` if Qemu is found on the host node. Nomad determines
- `driver.qemu` - Set to `1` if QEMU is found on the host node. Nomad determines
this by executing `qemu-system-x86_64 -version` on the host and parsing the output
- `driver.qemu.version` - Version of `qemu-system-x86_64`, ex: `2.4.0`
@@ -150,16 +150,16 @@ plugin "qemu" {
}
```
- `image_paths` (`[]string`: `[]`) - Specifies the host paths the Qemu driver is
- `image_paths` (`[]string`: `[]`) - Specifies the host paths the QEMU driver is
allowed to load images from.
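
A minimal sketch of this option in the client's plugin block (the path is
illustrative):

```hcl
plugin "qemu" {
  config {
    image_paths = ["/mnt/qemu-images"]
  }
}
```
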
## Resource Isolation
Nomad uses Qemu to provide full software virtualization for virtual machine
workloads. Nomad can use Qemu KVM's hardware-assisted virtualization to deliver
Nomad uses QEMU to provide full software virtualization for virtual machine
workloads. Nomad can use QEMU KVM's hardware-assisted virtualization to deliver
better performance.
Virtualization provides the highest level of isolation for workloads that
require additional security, and resource use is constrained by the Qemu
require additional security, and resource use is constrained by the QEMU
hypervisor rather than the host kernel. VM network traffic still flows through
the host's interface(s).


@@ -2,13 +2,15 @@
layout: docs
page_title: Documentation
description: |-
Welcome to the Nomad documentation! This documentation is more of a reference
Welcome to the Nomad documentation. This documentation is more of a reference
guide for all available features and options of Nomad.
---
# Nomad Documentation
Welcome to the Nomad documentation! This documentation is a reference guide for
all available features and options of Nomad. If you are just getting
started with Nomad, please start with the
[introduction and getting started guide](/intro) instead.
Welcome to the Nomad documentation. This documentation is a reference for all
available features and options of Nomad. If you are just getting started with
Nomad, please start with the [HashiCorp Learn "Getting Started" collection][gs]
instead.
[gs]: https://learn.hashicorp.com/collections/nomad/get-started


@@ -33,7 +33,7 @@ clarify what is being discussed:
cannot be split.
- **Driver** - A Driver represents the basic means of executing your **Tasks**.
Example Drivers include Docker, Qemu, Java, and static binaries.
Example Drivers include Docker, QEMU, Java, and static binaries.
- **Task** - A Task is the smallest unit of work in Nomad. Tasks are executed by drivers,
which allow Nomad to be flexible in the types of tasks it supports. Tasks


@@ -39,7 +39,7 @@ but the general mechanisms for a secure Nomad deployment revolve around:
tokens.
- **[Namespaces](https://learn.hashicorp.com/tutorials/nomad/namespaces)**
(**Enterprise Only**) - Access to read and write to a Namepsace can be
- Access to read and write to a namespace can be
controlled to allow for granular access to job information managed within a
multi-tenant cluster.


@@ -533,7 +533,7 @@ job "example" {
}
```
No explicit `address_mode` required!
No explicit `address_mode` required.
Services default to the `auto` address mode. When a Docker network mode other
than "host" or "bridge" is used, services will automatically advertise the
@@ -561,7 +561,7 @@ job "example" {
config {
image = "redis:3.2"
network_mode = "weave"
# No port map required!
# No port map required.
}
resources {
@@ -666,7 +666,7 @@ job "example" {
config {
image = "redis:3.2"
advertise_ipv6_address = true
# No port map required!
# No port map required.
}
resources {
@@ -705,7 +705,7 @@ advertise and check directly since Nomad isn't managing any port assignments.
</sup>
<small>
{' '}
Script checks are not supported for the [qemu driver][qemu] since the Nomad
Script checks are not supported for the [QEMU driver][qemu] since the Nomad
client does not have access to the file system of a task for that driver.
</small>
@@ -715,7 +715,7 @@ advertise and check directly since Nomad isn't managing any port assignments.
[service-discovery]: /docs/integrations/consul-integration#service-discovery 'Nomad Service Discovery'
[interpolation]: /docs/runtime/interpolation 'Nomad Runtime Interpolation'
[network]: /docs/job-specification/network 'Nomad network Job Specification'
[qemu]: /docs/drivers/qemu 'Nomad qemu Driver'
[qemu]: /docs/drivers/qemu 'Nomad QEMU Driver'
[restart_stanza]: /docs/job-specification/restart 'restart stanza'
[connect]: /docs/job-specification/connect 'Nomad Consul Connect Integration'
[type]: /docs/job-specification/service#type
@@ -723,4 +723,4 @@ advertise and check directly since Nomad isn't managing any port assignments.
[killsignal]: /docs/job-specification/task#kill_signal
[killtimeout]: /docs/job-specification/task#kill_timeout
[service_task]: /docs/job-specification/service#task-1
[network_mode]: /docs/job-specification/network#mode
[network_mode]: /docs/job-specification/network#mode


@@ -129,7 +129,7 @@ in-place or start new nodes on the new version. See the [Workload Migration
Guide](https://learn.hashicorp.com/tutorials/nomad/node-drain) for instructions on how to migrate running
allocations from the old nodes to the new nodes with the [`nomad node drain`](/docs/commands/node/drain) command.
## Done!
## Done
You are now running the latest Nomad version. You can verify all
Clients joined by running `nomad node status` and checking all the clients


@@ -9,18 +9,18 @@ description: |-
# Upgrade Guides
The [upgrading page](/docs/upgrade) covers the details of doing
a standard upgrade. However, specific versions of Nomad may have more
details provided for their upgrades as a result of new features or changed
behavior. This page is used to document those details separately from the
standard upgrade flow.
The [upgrading page](/docs/upgrade) covers the details of doing a standard
upgrade. However, specific versions of Nomad may have more details provided for
their upgrades as a result of new features or changed behavior. This page is
used to document those details separately from the standard upgrade flow.
## Nomad 1.0.0
### HCL2 for Job specification
Nomad v1.0.0 adopts HCL2 for parsing the job spec. HCL2 extends HCL with more
expression and reuse support, but adds some stricter schema for HCL blocks (a.k.a. stanzas). Check [HCL](/docs/job-specification/hcl2) for more details.
expression and reuse support, but adds some stricter schema for HCL blocks
(a.k.a. stanzas). Check [HCL](/docs/job-specification/hcl2) for more details.
### Signal used when stopping Docker tasks
@@ -38,22 +38,23 @@ been removed with v1.0.0, and all metrics will be emitted with tags.
### Null characters in region, datacenter, job name/ID, task group name, and task names
Starting with Nomad v1.0.0, jobs will fail validation if any of the following
contain null character: the job ID or name, the task group name, or the task name. Any
jobs meeting this requirement should be modified before an update to v1.0.0. Similarly,
client and server config validation will prohibit either the region or the datacenter
from containing null characters.
contain a null character: the job ID or name, the task group name, or the task
name. Any jobs containing such characters should be modified before an update to
v1.0.0. Similarly, client and server config validation will prohibit either the
region or the datacenter from containing null characters.
### EC2 CPU characteristics may be different
Starting with Nomad v1.0.0, the AWS fingerprinter uses data derived from the
official AWS EC2 API to determine default CPU performance characteristics, including
core count and core speed. This data should be accurate for each instance type
per region. Previously, Nomad used a hand-made lookup table that was not region
aware and may have contained inaccurate or incomplete data. As part of this change,
the AWS fingerprinter no longer sets the `cpu.modelname` attribute.
official AWS EC2 API to determine default CPU performance characteristics,
including core count and core speed. This data should be accurate for each
instance type per region. Previously, Nomad used a hand-made lookup table that
was not region aware and may have contained inaccurate or incomplete data. As
part of this change, the AWS fingerprinter no longer sets the `cpu.modelname`
attribute.
As before, `cpu_total_compute` can be used to override the discovered CPU resources
available to the Nomad client.
As before, `cpu_total_compute` can be used to override the discovered CPU
resources available to the Nomad client.
### Inclusive language
@@ -62,46 +63,56 @@ deprecated from client configuration and driver configuration. The existing
configuration values are permitted but will be removed in a future version of
Nomad. The specific configuration values replaced are:
* Client `driver.blacklist` is replaced with `driver.denylist`.
* Client `driver.whitelist` is replaced with `driver.allowlist`.
* Client `env.blacklist` is replaced with `env.denylist`.
* Client `fingerprint.blacklist` is replaced with `fingerprint.denylist`.
* Client `fingerprint.whitelist` is replaced with `fingerprint.allowlist`.
* Client `user.blacklist` is replaced with `user.denylist`.
* Client `template.function_blacklist` is replaced with `template.function_denylist`.
* Docker driver `docker.caps.whitelist` is replaced with `docker.caps.allowlist`.
- Client `driver.blacklist` is replaced with `driver.denylist`.
- Client `driver.whitelist` is replaced with `driver.allowlist`.
- Client `env.blacklist` is replaced with `env.denylist`.
- Client `fingerprint.blacklist` is replaced with `fingerprint.denylist`.
- Client `fingerprint.whitelist` is replaced with `fingerprint.allowlist`.
- Client `user.blacklist` is replaced with `user.denylist`.
- Client `template.function_blacklist` is replaced with
`template.function_denylist`.
- Docker driver `docker.caps.whitelist` is replaced with
`docker.caps.allowlist`.
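
As a hedged illustration, a client switching to the new names might carry
options such as the following (the map-style `options` syntax and the specific
values are assumptions, not defaults):

```hcl
client {
  enabled = true

  options = {
    # Replaces the deprecated "driver.whitelist" and "user.blacklist" keys.
    "driver.allowlist" = "docker,exec"
    "user.denylist"    = "root"
  }
}
```
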
### Envoy proxy versions
Nomad v1.0.0 changes the behavior around the selection of Envoy version used
for Connect sidecar proxies. Previously, Nomad always defaulted to Envoy v1.11.2
if neither the `meta.connect.sidecar_image` parameter or `sidecar_task` stanza
were explicitly configured. Likewise the same version of Envoy would be used for
Connect ingress gateways if `meta.connect.gateway_image` was unset. Starting with
Nomad v1.0.0, each Nomad Client will query Consul for a list of supported Envoy
versions. Nomad will make use of the latest version of Envoy supported by the
Consul agent when launching Envoy as a Connect sidecar proxy. If the version of
the Consul agent is older than v1.7.8, v1.8.4, or v1.9.0, Nomad will fallback to
the v1.11.2 version of Envoy. As before, if the `meta.connect.sidecar_image`,
`meta.connect.gateway_image`, or `sidecar_task` stanza are set, those settings
Nomad v1.0.0 changes the behavior around the selection of Envoy version used for
Connect sidecar proxies. Previously, Nomad always defaulted to Envoy v1.11.2 if
neither the `meta.connect.sidecar_image` parameter or `sidecar_task` stanza were
explicitly configured. Likewise the same version of Envoy would be used for
Connect ingress gateways if `meta.connect.gateway_image` was unset. Starting
with Nomad v1.0.0, each Nomad Client will query Consul for a list of supported
Envoy versions. Nomad will make use of the latest version of Envoy supported by
the Consul agent when launching Envoy as a Connect sidecar proxy. If the version
of the Consul agent is older than v1.7.8, v1.8.4, or v1.9.0, Nomad will fallback
to the v1.11.2 version of Envoy. As before, if the `meta.connect.sidecar_image`,
`meta.connect.gateway_image`, or `sidecar_task` stanza are set, those settings
take precedence.
When upgrading Nomad Clients from a previous version to v1.0.0 and above, it is
recommended to also upgrade the Consul agents to v1.7.8, 1.8.4, or v1.9.0 or newer.
Upgrading Nomad and Consul to versions that support the new behaviour while also doing a
full [node drain](https://www.nomadproject.io/docs/upgrade#5-upgrade-clients) at
the time of the upgrade for each node will ensure Connect workloads are properly
rescheduled onto nodes in such a way that the Nomad Clients, Consul agents, and
Envoy sidecar tasks maintain compatibility with one another.
recommended to also upgrade the Consul agents to v1.7.8, 1.8.4, or v1.9.0 or
newer. Upgrading Nomad and Consul to versions that support the new behavior
while also doing a full [node drain][] at the time of the upgrade for each node
will ensure Connect workloads are properly rescheduled onto nodes in such a way
that the Nomad Clients, Consul agents, and Envoy sidecar tasks maintain
compatibility with one another.
### Envoy worker threads
Nomad v1.0.0 changes the default behaviour around the number of worker threads
Nomad v1.0.0 changes the default behavior around the number of worker threads
created by the Envoy sidecar proxy when using Consul Connect. Previously, the
Envoy [`--concurrency`][envoy_concurrency] argument was left unset, which caused
Envoy to spawn as many worker threads as logical cores available on the CPU. The
`--concurrency` value now defaults to `1` and can be configured by setting the
[`meta.connect.proxy_concurrency`][proxy_concurrency] property in client configuration.
[`meta.connect.proxy_concurrency`][proxy_concurrency] property in client
configuration.
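
A hedged sketch of raising the concurrency through client configuration (the
quoted-key `meta` syntax and the value are illustrative):

```hcl
client {
  meta {
    # Allow the Envoy sidecar to use four worker threads instead of one.
    "connect.proxy_concurrency" = "4"
  }
}
```
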
## Nomad 0.12.8
@@ -110,14 +121,14 @@ Envoy to spawn as many worker threads as logical cores available on the CPU. The
Nomad 0.12.8 includes security fixes for the handling of Docker volume mounts:
- The `docker.volumes.enabled` flag now defaults to `false` as documented.
- Docker driver mounts of type "volume" (but not "bind") were not sandboxed
and could mount arbitrary locations from the client host. The
- Docker driver mounts of type "volume" (but not "bind") were not sandboxed and
could mount arbitrary locations from the client host. The
`docker.volumes.enabled` configuration will now disable Docker mounts with
type "volume" when set to `false` (the default).
This change Docker impacts jobs that use a `mounts` with type "volume", as
shown below. This job will fail when placed unless `docker.volumes.enabled =
true`.
This change impacts Docker jobs that use `mounts` with type "volume", as shown
below. This job will fail when placed unless `docker.volumes.enabled = true`.
```hcl
mounts = [
@@ -151,14 +162,14 @@ in handling of job `template` and `artifact` stanzas:
- The `template.source` and `template.destination` fields are now protected by
the file sandbox introduced in 0.9.6. These paths are now restricted to fall
inside the task directory by default. An operator can opt-out of this
protection with the
[`template.disable_file_sandbox`](/docs/configuration/client#template-parameters)
field in the client configuration.
protection with the [`template.disable_file_sandbox`][] field in the client
configuration.
- The paths for `template.source`, `template.destination`, and
`artifact.destination` are validated on job submission to ensure the paths
do not escape the file sandbox. It was possible to use interpolation to
bypass this validation. The client now interpolates the paths before
checking if they are in the file sandbox.
`artifact.destination` are validated on job submission to ensure the paths do
not escape the file sandbox. It was possible to use interpolation to bypass
this validation. The client now interpolates the paths before checking if they
are in the file sandbox.
~> **Warning:** Due to a [bug][gh-9148] in Nomad v0.12.6, the
`template.destination` and `artifact.destination` paths do not support
@@ -175,12 +186,12 @@ been deprecated and is no longer considered when making scheduling decisions.
This is in part because we felt that `mbits` didn't accurately account network
bandwidth as a resource.
Additionally the use of the `network` block inside of a task's `resource` block is
also deprecated. Users are advised to move their `network` block to the `group`
block. Recent networking features have only been added to group based network
configuration. If any usecase or feature which was available with task network
resource is not fulfilled with group network configuration, please open an issue
detailing the missing capability.
Additionally the use of the `network` block inside of a task's `resource` block
is also deprecated. Users are advised to move their `network` block to the
`group` block. Recent networking features have only been added to group based
network configuration. If any use case or feature which was available with task
network resource is not fulfilled with group network configuration, please open
an issue detailing the missing capability.
### Enterprise Licensing
@@ -223,14 +234,14 @@ plugin "docker" {
}
```
### Qemu images
### QEMU images
Nomad 0.12.0 restricts the paths the Qemu tasks can load an image from. A Qemu
Nomad 0.12.0 restricts the paths the QEMU tasks can load an image from. A QEMU
task may download an image to the allocation directory to load. But images
outside the allocation directories must be explicitly allowed by operators in
the client agent configuration file.
For example, you may allow loading Qemu images from `/mnt/qemu-images` by
For example, you may allow loading QEMU images from `/mnt/qemu-images` by
adding the following to the agent configuration file:
```hcl
@@ -252,8 +263,7 @@ sandboxed and could mount arbitrary locations from the client host. The
type "volume" when set to `false`.
This change impacts Docker jobs that use `mounts` with type "volume", as
shown below. This job will fail when placed unless `docker.volumes.enabled =
true`.
shown below. This job will fail when placed unless `docker.volumes.enabled = true`.
```hcl
mounts = [
@@ -322,7 +332,7 @@ Placements will be more correct but slightly different in v0.11.2 vs earlier
versions of Nomad. Operators do _not_ need to take any actions as the impact of
the bug fix will only minimally affect scoring.
Feasability (whether a node is capable of running a job at all) is _not_
Feasibility (whether a node is capable of running a job at all) is _not_
affected.
### Periodic Jobs and Daylight Saving Time
@@ -342,34 +352,32 @@ for details.
### client.template: `vault_grace` deprecation
Nomad 0.11.0 updates
[consul-template](https://github.com/hashicorp/consul-template) to v0.24.1.
This library deprecates the [`vault_grace`][vault_grace] option for templating
[consul-template](https://github.com/hashicorp/consul-template) to v0.24.1. This
library deprecates the [`vault_grace`][vault_grace] option for templating
included in Nomad. The feature has been ignored since Vault 0.5 and as long as
you are running a more recent version of Vault, you can safely remove
`vault_grace` from your Nomad jobs.
### Rkt Task Driver Removed
The `rkt` task driver has been deprecated and removed from Nomad. While the
code is available in an external repository,
[https://github.com/hashicorp/nomad-driver-rkt](https://github.com/hashicorp/nomad-driver-rkt),
it will not be maintained as `rkt` is [no longer being developed
upstream](https://github.com/rkt/rkt). We encourage all `rkt` users to find a
new task driver as soon as possible.
The `rkt` task driver has been deprecated and removed from Nomad. While the code
is available in an external repository,
<https://github.com/hashicorp/nomad-driver-rkt>, it will not be maintained as
`rkt` is [no longer being developed upstream](https://github.com/rkt/rkt). We
encourage all `rkt` users to find a new task driver as soon as possible.
## Nomad 0.10.8
### Docker volume mounts
Nomad 0.10.8 includes a security fix for the handling of Docker volume
mounts. Docker driver mounts of type "volume" (but not "bind") were not
sandboxed and could mount arbitrary locations from the client host. The
`docker.volumes.enabled` configuration will now disable Docker mounts with
type "volume" when set to `false`.
Nomad 0.10.8 includes a security fix for the handling of Docker volume mounts.
Docker driver mounts of type "volume" (but not "bind") were not sandboxed and
could mount arbitrary locations from the client host. The
`docker.volumes.enabled` configuration will now disable Docker mounts with type
"volume" when set to `false`.
This change Docker impacts jobs that use a `mounts` with type "volume", as
shown below. This job will fail when placed unless `docker.volumes.enabled =
true`.
This change impacts Docker jobs that use `mounts` with type "volume", as shown
below. This job will fail when placed unless `docker.volumes.enabled = true`.
```hcl
mounts = [
@@ -406,6 +414,7 @@ vulnerabilities in handling of job `template` and `artifact` stanzas:
protection with the
[`template.disable_file_sandbox`](/docs/configuration/client#template-parameters)
field in the client configuration.
- The paths for `template.source`, `template.destination`, and
`artifact.destination` are validated on job submission to ensure the paths
do not escape the file sandbox. It was possible to use interpolation to
@@ -422,17 +431,16 @@ v0.10.7. To work around the bug, use a relative path.
### Same-Node Scheduling Penalty Removed
Nomad 0.10.4 includes a fix to the scheduler that removes the
same-node penalty for allocations that have not previously failed. In
earlier versions of Nomad, the node where an allocation was running
was penalized from receiving updated versions of that allocation,
resulting in a higher chance of the allocation being placed on a new
node. This was changed so that the penalty only applies to nodes where
the previous allocation has failed or been rescheduled, to reduce the
risk of correlated failures on a host. Scheduling weighs a number of
factors, but this change should reduce movement of allocations that
are being updated from a healthy state. You can view the placement
metrics for an allocation with `nomad alloc status -verbose`.
Nomad 0.10.4 includes a fix to the scheduler that removes the same-node penalty
for allocations that have not previously failed. In earlier versions of Nomad,
the node where an allocation was running was penalized from receiving updated
versions of that allocation, resulting in a higher chance of the allocation
being placed on a new node. This was changed so that the penalty only applies to
nodes where the previous allocation has failed or been rescheduled, to reduce
the risk of correlated failures on a host. Scheduling weighs a number of
factors, but this change should reduce movement of allocations that are being
updated from a healthy state. You can view the placement metrics for an
allocation with `nomad alloc status -verbose`.
### Additional Environment Variable Filtering
@@ -454,17 +462,17 @@ allows trusted operators with a client certificate signed by the CA to send RPC
calls as a Nomad client or server node, bypassing access control and accessing
any secrets available to a client.
Nomad clusters configured for mTLS following the [Securing Nomad with TLS][tls-guide]
guide or the [Vault PKI Secrets Engine Integration][tls-vault-guide] guide
should already have certificates that will pass validation. Before upgrading to
Nomad 0.10.3, operators using mTLS with `verify_server_hostname = true` should
confirm that the common name or SAN of all Nomad client node certs is
`client.<region>.nomad`, and that the common name or SAN of all Nomad server
node certs is `server.<region>.nomad`.
Nomad clusters configured for mTLS following the [Securing Nomad with
TLS][tls-guide] guide or the [Vault PKI Secrets Engine
Integration][tls-vault-guide] guide should already have certificates that will
pass validation. Before upgrading to Nomad 0.10.3, operators using mTLS with
`verify_server_hostname = true` should confirm that the common name or SAN of
all Nomad client node certs is `client.<region>.nomad`, and that the common name
or SAN of all Nomad server node certs is `server.<region>.nomad`.
### Connection Limits Added
Nomad 0.10.3 introduces the [limits][limits] agent configuration parameters for
Nomad 0.10.3 introduces the [limits][] agent configuration parameters for
mitigating denial of service attacks from users who are not authenticated via
mTLS. The default limits stanza is:
@@ -488,9 +496,9 @@ could cause unintended resource usage.
### Preemption Panic Fixed
Nomad 0.9.7 and 0.10.2 fix a [server crashing bug][gh-6787] present in
scheduler preemption since 0.9.0. Users unable to immediately upgrade Nomad can
[disable preemption][preemption-api] to avoid the panic.
Nomad 0.9.7 and 0.10.2 fix a [server crashing bug][gh-6787] present in scheduler
preemption since 0.9.0. Users unable to immediately upgrade Nomad can [disable
preemption][preemption-api] to avoid the panic.
### Dangling Docker Container Cleanup
@@ -498,7 +506,8 @@ Nomad 0.10.2 addresses an issue occurring in heavily loaded clients, where
containers are started without being properly managed by Nomad. Nomad 0.10.2
introduced a reaper that detects and kills such containers.
Operators may opt to run reaper in a dry-mode or disabling it through a client config.
Operators may opt to run the reaper in dry-run mode or disable it through a
client config.
For more information, see [Docker Dangling containers][dangling-containers].
@@ -506,14 +515,14 @@ For more information, see [Docker Dangling containers][dangling-containers].
### Deployments
Nomad 0.10 enables rolling deployments for service jobs by default
and adds a default update stanza when a service job is created or updated.
This does not affect jobs with an update stanza.
Nomad 0.10 enables rolling deployments for service jobs by default and adds a
default update stanza when a service job is created or updated. This does not
affect jobs with an update stanza.
In pre-0.10 releases, when updating a service job without an update stanza,
all existing allocations are stopped while new allocations start up,
and this may cause a service degradation or an outage.
You can regain this behavior and disable deployments by setting `max_parallel` to 0.
In pre-0.10 releases, when updating a service job without an update stanza, all
existing allocations are stopped while new allocations start up, and this may
cause a service degradation or an outage. You can regain this behavior and
disable deployments by setting `max_parallel` to 0.
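
For instance, a hedged sketch of opting out at the job level:

```hcl
job "example" {
  update {
    # Disables deployments, restoring the pre-0.10 update behavior.
    max_parallel = 0
  }
  # ...
}
```
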
For more information, see [`update` stanza][update].
@@ -521,11 +530,27 @@ For more information, see [`update` stanza][update].
### Template Rendering
Nomad 0.9.5 includes security fixes for privilege escalation vulnerabilities in handling of job `template` stanzas:
Nomad 0.9.5 includes security fixes for privilege escalation vulnerabilities in
handling of job `template` stanzas:
- The client host's environment variables are now cleaned before rendering the template. If a template includes the `env` function, the job should include an [`env`](/docs/job-specification/env) stanza to allow access to the variable in the template.
- The `plugin` function is no longer permitted by default and will raise an error if used in a template. Operator can opt-in to permitting this function with the new [`template.function_blacklist`](/docs/configuration/client#template-parameters) field in the client configuration.
- The `file` function has been changed to restrict paths to fall inside the task directory by default. Paths that used the `NOMAD_TASK_DIR` environment variable to prefix file paths should work unchanged. Relative paths or symlinks that point outside the task directory will raise an error. An operator can opt-out of this protection with the new [`template.disable_file_sandbox`](/docs/configuration/client#template-parameters) field in the client configuration.
- The client host's environment variables are now cleaned before rendering the
template. If a template includes the `env` function, the job should include an
[`env`](/docs/job-specification/env) stanza to allow access to the variable in
the template.
- The `plugin` function is no longer permitted by default and will raise an
error if used in a template. Operators can opt in to permitting this function
with the new
[`template.function_blacklist`](/docs/configuration/client#template-parameters)
field in the client configuration.
- The `file` function has been changed to restrict paths to fall inside the task
directory by default. Paths that used the `NOMAD_TASK_DIR` environment
variable to prefix file paths should work unchanged. Relative paths or
symlinks that point outside the task directory will raise an error. An
operator can opt-out of this protection with the new
[`template.disable_file_sandbox`](/docs/configuration/client#template-parameters)
field in the client configuration.
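
A hedged sketch of the corresponding client configuration, assuming the
`template` parameters described in the client configuration reference (the
values shown are believed to match the defaults):

```hcl
client {
  template {
    # Keep the "plugin" function denied and leave the file sandbox enabled.
    function_blacklist   = ["plugin"]
    disable_file_sandbox = false
  }
}
```
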
## Nomad 0.9.0
@@ -543,6 +568,7 @@ All task drivers have become [plugins][plugins] in Nomad 0.9.0. There are two
user visible differences between 0.8 and 0.9 drivers:
- [LXC][lxc] is now community supported and distributed independently.
- Task driver [`config`][task-config] stanzas are no longer validated by
the [`nomad job validate`][validate] command. This is a regression that will
be fixed in a future release.
@@ -578,8 +604,8 @@ Values containing whitespace will be quoted:
### HCL2 Transition
Nomad 0.9.0 begins a transition to [HCL2][hcl2], the next version of the
HashiCorp configuration language. While Nomad has begun integrating HCL2,
users will need to continue to use HCL1 in Nomad 0.9.0 as the transition is
HashiCorp configuration language. While Nomad has begun integrating HCL2, users
will need to continue to use HCL1 in Nomad 0.9.0 as the transition is
incomplete.
If you interpolate variables in your [`task.config`][task-config] containing
@@ -616,14 +642,14 @@ This only affects users who interpolate unusual variables with multiple
consecutive dots in their task `config` stanza. All other interpolation is
unchanged.
Since HCL2 uses dotted object notation for interpolation users should
transition away from variable names with multiple consecutive dots.
Since HCL2 uses dotted object notation for interpolation, users should transition
away from variable names with multiple consecutive dots.
### Downgrading clients
Due to the large refactor of the Nomad client in 0.9, downgrading to a
previous version of the client after upgrading it to Nomad 0.9 is not supported.
To downgrade safely, users should erase the Nomad client's data directory.
Due to the large refactor of the Nomad client in 0.9, downgrading to a previous
version of the client after upgrading it to Nomad 0.9 is not supported. To
downgrade safely, users should erase the Nomad client's data directory.
### `port_map` Environment Variable Changes
@@ -634,8 +660,8 @@ However, in Nomad 0.9.0 no parameters in a driver's `config` stanza, including
its `port_map`, are available for interpolation. This means `{{ env NOMAD_PORT_<label> }}` in a `template` stanza or `HTTP_PORT = "${NOMAD_PORT_http}"` in an `env` stanza will now interpolate the _host_ ports,
not the container's.
Nomad 0.10 introduced Task Group Networking which natively supports port
mapping without relying on task driver specific `port_map` fields. The
Nomad 0.10 introduced Task Group Networking, which natively supports port
mapping without relying on task-driver-specific `port_map` fields. The
[`to`](/docs/job-specification/network#to) field on group network port stanzas
will be interpolated properly. Please see the
[`network`](/docs/job-specification/network/) stanza documentation for details.
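For illustration, a hedged sketch of group networking with a `to` mapping; the
image name and port number are assumptions.

```hcl
group "web" {
  network {
    port "http" {
      # Port inside the container; the host port is assigned dynamically.
      to = 8080
    }
  }

  task "app" {
    driver = "docker"

    config {
      image = "example/app:1.0" # hypothetical image
      ports = ["http"]
    }

    env {
      # With group networking, this interpolates the port the task should
      # bind to, as described above.
      HTTP_PORT = "${NOMAD_PORT_http}"
    }
  }
}
```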
@@ -645,16 +671,15 @@ will be interpolated properly. Please see the
### Raft Protocol Version Compatibility
When upgrading to Nomad 0.8.0 from a version lower than 0.7.0, users will need
to set the
[`raft_protocol`](/docs/configuration/server#raft_protocol) option
in their `server` stanza to 1 in order to maintain backwards compatibility with
the old servers during the upgrade. After the servers have been migrated to
version 0.8.0, `raft_protocol` can be moved up to 2 and the servers restarted
to match the default.
to set the [`raft_protocol`](/docs/configuration/server#raft_protocol) option in
their `server` stanza to 1 in order to maintain backwards compatibility with the
old servers during the upgrade. After the servers have been migrated to version
0.8.0, `raft_protocol` can be moved up to 2 and the servers restarted to match
the default.
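A rough sketch of the intermediate `server` stanza during such an upgrade; the
other values are placeholders.

```hcl
server {
  enabled          = true
  bootstrap_expect = 3

  # Hold at 1 while pre-0.8.0 servers remain in the cluster; raise to 2
  # (the default) and restart once every server has been upgraded.
  raft_protocol = 1
}
```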
The Raft protocol must be stepped up in this way; only adjacent version numbers are
compatible (for example, version 1 cannot talk to version 3). Here is a table of the
Raft Protocol versions supported by each Nomad version:
The Raft protocol must be stepped up in this way; only adjacent version numbers
are compatible (for example, version 1 cannot talk to version 3). Here is a
table of the Raft Protocol versions supported by each Nomad version:
<table>
<thead>
@@ -679,18 +704,44 @@ Raft Protocol versions supported by each Nomad version:
</tbody>
</table>
In order to enable all [Autopilot](https://learn.hashicorp.com/tutorials/nomad/autopilot) features, all servers
in a Nomad cluster must be running with Raft protocol version 3 or later.
In order to enable all
[Autopilot](https://learn.hashicorp.com/tutorials/nomad/autopilot) features, all
servers in a Nomad cluster must be running with Raft protocol version 3 or
later.
#### Upgrading to Raft Protocol 3
This section provides details on upgrading to Raft Protocol 3 in Nomad 0.8 and higher. Raft protocol version 3 requires Nomad running 0.8.0 or newer on all servers in order to work. See [Raft Protocol Version Compatibility](/docs/upgrade/upgrade-specific#raft-protocol-version-compatibility) for more details. Also the format of `peers.json` used for outage recovery is different when running with the latest Raft protocol. See [Manual Recovery Using peers.json](https://learn.hashicorp.com/tutorials/nomad/outage-recovery#manual-recovery-using-peersjson) for a description of the required format.
This section provides details on upgrading to Raft Protocol 3 in Nomad 0.8 and
higher. Raft protocol version 3 requires Nomad running 0.8.0 or newer on all
servers in order to work. See [Raft Protocol Version
Compatibility](/docs/upgrade/upgrade-specific#raft-protocol-version-compatibility)
for more details. Also the format of `peers.json` used for outage recovery is
different when running with the latest Raft protocol. See [Manual Recovery Using
peers.json](https://learn.hashicorp.com/tutorials/nomad/outage-recovery#manual-recovery-using-peersjson)
for a description of the required format.
Please note that the Raft protocol is different from Nomad's internal protocol as shown in commands like `nomad server members`. To see the version of the Raft protocol in use on each server, use the `nomad operator raft list-peers` command.
Please note that the Raft protocol is different from Nomad's internal protocol
as shown in commands like `nomad server members`. To see the version of the Raft
protocol in use on each server, use the `nomad operator raft list-peers`
command.
The easiest way to upgrade servers is to have each server leave the cluster, upgrade its `raft_protocol` version in the `server` stanza, and then add it back. Make sure the new server joins successfully and that the cluster is stable before rolling the upgrade forward to the next server. It's also possible to stand up a new set of servers, and then slowly stand down each of the older servers in a similar fashion.
The easiest way to upgrade servers is to have each server leave the cluster,
upgrade its `raft_protocol` version in the `server` stanza, and then add it
back. Make sure the new server joins successfully and that the cluster is stable
before rolling the upgrade forward to the next server. It's also possible to
stand up a new set of servers, and then slowly stand down each of the older
servers in a similar fashion.
When using Raft protocol version 3, servers are identified by their `node-id` instead of their IP address when Nomad makes changes to its internal Raft quorum configuration. This means that once a cluster has been upgraded with servers all running Raft protocol version 3, it will no longer allow servers running any older Raft protocol versions to be added. If running a single Nomad server, restarting it in-place will result in that server not being able to elect itself as a leader. To avoid this, either set the Raft protocol back to 2, or use [Manual Recovery Using peers.json](https://learn.hashicorp.com/tutorials/nomad/outage-recovery#manual-recovery-using-peersjson) to map the server to its node ID in the Raft quorum configuration.
When using Raft protocol version 3, servers are identified by their `node-id`
instead of their IP address when Nomad makes changes to its internal Raft quorum
configuration. This means that once a cluster has been upgraded with servers all
running Raft protocol version 3, it will no longer allow servers running any
older Raft protocol versions to be added. If running a single Nomad server,
restarting it in-place will result in that server not being able to elect itself
as a leader. To avoid this, either set the Raft protocol back to 2, or use
[Manual Recovery Using
peers.json](https://learn.hashicorp.com/tutorials/nomad/outage-recovery#manual-recovery-using-peersjson)
to map the server to its node ID in the Raft quorum configuration.
### Node Draining Improvements
@@ -701,8 +752,8 @@ being drained. Nomad 0.8 now supports a [`migrate`][migrate] stanza in job
specifications to control how many allocations may be migrated at once, and the
default will be used for existing jobs.
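As a sketch, a job-level `migrate` stanza might look like the following; the
values shown are believed to be the documented defaults and are included purely
for illustration.

```hcl
job "example" {
  # ...

  migrate {
    # Number of allocations that may be migrated at the same time.
    max_parallel = 1

    # How migrated allocations are judged healthy before continuing.
    health_check     = "checks"
    min_healthy_time = "10s"
    healthy_deadline = "5m"
  }
}
```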
The `drain` command now blocks until the drain completes. To get the Nomad
0.7.1 and earlier drain behavior use the command: `nomad node drain -enable -force -detach <node-id>`
The `drain` command now blocks until the drain completes. To get the Nomad 0.7.1
and earlier drain behavior, use the command: `nomad node drain -enable -force -detach <node-id>`
See the [`migrate` stanza documentation][migrate] and [Decommissioning Nodes
guide](https://learn.hashicorp.com/tutorials/nomad/node-drain) for details.
@@ -750,12 +801,11 @@ as the old style will be deprecated in future versions of Nomad.
### RPC Advertise Address
The behavior of the [advertised RPC
address](/docs/configuration#rpc-1) has changed to be only used
to advertise the RPC address of servers to client nodes. Server to server
communication is done using the advertised Serf address. Existing cluster's
should not be effected but the advertised RPC address may need to be updated to
allow connecting client's over a NAT.
The behavior of the [advertised RPC address](/docs/configuration#rpc-1) has
changed: it is now used only to advertise the RPC address of servers to client
nodes. Server-to-server communication is done using the advertised Serf
address. Existing clusters should not be affected, but the advertised RPC
address may need to be updated to allow connecting clients over a NAT.
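For example, a hedged sketch of an `advertise` stanza that sets these addresses
explicitly; all addresses below are placeholders.

```hcl
advertise {
  # Advertised to client nodes for RPC connections, for example a NAT'd
  # address reachable from outside the server network (placeholder value).
  rpc = "203.0.113.10:4647"

  # Used for server-to-server communication (placeholder values).
  serf = "10.0.0.10:4648"
  http = "10.0.0.10:4646"
}
```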
## Nomad 0.6.0
@@ -775,8 +825,8 @@ If you manually configure `advertise` addresses no changes are necessary.
The change to the default advertised IP also affects clients that do not specify
which network_interface to use. If you have several routable IPs, it is advised
to configure the client's [network
interface](/docs/configuration/client#network_interface)
such that tasks bind to the correct address.
interface](/docs/configuration/client#network_interface) such that tasks bind to
the correct address.
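A minimal sketch of pinning the client to a specific interface; the interface
name is an assumption.

```hcl
client {
  enabled = true

  # Bind task addresses to a specific routable interface rather than the
  # default; "eth1" is an illustrative placeholder.
  network_interface = "eth1"
}
```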
## Nomad 0.5.5
@@ -888,3 +938,5 @@ deleted and then Nomad 0.3.0 can be launched.
[update]: /docs/job-specification/update
[validate]: /docs/commands/job/validate
[vault_grace]: /docs/job-specification/template
[node drain]: https://www.nomadproject.io/docs/upgrade#5-upgrade-clients
[`template.disable_file_sandbox`]: /docs/configuration/client#template-parameters

View File

@@ -3,14 +3,14 @@ layout: intro
page_title: Introduction
sidebar_title: What is Nomad?
description: >-
Welcome to the intro guide to Nomad! This guide is the best place to start
Welcome to the intro guide to Nomad. This guide is the best place to start
with Nomad. We cover what Nomad is, what problems it can solve, how it
compares to existing software, and a quick start for using Nomad.
---
# Introduction to Nomad
Welcome to the intro guide to Nomad! This guide is the best place to start with Nomad. We cover what Nomad is, what problems it can solve, how it compares to existing software, and how you can get started using it. If you are familiar with the basics of Nomad, the [documentation](/docs) and [HashiCorp Learn guides](https://learn.hashicorp.com/nomad) provides a more detailed reference of available features.
Welcome to the intro guide to Nomad. This guide is the best place to start with Nomad. We cover what Nomad is, what problems it can solve, how it compares to existing software, and how you can get started using it. If you are familiar with the basics of Nomad, the [documentation](/docs) and [HashiCorp Learn guides](https://learn.hashicorp.com/nomad) provide a more detailed reference of available features.
<iframe
src="https://www.youtube.com/embed/s_Fm9UtL4YU"

View File

@@ -28,7 +28,7 @@ centers and multiple clouds.
## Legacy Application Deployment
A virtual machine based application deployment strategy can lead to low hardware
utlization rates and high infrastructure costs. While a Docker-based deployment
utilization rates and high infrastructure costs. While a Docker-based deployment
strategy can be impractical for some organizations or use cases, the potential for
greater automation, increased resilience, and reduced cost is very attractive.
Nomad natively supports running legacy applications, static binaries, JARs, and
@@ -52,7 +52,7 @@ easier to adopt the paradigm.
As data science and analytics teams grow in size and complexity, they increasingly
benefit from highly performant and scalable tools that can run batch workloads with
minimal operational overhead. Nomad can natively run batch jobs and [parameterized](https://www.hashicorp.com/blog/replacing-queues-with-nomad-dispatch) jobs.
minimal operational overhead. Nomad can natively run batch jobs and [parameterized](https://www.hashicorp.com/blog/replacing-queues-with-nomad-dispatch) jobs.
Nomad's architecture enables easy scalability and an optimistically
concurrent scheduling strategy that can yield [thousands of container deployments per
second](https://www.hashicorp.com/c1m). Alternatives are overly complex and limited

View File

@@ -12,7 +12,7 @@ export default function NonContainerizedApplicationOrchestrationPage() {
textSplit={{
heading: 'Non-Containerized Orchestration',
content:
'Deploy, manage, and scale your non-containerized applications using the Java, Qemu, or exec drivers.',
'Deploy, manage, and scale your non-containerized applications using the Java, QEMU, or exec drivers.',
textSide: 'right',
links: [
{