Mirror of https://github.com/kemko/nomad.git, synced 2026-01-01 16:05:42 +03:00

Docs SEO: Update Concepts for search (#24757)
* Update for search engine optimization

* Update descriptions and add intro body summary paragraph

* Apply suggestions from code review

Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>

---------

Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>

@@ -1,21 +1,20 @@
 ---
 layout: docs
 page_title: Access Control List (ACL)
-description: Learn about Nomad's ACL subsystem
+description: Learn how Nomad's Access Control List (ACL) security system uses tokens, policies, roles, and capabilities to control access to data and resources.
 ---
 
 # Access Control List (ACL)
 
-The Nomad ACL system is designed to be intuitive, high-performance, and to
-provide administrative insight. At the highest level, there are four major
-components to the ACL system: tokens, policies, roles, and capabilities. The
-two components named auth methods and binding rules are specific to Nomad's
-Single Sign-On (SSO) ACL capabilities.
+This page provides conceptual information about Nomad's Access Control List
+(ACL) security system. At the highest level, Nomad's ACL system has tokens,
+policies, roles, and capabilities. Additionally, Nomad's Single Sign-On (SSO)
+ACL capabilities use auth methods and binding rules to restrict access.
 
 The Nomad [access control tutorials][] provide detailed information and
 guidance on Nomad ACL system.
 
-### Policy
+## Policy
 
 Policies consist of a set of rules defining the capabilities or actions to be
 granted. For example, a `readonly` policy might only grant the ability to list
@@ -27,13 +26,13 @@ Nomad ACL token for accessing the objects in a Nomad cluster, objects like
 namespaces, node, agent, operator, quota, etc. For more information on writing
 policies, see the [ACL policy reference doc][].
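
To ground the rule syntax, here is a minimal sketch of what a `readonly`-style policy like the one mentioned above could look like in HCL; the namespace name and the exact grants are illustrative, not part of the commit:

```hcl
# A sketch of a readonly-style ACL policy. "read" grants list and read
# access to jobs in the default namespace, plus read-only access to the
# agent and node APIs mentioned above.
namespace "default" {
  policy = "read"
}

agent {
  policy = "read"
}

node {
  policy = "read"
}
```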
 
-### Role
+## Role
 
 Roles group one or more ACL policies into a container which can then be used to
 generate ACL tokens for authorisation. This abstraction allows easier control
 and updating of ACL permissions, particularly in larger, more diverse clusters.
 
-### Token
+## Token
 
 Requests to Nomad are authenticated using a bearer token. Each ACL token has a
 public Accessor ID which is used to name a token and a Secret ID which is used
@@ -49,14 +48,14 @@ other regions. Otherwise, tokens are created locally in the region the request
 was made and not replicated. `Local` tokens cannot be used for cross-region
 requests since they are not replicated between regions.
 
-### Workload Identity
+## Workload Identity
 
 Nomad allocations can receive workload identities in the form of a
 [JSON Web Token (JWT)][jwt]. The
 [Workload Identity concept page][workload identity] has more information on
 this topic.
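
As a sketch of how a task consumes the workload identity described here, the job specification's `identity` block can expose the JWT to the task; the task name and driver below are illustrative:

```hcl
task "app" {
  driver = "docker"

  # Expose the task's workload identity JWT to the task as the
  # NOMAD_TOKEN environment variable and as a file in the task's
  # secrets directory.
  identity {
    env  = true
    file = true
  }
}
```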
 
-### Auth Method
+## Auth Method
 
 Authentication methods dictate how Nomad should talk to SSO providers when a
 user requests to authenticate using one. Currently, Nomad supports the [OpenID
@@ -64,7 +63,7 @@ Connect (OIDC)][oidc] SSO workflow which allows users to log in to Nomad via
 applications such as [Auth0][auth0], [Okta][okta], and [Vault][vault], and
 non-interactive login via externally-issued [JSON Web Tokens (JWT)][jwt].
 
-### Binding Rule
+## Binding Rule
 
 Binding rules provide a mapping between a Nomad user's SSO authorisation claims
 and internal Nomad objects such as ACL Roles and ACL Policies. A binding rule

@@ -2,28 +2,33 @@
 layout: docs
 page_title: Federation
 description: |-
-  Nomad federation is a multi-cluster orchestration and management feature that allows multiple
-  Nomad clusters, defined as a region, to work together seamlessly.
+  Nomad federation enables multiple Nomad clusters in different regions to work together seamlessly. Learn about cross-region request forwarding, replication, and Nomad Enterprise's multi-region job deployments.
 ---
 
 # Federation
 
-Nomad federation is a multi-cluster orchestration and management feature that allows multiple Nomad
-clusters, defined as a region, to work together seamlessly. By federating clusters, you benefit
-from improved scalability, fault tolerance, and centralized management of workloads across various
-data centers or geographical locations.
+This page provides conceptual information about the Nomad federation feature.
+Learn about cross-region request forwarding, replication, and Nomad Enterprise's
+multi-region job deployments.
+
+Nomad federation is a multi-cluster orchestration and management feature that
+enables multiple Nomad clusters running in different regions to work together
+seamlessly. By federating clusters, you benefit from improved scalability, fault
+tolerance, and centralized management of workloads across various data centers
+or geographical locations.
 
 ## Cross-Region request forwarding
 
-API calls can include a `region` query parameter that defines the Nomad region the query is
-specified for. If this is not the local region, Nomad transparently forwards the request to a
-server in the requested region. When you omit the query parameter, Nomad uses the region of the
-server that is processing the request.
+API calls can include a `region` query parameter that defines the Nomad region
+the query is specified for. If this is not the local region, Nomad transparently
+forwards the request to a server in the requested region. When you omit the
+query parameter, Nomad uses the region of the server that is processing the
+request.
 
 ## Replication
 
-Nomad writes the following objects in the authoritative region and replicates them to all federated
-regions:
+Nomad writes the following objects in the authoritative region and replicates
+them to all federated regions:
 
 - ACL [policies][acl_policy], [roles][acl_role], [auth methods][acl_auth_method],
   [binding rules][acl_binding_rule], and [global tokens][acl_token]
@@ -32,25 +37,25 @@ regions:
 - [Quota specifications][quota]
 - [Sentinel policies][sentinel_policies]
 
-When creating, updating, or deleting these objects, Nomad always sends the request to the
-authoritative region using RPC forwarding.
+When creating, updating, or deleting these objects, Nomad always sends the
+request to the authoritative region using RPC forwarding.
 
-Nomad starts replication routines on each federated cluster's leader server in a hub and spoke
-design. The routines then use blocking queries to receive updates from the authoritative region to
-mirror in their own state store. These routines also implement rate limiting, so that busy clusters
-do not degrade due to overly aggressive replication processes.
+Nomad starts replication routines on each federated cluster's leader server in a
+hub and spoke design. The routines then use blocking queries to receive updates
+from the authoritative region to mirror in their own state store. These routines
+also implement rate limiting, so that busy clusters do not degrade due to overly
+aggressive replication processes.
 
-<Note>
-  Nomad writes ACL local tokens in the region where you make the request and does not replicate
-  those local tokens.
-</Note>
+<Note> Nomad writes ACL local tokens in the region where you make the request
+and does not replicate those local tokens. </Note>
 
 ## Multi-Region job deployments <EnterpriseAlert inline />
 
-Nomad job deployments can use the [`multiregion`][] block when running in federated mode.
-Multiregion configuration instructs Nomad to register and run the job on all the specified regions,
-removing the need for multiple job specification copies and registration on each region.
-Multiregion jobs do not provide regional failover in the event of failure.
+Nomad job deployments can use the [`multiregion`][] block when running in
+federated mode. Multiregion configuration instructs Nomad to register and run
+the job on all the specified regions, removing the need for multiple job
+specification copies and registration on each region. Multiregion jobs do not
+provide regional failover in the event of failure.
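
A minimal sketch of such a `multiregion` block, assuming two federated regions named `west` and `east`; the region names, counts, and datacenters are illustrative:

```hcl
job "example" {
  multiregion {
    # Roll the deployment out one region at a time.
    strategy {
      max_parallel = 1
    }

    region "west" {
      count       = 2
      datacenters = ["west-1"]
    }

    region "east" {
      count       = 1
      datacenters = ["east-1"]
    }
  }
}
```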
 
 [acl_policy]: /nomad/docs/concepts/acl#policy
 [acl_role]: /nomad/docs/concepts/acl#role

@@ -1,13 +1,17 @@
 ---
 layout: docs
 page_title: Architecture
-description: Learn about the internal architecture of Nomad.
+description: |-
+  Nomad's system architecture supports a "run any workload anywhere" approach to scheduling and orchestration. Learn how regions, servers, and clients interact, and how Nomad clusters replicate data and run workloads.
 ---
 
 # Architecture
 
-Nomad is a complex system that has many different pieces. To help both users and developers of Nomad
-build a mental model of how it works, this page documents the system architecture.
+This page provides conceptual information on Nomad architecture. Learn how regions, servers, and clients interact, and how Nomad clusters replicate data and run workloads.
+
+Nomad is a complex system that has many different pieces. To help both users and
+developers of Nomad build a mental model of how it works, this page documents
+the system architecture.
 
 Refer to the [glossary][] for more details on some of the terms discussed here.
 

@@ -2,16 +2,14 @@
 layout: docs
 page_title: Consensus Protocol
 description: |-
-  Nomad uses a consensus protocol to provide Consistency as defined by CAP.
-  The consensus protocol is based on Raft: In search of an Understandable
-  Consensus Algorithm. For a visual explanation of Raft, see The Secret Lives of
-  Data.
+  Nomad uses a consensus protocol based on Raft to manage cluster node state. Learn about logs, Finite State Machine (FSM), peer set, quorum, committed entry, and leader.
 ---
 
 # Consensus Protocol
 
-Nomad uses a [consensus protocol](<https://en.wikipedia.org/wiki/Consensus_(computer_science)>)
-to provide [Consistency (as defined by CAP)](https://en.wikipedia.org/wiki/CAP_theorem).
+This page provides conceptual information on Nomad's [consensus
+protocol](<https://en.wikipedia.org/wiki/Consensus_(computer_science)>), which
+provides [Consistency as defined by CAP](https://en.wikipedia.org/wiki/CAP_theorem).
 The consensus protocol is based on
 ["Raft: In search of an Understandable Consensus Algorithm"](https://raft.github.io/raft.pdf).
 For a visual explanation of Raft, see [The Secret Lives of Data](http://thesecretlivesofdata.com/raft).
@@ -182,7 +180,7 @@ failure scenario.
 <td>2</td>
 <td>0</td>
 </tr>
-<tr class="warning">
+<tr>
 <td>3</td>
 <td>2</td>
 <td>1</td>
@@ -192,7 +190,7 @@ failure scenario.
 <td>3</td>
 <td>1</td>
 </tr>
-<tr class="warning">
+<tr>
 <td>5</td>
 <td>3</td>
 <td>2</td>

@@ -1,10 +1,12 @@
 ---
 layout: docs
-page_title: CPU
-description: Learn about how Nomad manages CPU resources.
+page_title: How Nomad Uses CPU
+description: Learn how Nomad discovers and uses CPU resources on nodes in order to place and run workloads. Nomad Enterprise can use non-uniform memory access (NUMA) scheduling, which is optimized for the NUMA topology of a client node.
 ---
 
-# Modern processors
+# How Nomad Uses CPUs
 
+This page provides conceptual information on how Nomad discovers and uses CPU resources on nodes in order to place and run workloads.
+
 Every Nomad node has a Central Processing Unit (CPU) providing the computational
 power needed for running operating system processes. Nomad uses the CPU to
@@ -1,21 +1,25 @@
 ---
 layout: docs
-page_title: Filesystem
+page_title: Allocation Filesystems
 description: |-
-  Nomad creates an allocation working directory for every allocation. Learn what
-  goes into the working directory and how it interacts with Nomad task drivers.
+  Learn how Nomad uses allocation working directories to store job task templates, storage volumes, artifacts, dispatch payloads, and logs. Review image and chroot isolation, as well as when Nomad uses isolation mode.
 ---
 
-# Filesystem
+# Allocation Filesystems
 
-Nomad creates a working directory for each allocation on a client. This
-directory can be found in the Nomad [`data_dir`] at
+This page provides conceptual information about how Nomad uses allocation
+working directories to store job task templates, storage volumes, artifacts,
+dispatch payloads, and logs. Review image and chroot isolation, as well as when
+Nomad does not use any isolation mode.
+
+Nomad creates a working directory for each allocation on a client. Find this
+directory in the Nomad [`data_dir`] at
 `./alloc/«alloc_id»`. The allocation working directory is where Nomad
-creates task directories and directories shared between tasks, write logs for
+creates task directories and directories shared between tasks, writes logs for
 tasks, and downloads artifacts or templates.
 
 An allocation with two tasks (named `task1` and `task2`) will have an
-allocation directory like the one below.
+allocation directory like this example.
 
 ```shell-session
 .
@@ -2,12 +2,15 @@
 layout: docs
 page_title: Gossip Protocol
 description: |-
-  Nomad uses a gossip protocol to manage membership. All of this is provided
-  through the use of the Serf library.
+  Learn how Nomad's gossip protocol uses the Serf library to provide server membership management, which includes cross-region requests, failure detection, and automatic clustering using a consensus protocol.
 ---
 
 # Gossip Protocol
 
+This page provides conceptual information about Nomad's gossip protocol, which
+provides server membership management, cross-region requests, failure detection,
+and automatic clustering using a consensus protocol.
+
 Nomad uses the [Serf library][serf] to provide a [gossip
 protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to manage membership.
 The gossip protocol used by Serf is based on ["SWIM: Scalable Weakly-consistent

@@ -2,8 +2,7 @@
 layout: docs
 page_title: Concepts
 description: >-
-  This section covers the core concepts of Nomad and explains technical details of
-  Nomads operation.
+  Learn about Nomad's architecture, core concepts, and behavior.
 ---
 
 # Nomad Concepts

@@ -2,13 +2,14 @@
 layout: docs
 page_title: Job
 description: |-
-  Learn about Nomad's job feature, which is how you deploy your apps, maintenance scripts, cron jobs, and similar tasks.
+  Learn how a Nomad workload, called a job, deploys your apps, maintenance scripts, cron jobs, and similar tasks. Review job statuses and how Nomad versions your jobs.
 ---
 
 # Job
 
-This page explains jobs, which are the main Nomad constructs for workloads that
-run your apps, maintenance scripts, cron jobs, and other tasks.
+This page contains conceptual information about jobs, which are the main Nomad
+constructs for workloads that run your apps, maintenance scripts, cron jobs, and
+other tasks. Review job statuses and how Nomad versions your jobs.
 
 ## Background

@@ -1,21 +1,21 @@
 ---
 layout: docs
 page_title: Node Pools
-description: Learn about the internal architecture of Nomad.
+description: Nomad's node pools feature groups clients and segments infrastructure into logical units so that jobs have control over client allocation placement. Review node pool replication in multi-region clusters, built-in node pools, node pool patterns, and enterprise features such as scheduler configuration, node pool governance, and multi-region jobs.
 ---
 
 # Node Pools
 
-Node pools are a way to group clients and segment infrastructure into logical
-units that can be targeted by jobs for a strong control over where allocations
-are placed.
+This page contains conceptual information about Nomad's node pools feature. Use
+node pools to group clients and segment infrastructure into logical units so
+that jobs control allocation placement. Review node pool replication in multi-region clusters, built-in node pools, node pool patterns, and enterprise features such as scheduler configuration, node pool governance, and multi-region jobs.
 
 Without node pools, allocations for a job can be placed in any eligible client
 in the cluster. Affinities and constraints can help express preferences for
 certain nodes, but they do not easily prevent other jobs from placing
 allocations in a set of nodes.
 
-A node pool can be created using the [`nomad node pool apply`][cli_np_apply]
+Create a node pool using the [`nomad node pool apply`][cli_np_apply]
 command and passing a node pool [specification file][np_spec].
 
 ```hcl

@@ -1,14 +1,16 @@
 ---
 layout: docs
-page_title: Base Plugin
-description: Learn about how to author a Nomad plugin.
+page_title: Nomad Base Plugin
+description: |-
+  Learn how to create a Nomad plugin so you can extend Nomad's task and device driver features. Review device plugin API functions that you must implement in your device plugin.
 ---
 
-# Base Plugin
+# Nomad Base Plugin
 
-The base plugin is a special plugin type implemented by all plugins. It allows
-for common plugin operations such as defining a configuration schema and
-version information.
+This page provides conceptual information on Nomad's base plugin, which is a
+special plugin type implemented by all plugins. The base plugin allows for
+common plugin operations such as defining a configuration schema and version
+information.
 
 ## Plugin API

@@ -1,11 +1,14 @@
 ---
 layout: docs
 page_title: Network Plugins
-description: Learn how Nomad manages custom user-specified network configurations.
+description: |-
+  Nomad's network plugin support enables scheduling tasks with custom network configuration plugins that conform to the Container Network Interface (CNI). Learn about the CNI reference plugins that are required for Nomad bridge networking and Consul service mesh.
 ---
 
 # Network plugins
 
+This page provides conceptual information on Nomad's network plugin support, which enables scheduling tasks with custom network configuration plugins that conform to the Container Network Interface (CNI). Learn about the CNI reference plugins that are required for Nomad bridge networking and Consul service mesh.
+
 Nomad has built-in support for scheduling compute resources such as
 CPU, memory, and networking. Nomad's network plugin support extends
 this to allow scheduling tasks with purpose-created or specialty network

@@ -1,11 +1,14 @@
 ---
 layout: docs
 page_title: Storage Plugins
-description: Learn how Nomad manages dynamic storage plugins.
+description: |-
+  Nomad's storage plugin support enables scheduling tasks with external storage volumes that conform to the Container Storage Interface (CSI), including AWS Elastic Block Storage (EBS) volumes, Google Cloud Platform (GCP) persistent disks, Ceph, and Portworx. Learn about controller, node, and monolith storage plugins, as well as storage volume lifecycle.
 ---
 
 # Storage Plugins
 
+This page provides conceptual information on Nomad's storage plugin support, which enables scheduling tasks with external storage volumes that conform to the Container Storage Interface (CSI), including AWS Elastic Block Storage (EBS) volumes, Google Cloud Platform (GCP) persistent disks, Ceph, and Portworx. Learn about controller, node, and monolith storage plugins, as well as storage volume lifecycle.
+
 Nomad has built-in support for scheduling compute resources such as
 CPU, memory, and networking. Nomad's storage plugin support extends
 this to allow scheduling tasks with externally created storage

@@ -1,10 +1,13 @@
 ---
 layout: docs
 page_title: Device Plugins
-description: Learn how to author a Nomad device plugin.
+description: |-
+  Learn how to create a Nomad device plugin so you can schedule workload tasks with other devices, such as GPUs. Review device plugin lifecycle and the device plugin API functions that you must implement in your device plugin.
 ---
 
-# Devices
+# Device Plugins
 
+This page provides conceptual information for creating a device driver plugin to extend Nomad's workload execution functionality.
+
 Nomad has built-in support for scheduling compute resources such as CPU, memory,
 and networking. Nomad device plugins are used to support scheduling tasks with
@@ -12,7 +15,7 @@ other devices, such as GPUs. They are responsible for fingerprinting these
 devices and working with the Nomad client to make them available to assigned
 tasks.
 
-For a real world example of a Nomad device plugin implementation, see the [Nvidia
+For a real world example of a Nomad device plugin implementation, refer to the [Nvidia
 GPU plugin](https://github.com/hashicorp/nomad-device-nvidia).
 
 ## Authoring Device Plugins

@@ -1,12 +1,14 @@
 ---
 layout: docs
 page_title: Plugins
-description: Learn about how external plugins work in Nomad.
+description: Learn how Nomad's task and device driver plugins extend a workload's supported functions.
 ---
 
 # Plugins
 
-Nomad implements a plugin framework which allows users to extend the
+This page provides conceptual information on task and device driver plugins.
+
+Nomad implements a plugin framework which lets you extend the
 functionality of some components within Nomad. The design of the plugin system
 is inspired by the lessons learned from plugin systems implemented in other
 HashiCorp products such as Terraform and Vault.
@@ -16,7 +18,7 @@ The following components are currently pluggable within Nomad:
 - [Task Drivers](/nomad/docs/concepts/plugins/task-drivers)
 - [Devices](/nomad/docs/concepts/plugins/devices)
 
-# Architecture
+## Architecture
 
 The Nomad plugin framework uses the [go-plugin][goplugin] project to expose
 a language independent plugin interface. Plugins implement a set of gRPC

@@ -1,13 +1,15 @@
 ---
 layout: docs
 page_title: Task Driver Plugins
-description: Learn how to author a Nomad task driver plugin.
+description: Learn how to create a Nomad task driver plugin to extend Nomad's workload execution functionality.
 ---
 
-# Task Drivers
+# Task Driver Plugins
 
+This page provides conceptual information for creating a task driver plugin to extend Nomad's workload execution functionality.
+
 Task drivers in Nomad are the runtime components that execute workloads. For
-a real world example of a Nomad task driver plugin implementation, see the [exec2 driver][].
+a real world example of a Nomad task driver plugin implementation, refer to the [exec2 driver][].
 
 ## Authoring Task Driver Plugins

@@ -1,16 +1,16 @@
 ---
 layout: docs
-page_title: Scheduling
-description: Learn about how scheduling works in Nomad.
+page_title: Scheduling workloads
+description: Nomad's scheduling component assigns jobs to client machines. Explore scheduling internals, placement, and preemption.
 ---
 
-# Scheduling
+# Scheduling workloads
 
-Scheduling is a core function of Nomad. It is the process of assigning tasks
-from jobs to client machines. The design is heavily inspired by Google's work on
-both [Omega: flexible, scalable schedulers for large compute clusters][omega] and
-[Large-scale cluster management at Google with Borg][borg]. See the links below
-for implementation details on scheduling in Nomad.
+Scheduling workloads is the process of assigning tasks from jobs to client machines.
+It is one of Nomad's core functions. The design is heavily inspired by Google's
+work on both [Omega: flexible, scalable schedulers for large compute
+clusters][omega] and [Large-scale cluster management at Google with Borg][borg].
+Refer to the links below for implementation details on scheduling in Nomad.
 
 - [Scheduling Internals](/nomad/docs/concepts/scheduling/scheduling) - An overview of how the scheduler works.
 - [Placement](/nomad/docs/concepts/scheduling/placement) - Explains how placements are computed and how they can be adjusted.

@@ -1,10 +1,13 @@
 ---
 layout: docs
-page_title: Placement
-description: Learn about how placements are computed in Nomad.
+page_title: Allocation Placement
+description: Nomad uses allocation placement when scheduling jobs to run on clients. Learn about using affinities, constraints, datacenters, and node pools to specify allocation placement.
 ---
 
-# Placement
+# Allocation Placement
 
+This page provides conceptual information about job allocation placement on
+clients. Learn about using affinities, constraints, datacenters, and node pools to specify allocation placement.
+
 When the Nomad scheduler receives a job registration request, it needs to
 determine which clients will run allocations for the job.
@@ -18,7 +21,7 @@ adjusted via agent and job configuration.
 
 There are several options that can be used depending on the desired outcome.
 
-### Affinities and Constraints
+## Affinities and Constraints
 
 Affinities and constraints allow users to define soft or hard requirements for
 their jobs. The [`affinity`][job_affinity] block specifies a soft requirement
@@ -43,14 +46,14 @@ requirements but it is acceptable to have other jobs sharing the same nodes.
 The sections below describe the node values that can be configured and used in
 job affinity and constraint rules.
 
-#### Node Class
+### Node Class
 
 Node class is an arbitrary value that can be used to group nodes based on some
 characteristics, like the instance size or the presence of fast hard drives,
 and is specified in the client configuration file using the
 [`node_class`][config_client_node_class] parameter.
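
As a sketch of how a job can target these node values, a `constraint` can hard-require a node class while an `affinity` expresses a soft preference for a metadata value; the class name and rack value here are illustrative:

```hcl
job "example" {
  # Hard requirement: only place allocations on nodes of this class.
  constraint {
    attribute = "${node.class}"
    value     = "compute-large"
  }

  # Soft preference: favor nodes in rack 3 when scoring placements.
  affinity {
    attribute = "${meta.rack}"
    value     = "3"
    weight    = 50
  }
}
```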
 
-#### Dynamic and Static Node Metadata
+### Dynamic and Static Node Metadata
 
 Node metadata are arbitrary key-value mappings specified either in the client
 configuration file using the [`meta`][config_client_meta] parameter or
@@ -63,7 +66,7 @@ resource ownership, such as `owner = "team-qa"`, or fine-grained locality,
 `rack = "3"`. Dynamic metadata may be used to track runtime information, such
 as jobs running in a given client.
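
On the client side, both the node class and static metadata are set in the agent configuration; a sketch, with illustrative values matching the examples above:

```hcl
client {
  enabled    = true
  node_class = "compute-large"

  # Static metadata, available as ${meta.owner} and ${meta.rack} in job
  # constraint and affinity rules.
  meta {
    owner = "team-qa"
    rack  = "3"
  }
}
```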
 
-### Datacenter
+## Datacenter
 
 Datacenters represent a geographical location in a region that can be used for
 fault tolerance and infrastructure isolation.
@@ -79,7 +82,7 @@ represent where a node resides rather than its intended use. The
 [`spread`][job_spread] block can help achieve fault tolerance across
 datacenters.
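
A sketch combining both ideas: the job is eligible for two datacenters, and a `spread` block balances its allocations across them; the datacenter names are illustrative:

```hcl
job "example" {
  datacenters = ["us-east-1", "us-west-1"]

  # Distribute allocations across datacenters instead of letting the
  # scheduler bin-pack them into a single location.
  spread {
    attribute = "${node.datacenter}"
    weight    = 100
  }
}
```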
 
-### Node Pool
+## Node Pool
 
 Node pools allow grouping nodes that can be targeted by jobs to achieve
 workload isolation.

@@ -1,18 +1,25 @@
 ---
 layout: docs
-page_title: Preemption
-description: Learn about how preemption works in Nomad.
+page_title: Allocation Preemption
+description: Nomad uses preemption to evict existing allocations in order to place allocations for a higher priority job. Learn how Nomad uses job priority to preempt running allocations and how you can determine whether Nomad preempted an allocation. Use `nomad plan` to dry run the scheduler to find out if preemption is necessary.
 ---
 
-# Preemption
+# Allocation Preemption
 
-Preemption allows Nomad to kill existing allocations in order to place allocations for a higher priority job.
-The evicted allocation is temporarily displaced until the cluster has capacity to run it. This allows operators to
-run high priority jobs even under resource contention across the cluster.
+This page provides conceptual information about preemption, which is evicting
+existing allocations in order to place allocations for a higher priority job.
+Learn how Nomad uses job priority to determine when to preempt running
+allocations and how you can determine whether Nomad has preempted an allocation.
+Use `nomad plan` to dry run the scheduler to find out if preemption is necessary.
+
+Preemption allows Nomad to kill existing allocations in order to place
+allocations for a higher priority job. The evicted allocation is temporarily
+displaced until the cluster has capacity to run it. This allows operators to run
+high priority jobs even under resource contention across the cluster.
 
 ~> **Advanced Topic!** This page covers technical details of Nomad. You do not need to understand these details to effectively use Nomad. The details are documented here for those who wish to learn about them without having to go spelunking through the source code.
 
-# Preemption in Nomad
+## Preemption in Nomad
 
 Every job in Nomad has a priority associated with it. Priorities impact scheduling at the evaluation and planning
 stages by sorting the respective queues accordingly (higher priority jobs get moved ahead in the queues).
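
Priority is set directly in the job specification; a sketch, where the job name and value are illustrative (priorities range from 1 to 100 and default to 50):

```hcl
job "webapp" {
  # A higher-priority job moves ahead in the evaluation and plan queues
  # and may preempt lower-priority allocations under resource contention.
  priority = 75
}
```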
 
@@ -21,7 +28,7 @@ Nomad has preemption capabilities for service, batch, and system jobs. The Nomad
 to free up capacity for new allocations resulting from relatively higher priority jobs, sending evicted allocations back
 into the plan queue.
 
-# Details
+## Details
 
 Preemption is enabled by default for system jobs. Operators can use the [scheduler config](/nomad/api-docs/operator#update-scheduler-configuration) API endpoint to disable preemption.
 
@@ -45,7 +52,7 @@ Allocations are selected starting from the lowest priority, and scored according
 to how closely they fit the job's required capacity. For example, if the `75` priority job needs 1GB disk and 2GB memory, Nomad will preempt
 allocations `a1`, `a2` and `a4` to satisfy those requirements.
 
-# Preemption Visibility
+## Preemption Visibility
 
 Operators can use the [allocation API](/nomad/api-docs/allocations#read-allocation) or the `alloc status` command to get visibility into
 whether an allocation has been preempted. Preempted allocations will have their DesiredStatus set to “evict”. The `Allocation` object
@@ -57,7 +64,7 @@ in the API also has two additional fields related to preemption.
 
 - `PreemptedByAllocID` - This field is set on allocations that were preempted by the scheduler. It contains the allocation ID of the allocation
   that preempted it. In the above example, allocations `a1`, `a2` and `a4` will have this field set to the ID of the allocation from the job `webapp`.
 
-# Integration with Nomad plan
+## Integration with Nomad plan
 
 `nomad plan` allows operators to dry run the scheduler. If the scheduler determines that
 preemption is necessary to place the job, it shows additional information in the CLI output for
@@ -1,14 +1,16 @@
|
||||
---
|
||||
layout: docs
|
||||
page_title: Scheduling
|
||||
description: Learn about how scheduling works in Nomad.
|
||||
page_title: Scheduling in Nomad
|
||||
description: Nomad implements job scheduling using jobs, nodes, allocations, and evaluations. Learn about job lifecycle and how the job schedular generates the allocation plan that the server implements using a service, batch, system, sysbatch, or core scheduler.
|
||||
---
|
||||
|
||||
# Scheduling in Nomad
|
||||
|
||||
This page provides conceptual information on how Nomad implements job scheduling using jobs, nodes, allocations, and evaluations. Learn about job lifecycle and how the job schedular generates the allocation plan that the server implements using a service, batch, system, sysbatch, or core scheduler.
|
||||
|
||||
[![Nomad Data Model][img-data-model]][img-data-model]
|
||||
|
||||
There are four primary "nouns" in Nomad; jobs, nodes, allocations, and
|
||||
There are four primary components in Nomad: jobs, nodes, allocations, and
|
||||
evaluations. Jobs are submitted by users and represent a _desired state_. A job
|
||||
is a declarative description of tasks to run which are bounded by constraints
|
||||
and require resources. Tasks can be scheduled on nodes in the cluster running
|
||||
|
||||

@@ -2,19 +2,17 @@
 layout: docs
 page_title: Security Model
 description: >-
-  Nomad relies on both a lightweight gossip mechanism and an RPC system to
-  provide various features. Both of the systems have different security
-  mechanisms that stem from their designs. However, the security mechanisms of
-  Nomad have a common goal: to provide confidentiality, integrity, and
-  authentication.
+  Nomad's security model provides mechanisms such as mTLS, namespace, access control list (ACL), Sentinel policies (Enterprise), and resource quotas (Enterprise) to enable fine-grained access within and between clusters. Learn how to secure your Nomad clusters with threat models for internal threats and external threats. Review the network ports used by the security model.
 ---
 
 # Security Model
 
-Nomad is a flexible workload orchestrator to deploy and manage any containerized
-or legacy application using a single, unified workflow. It can run diverse
-workloads including Docker, non-containerized, microservice, and batch
-applications.
+This page provides conceptual information on Nomad's security model, which
+provides mechanisms such as mTLS, namespace, access control list (ACL), Sentinel
+policies, and resource quotas to enable fine-grained access within and between
+clusters. Learn how to secure your Nomad clusters, and review threat models for
+internal threats and external threats. Review the network ports used by the
+security model.
 
 Nomad utilizes a lightweight gossip and RPC system, [similar to
 Consul](/consul/docs/security/security-models/core), which provides
@@ -28,25 +26,23 @@ features for multi-tenant deployments are offered exclusively in the enterprise
 version. This documentation may need to be adapted to your deployment situation,
 but the general mechanisms for a secure Nomad deployment revolve around:
 
-- **[mTLS](/nomad/tutorials/transport-security/security-enable-tls)** -
-  Mutual authentication of both the TLS server and client x509 certificates
-  prevents internal abuse by preventing unauthenticated access to network
-  components within the cluster.
+- **[mTLS](/nomad/tutorials/transport-security/security-enable-tls)** Mutual
+  authentication of both the TLS server and client x509 certificates prevents
+  internal abuse by preventing unauthenticated access to network components
+  within the cluster.
 
-- **[ACLs](/nomad/tutorials/access-control)** - Enables
-  authorization for authenticated connections by granting capabilities to ACL
-  tokens.
+- **[ACLs](/nomad/tutorials/access-control)** Enables authorization for
+  authenticated connections by granting capabilities to ACL tokens.
 
-- **[Namespaces](/nomad/tutorials/manage-clusters/namespaces)** -
-  Access to read and write to a namespace can be
-  controlled to allow for granular access to job information managed within a
-  multi-tenant cluster.
+- **[Namespaces](/nomad/tutorials/manage-clusters/namespaces)** Access to read
+  and write to a namespace can be controlled to allow for granular access to job
+  information managed within a multi-tenant cluster.
 
-- **[Sentinel Policies](/nomad/tutorials/governance-and-policy/sentinel)**
-  (**Enterprise Only**) - Sentinel policies allow for granular control over
-  components such as task drivers within a cluster.
+- **[Sentinel Policies]**<EnterpriseAlert inline /> Sentinel
+  policies allow for granular control over components such as task drivers
+  within a cluster.
 
-### Personas
+## Personas
 
 When thinking about Nomad, it helps to consider the following types of base
 personas when managing the security requirements for the cluster deployment. The
@@ -84,7 +80,7 @@ within Nomad itself.
   internet such as a web server. This is someone who shouldn't have any
   network access to the Nomad server API.
 
-### Secure Configuration
+## Secure Configuration
 
 Nomad's security model is applicable only if all parts of the system are running
 with a secure configuration; **Nomad is not secure-by-default.** Without the following
@@ -93,9 +89,9 @@ to a cluster. Like all security considerations, one must appropriately determine
 what concerns they have for their environment and adapt to these security
 recommendations accordingly.
 
-#### Requirements
+### Requirements
 
-- **[mTLS enabled](/nomad/tutorials/transport-security/security-enable-tls)** -
+- [mTLS enabled](/nomad/tutorials/transport-security/security-enable-tls)
   Mutual TLS (mTLS) enables [mutual
   authentication](https://en.wikipedia.org/wiki/Mutual_authentication) with
   security properties to prevent the following problems:
@@ -139,35 +135,32 @@ recommendations accordingly.
   when using `tls.verify_https_client=false`. You can use a reverse proxy or
   other external means to restrict access to them.
 
-- **[ACLs enabled](/nomad/tutorials/access-control)** - The
+- [ACLs enabled](/nomad/tutorials/access-control) The
  access control list (ACL) system provides a capability-based control
  mechanism for Nomad administrators allowing for custom roles (typically
  within Vault) to be tied to an individual human or machine operator
  identity. This allows for access to capabilities within the cluster to be
  restricted to specific users.
 
-- **[Namespaces](/nomad/tutorials/manage-clusters/namespaces)**
+- [Namespaces](/nomad/tutorials/manage-clusters/namespaces) This feature
+  allows for a cluster to be shared by multiple teams within a company. Using
+  this logical separation is important for multi-tenant clusters to prevent
+  users without access to that namespace from conflicting with each other.
+  This requires ACLs to be enabled in order to be enforced.
 
-  This feature allows for a cluster to be shared by
-  multiple teams within a company. Using this logical separation is important
-  for multi-tenant clusters to prevent users without access to that namespace
-  from conflicting with each other. This requires ACLs to be enabled in order
-  to be enforced.
 
-- **[Sentinel Policies](/nomad/tutorials/governance-and-policy/sentinel)**
-  (**Enterprise Only**) - [Sentinel](https://www.hashicorp.com/sentinel/) is
-  a feature which enables
+- [Sentinel Policies]<EnterpriseAlert inline />
+  [Sentinel](https://www.hashicorp.com/sentinel/) is a feature which enables
  [policy-as-code](https://docs.hashicorp.com/sentinel/concepts/policy-as-code)
  to enforce further restrictions on operators. This is used to augment the
  built-in ACL system for fine-grained control over jobs.
 
-- **[Resource Quotas](/nomad/tutorials/governance-and-policy/quotas)**
-  (**Enterprise Only**) - Can limit a namespace's access to the underlying
-  compute resources in the cluster by setting upper-limits for operators.
-  Access to these resource quotas can be managed via ACLs to ensure read-only
-  access for operators so they can't just change their quotas.
+- [Resource Quotas]<EnterpriseAlert inline /> Can limit a
+  namespace's access to the underlying compute resources in the cluster by
+  setting upper-limits for operators. Access to these resource quotas can be
+  managed via ACLs to ensure read-only access for operators so they can't just
+  change their quotas.
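
The mTLS requirement above maps to the agent's `tls` configuration block; a sketch, with illustrative certificate paths:

```hcl
tls {
  http = true
  rpc  = true

  ca_file   = "nomad-ca.pem"
  cert_file = "server.pem"
  key_file  = "server-key.pem"

  # Require verified certificates from RPC peers and HTTPS clients.
  verify_server_hostname = true
  verify_https_client    = true
}
```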
 
-#### Recommendations
+### Recommendations
 
 The following are security recommendations that can help significantly improve
 the security of your cluster depending on your use case. We recommend always
@@ -226,7 +219,7 @@ environment.
 - **[HTTP Headers](/nomad/docs/configuration#http_api_response_headers)** -
   Additional security [headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers), such as [`X-XSS-Protection`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection), can be [configured](/nomad/docs/configuration#http_api_response_headers) for HTTP API responses.
 
-### Threat Model
+## Threat Model
 
 The following are parts of the Nomad threat model:
 
@@ -292,7 +285,7 @@ The following are not part of the threat model for client agents:
   devices. Such privileges can be used to facilitate compromise other workloads,
   or cause denial-of-service attacks.
 
-#### Internal Threats
+### Internal Threats
 
 - **Job Operator** - Someone with a valid mTLS certificate and ACL token may still be a
   threat to your cluster in certain situations, especially in multi-team
@@ -316,7 +309,7 @@ The following are not part of the threat model for client agents:
   to an exposed Docker daemon API through other means such as the [`raw_exec`](/nomad/docs/drivers/raw_exec)
   driver.
 
-#### External Threats
+### External Threats
 
 There are two main components to consider to for external threats in a Nomad cluster:
 
@@ -331,7 +324,7 @@ There are two main components to consider to for external threats in a Nomad cluster:
   a client node is unencrypted in the agent's data and configuration
   directory.
 
-### Network Ports
+## Network Ports
 
 | **Port / Protocol** | Agents | Description |
 |----------------------|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -345,3 +338,5 @@
 [Variables]: /nomad/docs/concepts/variables
 [verify_https_client]: /nomad/docs/configuration/tls#verify_https_client
 [serf]: https://github.com/hashicorp/serf
+[Sentinel Policies]: /nomad/tutorials/governance-and-policy/sentinel
+[Resource Quotas]: /nomad/tutorials/governance-and-policy/quotas

@@ -1,36 +1,40 @@
 ---
 layout: docs
-page_title: Variables
-description: Learn about the Nomad Variables feature
+page_title: Nomad Variables
+description: Nomad's variables feature lets you store and use encrypted configuration data in your job specifications. Learn how Access Control List (ACL) policies restrict access to variables within a namespace, how a job task's workload identity grants access to variables, and how locking a variable blocks access to that variable.
 ---
 
 # Nomad Variables
 
+This page contains conceptual information about the Nomad variables feature,
+which lets you store and use encrypted configuration data in your job
+specifications. Learn how Access Control List (ACL) policies restrict access to variables within a namespace, how a job task's workload identity grants access to variables, and how locking a variable blocks access to that variable.
+
 Most Nomad workloads need access to config values or secrets. Nomad has a
 `template` block to [provide such configuration to tasks](/nomad/docs/job-specification/template#nomad-variables),
 but prior to Nomad 1.4 has left the role of storing that configuration to
 external services such as [HashiCorp Consul] and [HashiCorp Vault].
 
-Nomad Variables provide the option to store configuration at file-like paths
-directly in Nomad's state store. You can [access these variables](/nomad/docs/job-specification/template#nomad-variables) directly from
-your task `template`s. The contents of these variables are encrypted
+Nomad variables provide the option to store configuration at file-like paths
+directly in Nomad's state store. [Access these variables](/nomad/docs/job-specification/template#nomad-variables) directly from
+your task templates. The contents of these variables are encrypted
 and replicated between servers via raft. Access to variables is controlled by
 ACL policies, and tasks have implicit ACL policies that allow them to access
 their own variables. You can create, read, update, or delete variables via the
 command line, the Nomad API, or in the Nomad web UI.
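
As a sketch of the `template` access path described above, a task template can read a variable with the `nomadVar` function; the variable path and key are illustrative and assume a variable was created at `nomad/jobs/example`:

```hcl
template {
  # Render the db_password key of the variable at nomad/jobs/example
  # into an environment file for the task.
  data        = <<EOF
DB_PASSWORD={{ with nomadVar "nomad/jobs/example" }}{{ .db_password }}{{ end }}
EOF
  destination = "secrets/app.env"
  env         = true
}
```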
 
-Note that the Variables feature is intended for small pieces of configuration
+Note that the variables feature is intended for small pieces of configuration
 data needed by workloads. Because writing to the Nomad state store uses
 resources needed by Nomad, it is not well-suited for large or fast-changing
-data. For example, do not store batch job results as Variables - these should be
+data. For example, do not store batch job results as variables - these should be
 stored in an external database. Variables are also not intended to be a full
 replacement for HashiCorp Vault. Unlike Vault, Nomad stores the root encryption
 key on the servers. See [Key Management][] for details.
 
 ## ACL for Variables
 
-Every Variable belongs to a specific Nomad namespace. ACL policies can restrict
-access to Variables within a namespace on a per-path basis, using a list of
+Every variable belongs to a specific Nomad namespace. ACL policies can restrict
+access to variables within a namespace on a per-path basis, using a list of
 `path` blocks, located under `namespace.variables`. See the [ACL policy
 specification] docs for details about the syntax and structure of an ACL policy.
 
@@ -63,20 +67,20 @@ namespace "dev" {
 }
 ```
 
-The available capabilities for Variables are as follows:
+The available capabilities for variables are as follows:
 
 | Capability | Notes |
 |------------|-----------------------------------------------------------------------------------------------------------------------|
-| write | Create or update Variables at this path. Includes the "list" capability but not the "read" or "destroy" capabilities. |
-| read | Read the decrypted contents of Variables at this path. Also includes the "list" capability |
-| list | List the metadata but not contents of Variables at this path. |
-| destroy | Delete Variables at this path. |
+| write | Create or update variables at this path. Includes the "list" capability but not the "read" or "destroy" capabilities. |
+| read | Read the decrypted contents of variables at this path. Also includes the "list" capability |
+| list | List the metadata but not contents of variables at this path. |
+| destroy | Delete variables at this path. |
 
 ## Task Access to Variables
 
-Tasks can access Variables with the [`template`] block or using the [Task API].
+Tasks can access variables with the [`template`] block or using the [Task API].
 The [workload identity] for each task grants it automatic read and list access to
-Variables found at Nomad-owned paths with the prefix `nomad/jobs/`, followed by
+variables found at Nomad-owned paths with the prefix `nomad/jobs/`, followed by
 the job ID, task group name, and task name. This is equivalent to the following
 policy:
 
@@ -104,7 +108,7 @@ namespace "$namespace" {
 ```
 
 For example, a task named "redis", in a group named "cache", in a job named
-"example", will automatically have access to Variables as if it had the
+"example", will automatically have access to variables as if it had the
 following policy:
 
 ```hcl
@@ -222,16 +226,16 @@ As soon as any instance starts, it tries to lock the sync variable. If it succeeds,
 it continues to execute while a secondary thread is in charge of keeping track of
 the lock and renewing it when necessary. If by any chance the renewal fails,
 the main process is forced to return, and the instance goes into standby until it
-attempts to acquire the lock over the sync variable.
+attempts to acquire the lock over the sync variable.
 
-Only threads 1 and 3 or thread 2 are running at any given time, because every
-instance is either executing as normal while renewing the lock or waiting for a
+Only threads 1 and 3 or thread 2 are running at any given time, because every
+instance is either executing as normal while renewing the lock or waiting for a
 chance to acquire it and run.
 
-When the main process, or protected function, returns, the helper releases the
+When the main process, or protected function, returns, the helper releases the
 lock, allowing a second instance to start running.
 
-To see it implemented live, look for the [`nomad var lock`][] command
+To see it implemented live, look for the [`nomad var lock`][] command
 implementation or the [Nomad Autoscaler][] High Availability implementation.

@@ -1,11 +1,14 @@
 ---
 layout: docs
 page_title: Workload Identity
-description: Learn about Nomad's workload identity feature
+description: Nomad's workload identity feature isolates and uniquely identifies each workload so you can associate Access Control List (ACL) policies to jobs. Learn about workload identity claims, claims attributes specific to Nomad Enterprise, default workload ACL policy, and workload identity for Consul and Vault.
 ---
 
 # Workload Identity
 
+This page provides conceptual information about Nomad's workload identity
+feature, which isolates and uniquely identifies each workload so you can associate Access Control List (ACL) policies to jobs. Learn about workload identity claims, claims attributes specific to Nomad Enterprise, default workload ACL policy, and workload identity for Consul and Vault.
+
 Every workload running in Nomad is given a default identity. When an
 [allocation][] is accepted by the [plan applier][], the leader generates a
 Workload Identity for each task in the allocation. This workload identity is a

@@ -2,18 +2,18 @@
 layout: docs
 page_title: Documentation
 description: |-
-  Welcome to the Nomad documentation. Nomad is a scheduler and workload
-  orchestrator. This documentation is a reference for all available features
-  and options of Nomad.
+  Nomad is a flexible workload orchestrator to deploy and manage any containerized or legacy application using a single, unified workflow. It can run diverse workloads including Docker, non-containerized, microservice, and batch applications.
 ---
 
 # Nomad Documentation
 
-Welcome to the Nomad documentation. Nomad is a scheduler and workload
-orchestrator. This documentation is a reference for all
-available features and options of Nomad. If you are just getting started with
-Nomad, please start with the [Getting Started collection][gs]
-instead.
+Nomad is a flexible workload orchestrator to deploy and manage any containerized
+or legacy application using a single, unified workflow. It can run diverse
+workloads including Docker, non-containerized, microservice, and batch
+applications.
+
+If you are just getting started with Nomad, refer to the [Getting
+Started tutorial collection][gs].
 
 ~> Interested in talking with HashiCorp about your experience building, deploying,
 or managing your applications? [Set up a time to chat!](https://forms.gle/2tAmxJbyPbcL2nRW9)