---
layout: docs
page_title: csi_plugin block in the job specification
description: >-
  Specify that the task provides a Container Storage Interface plugin to the
  cluster in the `csi_plugin` block of the Nomad job specification. Configure
  plugin ID, type, mount directory, stage publish base directory, and health
  timeout. Review recommendations for deploying CSI plugins.
---
# `csi_plugin` block in the job specification

<Placement groups={['job', 'group', 'task', 'csi_plugin']} />

The `csi_plugin` block allows the task to specify that it provides a
Container Storage Interface plugin to the cluster. Nomad automatically
registers the plugin so that other jobs can use it to claim
[volumes][csi_volumes].

```hcl
csi_plugin {
  id                     = "csi-hostpath"
  type                   = "monolith"
  mount_dir              = "/csi"
  stage_publish_base_dir = "/local/csi"
  health_timeout         = "30s"
}
```
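
Once the plugin is registered, other jobs claim its volumes with the
group-level `volume` block and the task-level `volume_mount` block. The
following is a minimal sketch of such a consumer; the volume name
`csi-volume0`, the image, and the mount destination are illustrative
assumptions, not values defined by this page.

```hcl
group "app" {
  # Claim a CSI volume that has been created or registered in the cluster.
  volume "data" {
    type            = "csi"
    source          = "csi-volume0" # assumed volume ID
    attachment_mode = "file-system"
    access_mode     = "single-node-writer"
  }

  task "web" {
    driver = "docker"

    # Bind-mount the claimed volume into the task's filesystem.
    volume_mount {
      volume      = "data"
      destination = "/srv/data"
    }

    config {
      image   = "busybox:1.36"
      command = "sleep"
      args    = ["3600"]
    }
  }
}
```
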
## Parameters
- `id` `(string: <required>)` - The ID for the plugin. Some plugins require
  both controller and node plugin types (see below); you must use the same ID
  for both so that Nomad knows they belong to the same plugin. A minimal
  sketch of this pattern follows the list.

- `type` `(string: <required>)` - One of `node`, `controller`, or `monolith`.
  Each plugin supports one or more types. Each Nomad client node where you
  want to mount a volume needs a `node` plugin instance. Some plugins also
  require one or more `controller` plugin instances to communicate with the
  storage provider's APIs. Some plugins can serve as both `controller` and
  `node` at the same time, and these are called `monolith` plugins. Refer to
  your CSI plugin's documentation.

- `mount_dir` `(string: <optional>)` - The directory path inside the
  container where the plugin expects a Unix domain socket for bidirectional
  communication with Nomad. This field is typically not required. Refer to
  your CSI plugin's documentation for details.

- `stage_publish_base_dir` `(string: <optional>)` - The base directory path
  inside the container where the plugin is instructed to stage and publish
  volumes. This field is typically not required and cannot be a subdirectory
  of `mount_dir`. Refer to your CSI plugin's documentation for details.

- `health_timeout` `(duration: <optional>)` - The duration that the plugin
  supervisor waits before restarting an unhealthy CSI plugin. Must be a
  duration value such as `30s` or `2m`. Defaults to `30s` if not set.
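
To illustrate the shared-ID requirement, a plugin that ships separate
controller and node components might declare blocks like the following in its
tasks; the `shared-plugin0` ID is a hypothetical value, not one defined by
this page.

```hcl
# In the controller task:
csi_plugin {
  id   = "shared-plugin0" # must match the node plugin's ID
  type = "controller"
}

# In the node task (often a separate system job):
csi_plugin {
  id   = "shared-plugin0" # the same ID ties both tasks to one logical plugin
  type = "node"
}
```
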
~> **Note:** Plugins running as `node` or `monolith` require root
privileges (or `CAP_SYS_ADMIN` on Linux) to mount volumes on the
host. With the Docker task driver, you can use the `privileged = true`
configuration, but no other default task drivers currently have this
option.

## Recommendations for deploying CSI plugins

CSI plugins run as Nomad tasks, but after mounting a volume they are not in
the data path for that volume. Tasks that mount volumes read and write
directly to the volume via a bind mount, and there is no communication between
the job and the CSI plugin. However, when an allocation that mounts a volume
stops, Nomad needs to communicate with the plugin on that allocation's node to
unmount the volume. This has implications for how you deploy CSI plugins:

* If you are stopping jobs on a node, stop tasks that claim volumes before
  stopping the `node` or `monolith` plugin for those volumes. If you use the
  `node drain` feature, plugin tasks are automatically drained last.

* Only the most recently placed allocation for a given plugin ID and type
  (controller or node) is used by any given client node. Run `node` plugins
  as system jobs and distribute `controller` plugins across client nodes
  using a constraint, as shown in the examples below.

* Some plugins create volumes only in the same location as the plugin. For
  example, the AWS EBS plugin creates and mounts volumes only within the same
  Availability Zone. Configure your plugin task as recommended by the
  plugin's documentation, and use the [`topology_request`] field in your
  volume specification; a sketch of that field follows this list.
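
The following is a minimal sketch of a volume specification that uses
`topology_request`, as you might pass to `nomad volume create`; the IDs,
capacities, and segment key and value are illustrative assumptions, and the
segment keys your plugin actually reports are documented by the plugin itself.

```hcl
id        = "vol0"
name      = "vol0"
type      = "csi"
plugin_id = "plugin0" # assumed plugin ID

capacity_min = "10GiB"
capacity_max = "20GiB"

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}

topology_request {
  required {
    topology {
      # Segment keys and values are defined by the plugin; `zone` here is
      # only an example.
      segments {
        zone = "us-east-1a"
      }
    }
  }
}
```
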
## Examples
```hcl
job "plugin-efs" {
datacenters = ["dc1"]
# you can run node plugins as service jobs as well, but running
# as a system job ensures all nodes in the DC have a copy.
type = "system"
# only one plugin of a given type and ID should be deployed on
# any given client node
constraint {
operator = "distinct_hosts"
value = true
}
group "nodes" {
task "plugin" {
driver = "docker"
config {
image = "amazon/aws-efs-csi-driver:v.1.3.2"
args = [
"--endpoint=unix://csi/csi.sock",
"--logtostderr",
"--v=5",
]
# all CSI node plugins will need to run as privileged tasks
# so they can mount volumes to the host. controller plugins
# do not need to be privileged.
privileged = true
}
csi_plugin {
id = "aws-efs0"
type = "node"
mount_dir = "/csi" # this path /csi matches the --endpoint
# argument for the container
health_timeout = "30s"
}
}
}
}
```
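
The example above runs a node plugin as a system job. When a plugin also
ships a controller component, a common pattern is to run the controller as a
small service job spread across client nodes with a constraint. The following
is a minimal sketch of that pattern; the image, plugin ID, and arguments are
assumptions for illustration, so substitute the values your plugin's
documentation recommends.

```hcl
job "plugin-controller" {
  datacenters = ["dc1"]
  type        = "service"

  group "controller" {
    # run two controller allocations on distinct hosts for redundancy
    count = 2

    constraint {
      operator = "distinct_hosts"
      value    = true
    }

    task "plugin" {
      driver = "docker"

      config {
        # assumed plugin image; controller plugins do not need to be privileged
        image = "example/csi-driver:latest"

        args = [
          "--endpoint=unix://csi/csi.sock",
        ]
      }

      csi_plugin {
        id             = "plugin0" # must match the ID used by the node plugin
        type           = "controller"
        mount_dir      = "/csi"
        health_timeout = "30s"
      }
    }
  }
}
```
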
[csi]: https://github.com/container-storage-interface/spec
[csi_volumes]: /nomad/docs/job-specification/volume
[system]: /nomad/docs/concepts/scheduling/schedulers#system
[`topology_request`]: /nomad/commands/volume/create#topology_request