* drivers/raw_exec: enable setting cgroup override values
This PR enables configuration of cgroup override values on the `raw_exec`
task driver. WARNING: setting cgroup override values eliminates any
guarantee Nomad can make about resource availability for *any* task on
the client node.
For cgroup v2 systems, set a single unified cgroup path using `cgroup_v2_override`.
The path may be either absolute or relative to the cgroup root.
```hcl
config {
  cgroup_v2_override = "custom.slice/app.scope"
}
```

or

```hcl
config {
  cgroup_v2_override = "/sys/fs/cgroup/custom.slice/app.scope"
}
```
For cgroup v1 systems, set a per-controller path for each controller using
`cgroup_v1_override`. The path(s) may be either absolute or relative to
the controller root.
```hcl
config {
  cgroup_v1_override = {
    "pids":   "custom/app",
    "cpuset": "custom/app",
  }
}
```

or

```hcl
config {
  cgroup_v1_override = {
    "pids":   "/sys/fs/cgroup/pids/custom/app",
    "cpuset": "/sys/fs/cgroup/cpuset/custom/app",
  }
}
```
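
To illustrate how the two forms relate, here is a minimal sketch (not the driver's actual code) of resolving an override path against the standard cgroup mount points; the helper name is made up for the example:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// resolveOverride is a hypothetical helper: absolute override paths are
// used as-is, relative paths are joined onto the cgroup root (cgroup v2)
// or the per-controller root (cgroup v1).
func resolveOverride(root, override string) string {
	if filepath.IsAbs(override) {
		return filepath.Clean(override)
	}
	return filepath.Join(root, override)
}

func main() {
	// cgroup v2: single unified hierarchy
	fmt.Println(resolveOverride("/sys/fs/cgroup", "custom.slice/app.scope"))
	// cgroup v1: one path per controller
	fmt.Println(resolveOverride("/sys/fs/cgroup/pids", "custom/app"))
	fmt.Println(resolveOverride("/sys/fs/cgroup/cpuset", "/sys/fs/cgroup/cpuset/custom/app"))
}
```

Both spellings of a given override resolve to the same directory; the relative form is simply joined onto the appropriate root.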
* drivers/rawexec: ensure only one of v1/v2 cgroup override is set
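
A rough sketch of that validation, with illustrative names rather than the driver's actual API:

```go
package rawexec

import "errors"

// validateCgroupOverrides is an illustrative sketch: a task config may set
// cgroup_v1_override or cgroup_v2_override, but never both.
func validateCgroupOverrides(v1Override map[string]string, v2Override string) error {
	if len(v1Override) > 0 && v2Override != "" {
		return errors.New("only one of cgroup_v1_override and cgroup_v2_override may be set")
	}
	return nil
}
```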
* drivers/raw_exec: executor should error if setting cgroup does not work
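
A minimal sketch of the intended behavior, assuming the usual `cgroup.procs` interface (the function is illustrative, not the executor's real code):

```go
package rawexec

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

// enterCgroup writes the task PID into the target cgroup's cgroup.procs file
// and returns the error to the caller so the executor fails the task instead
// of silently running it in the wrong cgroup.
func enterCgroup(cgroupPath string, pid int) error {
	procsFile := filepath.Join(cgroupPath, "cgroup.procs")
	if err := os.WriteFile(procsFile, []byte(strconv.Itoa(pid)), 0o644); err != nil {
		return fmt.Errorf("failed to move pid %d into cgroup %s: %w", pid, cgroupPath, err)
	}
	return nil
}
```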
* drivers/raw_exec: create cgroups in raw_exec tests
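
A hypothetical test helper along these lines (names are made up) might look like:

```go
package rawexec

import (
	"os"
	"testing"
)

// makeTestCgroup creates the cgroup directory a test expects before starting
// the task and removes it afterwards. Creating cgroups requires root, so the
// test is skipped otherwise.
func makeTestCgroup(t *testing.T, path string) {
	t.Helper()
	if os.Geteuid() != 0 {
		t.Skip("creating cgroups requires root")
	}
	if err := os.MkdirAll(path, 0o755); err != nil {
		t.Fatalf("failed to create cgroup %s: %v", path, err)
	}
	t.Cleanup(func() { _ = os.Remove(path) })
}
```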
* drivers/raw_exec: ensure we fail to start if custom cgroup set and non-root
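
An illustrative guard for this case, assuming a simple effective-UID check (names are not the driver's actual code):

```go
package rawexec

import (
	"errors"
	"os"
)

// checkOverridePermissions rejects the task up front if a cgroup override is
// configured and the client is not running as root, since moving processes
// into arbitrary cgroups generally requires root.
func checkOverridePermissions(overrideConfigured bool) error {
	if overrideConfigured && os.Geteuid() != 0 {
		return errors.New("cgroup overrides require the Nomad client to run as root")
	}
	return nil
}
```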
* move custom cgroup func into shared file
---------
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>
The Nomad client expects certain cgroup paths to exist in order to
manage tasks. These paths are created when the agent first starts, but
if this process failed, the agent would just log the error and proceed with
its initialization, despite not being able to run tasks.
This commit surfaces these errors back to client initialization so the
process can stop early and make it clear to operators that something went
wrong.
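
A simplified sketch of the change, with assumed names and paths rather than the client's real code:

```go
package client

import (
	"fmt"
	"os"
	"path/filepath"
)

// initCgroupPaths illustrates the new behavior: instead of logging a failure
// and continuing, the error is returned so client initialization stops early.
func initCgroupPaths(root string, paths []string) error {
	for _, p := range paths {
		if err := os.MkdirAll(filepath.Join(root, p), 0o755); err != nil {
			// previously this would only be logged; now it aborts startup
			return fmt.Errorf("failed to create required cgroup path %q: %w", p, err)
		}
	}
	return nil
}
```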
* client: refactor cpuset partitioning
This PR updates the way the Nomad client manages the split between tasks
that use `resources.cpus` vs. `resources.cores`.
Previously, each task was explicitly assigned the CPU cores it was allowed
to run on. Every time a task was started or destroyed, all other tasks'
cpusets had to be updated. This was inefficient and would crush the Linux
kernel when a client tried to run ~400 tasks.
Now, we make use of the cgroup hierarchy and cpuset inheritance to manage
cpusets efficiently.
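
The core idea can be sketched as follows; paths and names are illustrative, not Nomad's actual cgroup layout. The allowed CPUs are written once on a parent cgroup, and task cgroups created beneath it inherit that cpuset, so starting or stopping a task no longer touches every other task's cgroup:

```go
package client

import (
	"os"
	"path/filepath"
)

// setPartitionCpus writes the allowed CPU set once on the parent cgroup.
// On cgroup v2, children that leave cpuset.cpus empty follow the parent's
// effective cpuset, so this is the only place that needs updating.
func setPartitionCpus(partition, cpus string) error {
	return os.WriteFile(filepath.Join(partition, "cpuset.cpus"), []byte(cpus), 0o644)
}

// addTaskCgroup creates a child cgroup under the partition; it inherits the
// partition's effective cpuset without any per-task cpuset writes.
func addTaskCgroup(partition, taskID string) error {
	return os.MkdirAll(filepath.Join(partition, taskID), 0o755)
}
```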
* cr: tweaks for feedback