* Rename pages to include roles
* Models and adapters
* [ui] Any policy checks in the UI now check for roles' policies as well as token policies (#18346)
* combinedPolicies as a concept
* Classic decorator on role adapter
* We added a new request for roles, so the test that depended on a specific order of requests became flaky
* Mirage roles cluster scaffolded
* Acceptance test for roles and policies on the login page
* Update mirage mock for nodes fetch to account for role policies / empty token.policies
* Roles-derived policies checks
* [ui] Access Control with Roles and Tokens (#18413)
* top level policies routes moved into access control
* A few more routes and name cleanup
* Delog and test fixes to account for new url prefix and document titles
* Overview page
* Tokens and Roles routes
* Tokens helios table
* Add a role
* Hacky role page and deletion
* New policy keyboard shortcut and roles breadcrumb nav
* If you leave New Role but haven't made any changes, remove the newly-created record from the store
* Roles index list and general role route crud
* Roles index actually links to roles now
* Helios button styles for new roles and policies
* Handle when you try to create a new role without having any policies
* Token editing generally
* Create Token functionality
* Can't delete self-token, but management token editing and deleting is fine
* Upgrading helios caused codemirror to explode, shimmed
* Policies table fix
* Without the bang-element condition, the modifier would refire over and over
* Token TTL or Time setting
* time will take you on
* Mirage hooks for create and list roles
* Ensure policy names only use allowed characters in mirage mocks
* Mirage mocked roles and policies in the default cluster
* log and lintfix
* chromedriver to 2.1.2
* unused unit tests removed
* Nice profile dropdown
* With the HDS accordion, rename our internal component scss ref
* design revisions after discussion
* Tooltip on deleted-policy tokens
* Two-step button peripheral isDeleting code removed
* Never to null on token save
* Copyright headers added and empty route files removed
* acceptance test fixes for policies endpoint
* Route for updating a token
* Policies testfixes
* Ember on-click-outside modifier upgraded with general ember-modifier upgrade
* Test adjustments to account for new profile header dropdown
* Test adjustments for tokens via policy pages
* Removed an unused route
* Access Control index page tests
* a11y tests
* Tokens index acceptance tests generally
* Lintfix
* Token edit page tests
* Token editing tests
* New token expiration tests
* Roles Index tests
* Role editing policies tests
* A complete set of Access Control Roles tests
* Policies test
* Be more specific about which row to check for expiration time
* Nil check on expirationTime equality
* Management tokens shouldn't show No Roles/Policies, give them their own designation
* Route guard on selftoken, conditional columns, and afterModel at parent to prevent orphaned policies on tokens/roles from stopping a new save
* Policy unloading on delete and other todos plus autofocus conditionally re-enabled
* Invalid policies non-links now a concept for Roles index
* HDS style links to make job.variables.alert links look like links again
* Mirage finding looks weird so making model async in hash even though redundant
* Drop rsvp
* RSVP wasn't the problem, cached lookups were
* remove old todo comments
* de-log
It includes the work on the state store, the RPC server, the HTTP server, the Go API package, and the CLI command. To read more on the actual functionality, refer to the RFCs [NMD-178] Locking with Nomad Variables and [NMD-179] Leader election using locking mechanism for the Autoscaler.
This feature helps operators remove a failed/left node from the Serf layer immediately,
without waiting 24 hours for the node to be reaped.
* Update CLI with prune flag
* Update API /v1/agent/force-leave with prune query string parameter
* Update CLI and API doc
* Add unit test
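As a rough illustration of the updated endpoint, the sketch below issues a force-leave request with the new `prune` query string parameter; the agent address, HTTP method, and the `node` parameter name are assumptions for the example, not details taken from this change:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	base := "http://127.0.0.1:4646" // assumed local agent address

	params := url.Values{}
	params.Set("node", "client-node-1") // node to force-leave (illustrative name)
	params.Set("prune", "true")         // new: also remove the node from the Serf member list

	// Assumes the endpoint accepts a POST; adjust to match the documented method.
	resp, err := http.Post(base+"/v1/agent/force-leave?"+params.Encode(), "", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```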
Add structs and fields to support the Nomad Pools Governance Enterprise
feature of controlling node pool access via namespaces.
Nomad Enterprise allows users to specify a default node pool to be used
by jobs that don't specify one. In order to accomplish this, it's
necessary to distinguish between a job that explicitly uses the
`default` node pool and one that did not specify any.
If the `default` node pool is set during job canonicalization it's
impossible to do this, so this commit allows a job to have an empty node
pool value during registration but sets it to `default` in the admission
controller mutator.
In order to guarantee state consistency the state store validates that
the job node pool is set and exists before inserting it.
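A minimal sketch of the flow described above, with every type and function name invented for illustration (the real admission mutator and state store code differ):

```go
package main

import (
	"errors"
	"fmt"
)

// Job is a stand-in for the server-side job struct; only the field relevant
// to this sketch is shown.
type Job struct {
	Name     string
	NodePool string // left empty at registration when the jobspec omits it
}

// mutateNodePool mimics the admission controller mutator: a job that did not
// specify a node pool gets `default` here rather than during canonicalization,
// so the server can still tell "explicitly default" apart from "unset".
func mutateNodePool(job *Job) {
	if job.NodePool == "" {
		job.NodePool = "default"
	}
}

// upsertJob mimics the state store guard: by the time a job is inserted, its
// node pool must be set and must exist.
func upsertJob(job *Job, existingPools map[string]bool) error {
	if job.NodePool == "" {
		return errors.New("job is missing a node pool")
	}
	if !existingPools[job.NodePool] {
		return fmt.Errorf("node pool %q does not exist", job.NodePool)
	}
	return nil
}

func main() {
	pools := map[string]bool{"default": true, "gpu": true}

	job := &Job{Name: "example"} // jobspec did not set node_pool
	mutateNodePool(job)

	if err := upsertJob(job, pools); err != nil {
		fmt.Println("rejected:", err)
		return
	}
	fmt.Println("inserted job into pool:", job.NodePool)
}
```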
Implement scheduler support for node pool:
* When a scheduler is invoked, we get a set of the ready nodes in the DCs that
are allowed for that job. Extend the filter to include the node pool.
* Ensure that changes to a job's node pool are picked up as destructive
allocation updates.
* Add `NodesInPool` as a metric to all reporting done by the scheduler.
* Add the node-in-pool filter to the `Node.Register` RPC so that we don't
generate spurious evals for nodes in the wrong pool.
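A simplified sketch of the extended filter; the types and function here are invented for illustration, and `NodesInPool` is simply the size of the filtered set as described above:

```go
package main

import "fmt"

// Node is a stand-in for a client node as seen by the scheduler.
type Node struct {
	Name       string
	Datacenter string
	NodePool   string
	Ready      bool
}

// readyNodesForJob filters ready nodes by the job's datacenters and, with this
// change, by its node pool as well. len(result) is what would be reported as
// the NodesInPool metric.
func readyNodesForJob(nodes []Node, datacenters map[string]bool, pool string) []Node {
	var out []Node
	for _, n := range nodes {
		if !n.Ready || !datacenters[n.Datacenter] {
			continue
		}
		if n.NodePool != pool {
			continue // node is in the wrong pool; skip it
		}
		out = append(out, n)
	}
	return out
}

func main() {
	nodes := []Node{
		{Name: "a", Datacenter: "dc1", NodePool: "default", Ready: true},
		{Name: "b", Datacenter: "dc1", NodePool: "gpu", Ready: true},
		{Name: "c", Datacenter: "dc2", NodePool: "default", Ready: true},
	}
	eligible := readyNodesForJob(nodes, map[string]bool{"dc1": true}, "default")
	fmt.Printf("NodesInPool: %d\n", len(eligible)) // only node "a" qualifies
}
```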
Implements the HTTP API associated with the `NodePool.ListJobs` RPC, including
the `api` package for the public API and documentation.
Update the `NodePool.ListJobs` RPC to fix the missing handling of the special
"all" pool.
This changeset only adds the `node_pool` field to the jobspec, and ensures that
it gets picked up correctly as a change. Without the rest of the implementation
landed yet, the field will be ignored.
When the server restarts for the upgrade, it loads the `structs.Job` from the
Raft snapshot/logs. The jobspec has long since been parsed, so none of the
guards around the default value are in play. The empty field value for `Enabled`
is the zero value, which is false.
This doesn't impact any running allocation because we don't replace running
allocations when either the client or server restart. But as soon as any
allocation gets rescheduled (ex. you drain all your clients during upgrades),
it'll be using the `structs.Job` that the server has, which has `Enabled =
false`, and logs will not be collected.
This changeset fixes the bug by adding a new field `Disabled` which defaults to
false (so that the zero value works), and deprecates the old field.
Fixes #17076
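A small stand-alone sketch of the zero-value problem and the fix; the structs and JSON decoding below are illustrative stand-ins for the persisted `structs.Job` fields, not the real code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Old shape: a job persisted before the field existed decodes Enabled as
// false, silently turning log collection off after an upgrade.
type oldLogConfig struct {
	Enabled bool
}

// New shape: the field is inverted so that the zero value (false) means
// "collection enabled", which is what older persisted jobs expect.
type newLogConfig struct {
	Disabled bool
}

func main() {
	oldSnapshot := []byte(`{}`) // a job persisted before either field existed

	var oldCfg oldLogConfig
	_ = json.Unmarshal(oldSnapshot, &oldCfg)
	fmt.Println("old field, logs collected?", oldCfg.Enabled) // false: the bug

	var newCfg newLogConfig
	_ = json.Unmarshal(oldSnapshot, &newCfg)
	fmt.Println("new field, logs collected?", !newCfg.Disabled) // true: the fix
}
```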
This PR modifies references to the envoyproxy/envoy docker image to
explicitly include the docker.io prefix. This does not affect existing
users, but makes things easier for Podman users, who otherwise need to
specify the full name because Podman does not default to docker.io
Some Nomad users ship application logs out-of-band via syslog. For these users
having `logmon` (and `docker_logger`) running is unnecessary overhead. Allow
disabling the logmon and pointing the task's stdout/stderr to /dev/null.
This changeset is the first of several incremental improvements to log
collection short of full-on logging plugins. The next step will likely be to
extend the internal-only task driver configuration so that cluster
administrators can turn off log collection for the entire driver.
---
Fixes: #11175
Co-authored-by: Thomas Weber <towe75@googlemail.com>
* api: enable support for setting original source alongside job
This PR adds support for setting job source material along with
the registration of a job.
This includes a new HTTP endpoint and a new RPC endpoint for querying
the original source of a job. The HTTP endpoint is
/v1/job/<id>/submission?version=<version> and the RPC method is
Job.GetJobSubmission (a request sketch follows this list).
The job source (if submitted; doing so is always optional) is stored in
the job_submission memdb table, separately from the actual job. This way
we do not incur the overhead of reading the large string field throughout
normal job operations.
The server config now includes job_max_source_size for configuring
the maximum size the job source may be, before the server simply
drops the source material. This should help prevent Bad Things from
happening when huge jobs are submitted. If the value is set to 0,
all job source material will be dropped.
* api: avoid writing var content to disk for parsing
* api: move submission validation into RPC layer
* api: return an error if updating a job submission without namespace or job id
* api: be exact about the job index we associate a submission with (modify)
* api: reword api docs scheduling
* api: prune all but the last 6 job submissions
* api: protect against nil job submission in job validation
* api: set max job source size in test server
* api: fixups from pr
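A minimal request sketch for the submission endpoint described above; the agent address and response handling are assumptions for the example:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint from the description above: /v1/job/<id>/submission?version=<version>.
	jobID := "example"
	version := 0
	url := fmt.Sprintf("http://127.0.0.1:4646/v1/job/%s/submission?version=%d", jobID, version)

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Submitting source is optional, so a job may have no submission stored
	// (or it may have been dropped if it exceeded job_max_source_size).
	if resp.StatusCode != http.StatusOK {
		fmt.Println("no submission available:", resp.Status)
		return
	}

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // raw submission payload as returned by the agent
}
```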
* docs: dispatch_payload docs had some weirdness
Docs said "Examples" when there was only 1 example. Not sure what the
floating "to" in the description was for.
* docs: missing a heading level on jobs api docs
The job evaluate endpoint creates a new evaluation for the job which is
a write operation. This change modifies the necessary capability from
`read-job` to `submit-job` to better reflect this.
* docs: add dynamic node metadata api docs
Also update all paths in the client API docs to explicitly state the
`/v1/` prefix. We're inconsistent about that, but I think it's better to
display the full path than to only show the fragment. If we ever do a
`/v2/` whether or not we explicitly state `/v1/` in our docs won't be
our greatest concern.
* docs: add task-api docs
Add `identity` jobspec block to expose workload identity tokens to tasks.
---------
Co-authored-by: Anders <mail@anars.dk>
Co-authored-by: Tim Gross <tgross@hashicorp.com>
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>
The ACL token decoding was not correctly handling time duration
syntax such as "1h", which forced people to use the nanosecond
representation via the HTTP API.
The change adds an unmarshal function which allows this syntax,
along with the other styles, to be handled correctly.
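A sketch of the unmarshalling approach: a field type whose decoder accepts both a duration string such as "1h" and the raw nanosecond integer. The type and field names are illustrative, not the actual Nomad API types:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// flexibleDuration accepts either a duration string ("1h", "30m") or a plain
// number of nanoseconds, matching how callers previously had to express TTLs
// over the HTTP API.
type flexibleDuration time.Duration

func (d *flexibleDuration) UnmarshalJSON(data []byte) error {
	// Try the string form first: "1h", "90s", etc.
	var s string
	if err := json.Unmarshal(data, &s); err == nil {
		parsed, err := time.ParseDuration(s)
		if err != nil {
			return err
		}
		*d = flexibleDuration(parsed)
		return nil
	}

	// Fall back to the integer nanosecond representation.
	var ns int64
	if err := json.Unmarshal(data, &ns); err != nil {
		return err
	}
	*d = flexibleDuration(ns)
	return nil
}

type tokenRequest struct {
	ExpirationTTL flexibleDuration `json:"ExpirationTTL"` // field name illustrative
}

func main() {
	for _, payload := range []string{
		`{"ExpirationTTL": "1h"}`,
		`{"ExpirationTTL": 3600000000000}`,
	} {
		var req tokenRequest
		if err := json.Unmarshal([]byte(payload), &req); err != nil {
			panic(err)
		}
		fmt.Println(time.Duration(req.ExpirationTTL)) // both print 1h0m0s
	}
}
```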