Our git pre-push hook already prevents Nomad Enterprise code from getting pushed
anywhere but its own repo. But this hook only works for files on the current
worktree (checkout). Were you to fetch an Enterprise tag into your local
Community Edition repo but not have it checked out, and then `git push --tags`,
you'd push that tag and the associated commit history.
Add tag filtering to the pre-push hook to prevent Enterprise tags (and the
older `+pro` SKU tags) from getting pushed to the Community Edition repo.
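A minimal sketch of the kind of filtering involved, assuming the `+ent` and `+pro` tag patterns (the real hook's patterns may differ): git feeds a pre-push hook one line per ref on stdin, in the form `<local ref> <local sha> <remote ref> <remote sha>`, and a non-zero exit aborts the push.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) < 4 {
			continue
		}
		localRef := fields[0]
		// Reject Enterprise ("+ent") and legacy "+pro" SKU tags even when
		// the tagged commit isn't checked out on the current worktree.
		if strings.HasPrefix(localRef, "refs/tags/") &&
			(strings.Contains(localRef, "+ent") || strings.Contains(localRef, "+pro")) {
			fmt.Fprintf(os.Stderr, "refusing to push enterprise tag %s\n", localRef)
			os.Exit(1)
		}
	}
}
```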
Clusters that have gone through several upgrades have been found
to include keyring material which has an empty RSA block.
More recent versions of Nomad omit an empty RSA block when writing
to disk, so the panic does not occur there. Older versions, however,
did not have this struct tag, meaning we wrote an empty JSON block
which the current version does not account for.
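A sketch of the shape of the problem, with hypothetical type and field names (the real keyring types differ): newer versions tag the RSA field so an empty block is never written, but the load path must still tolerate material persisted by older versions.

```go
package keyring

import "encoding/json"

// keyWrapper is a hypothetical stand-in for the persisted keyring material.
type keyWrapper struct {
	// omitempty keeps an empty RSA block from being written to disk.
	RSAKey []byte `json:"RSAKey,omitempty"`
}

// loadKey must not assume the RSA block is either present or absent: older
// versions lacked the struct tag and persisted an empty JSON value.
func loadKey(data []byte) (*keyWrapper, error) {
	var kw keyWrapper
	if err := json.Unmarshal(data, &kw); err != nil {
		return nil, err
	}
	if len(kw.RSAKey) == 0 {
		kw.RSAKey = nil // treat an empty block the same as a missing one
	}
	return &kw, nil
}
```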
github.com/moby/sys/capability is a fork of the (no longer maintained)
github.com/syndtr/gocapability package.
For changes since the fork took place, see
https://github.com/moby/sys/blob/main/capability/CHANGELOG.md
Note that the "workaround for RHEL6" is removed for a number of reasons.
Feel free to choose the one you like the most; any one of them is sufficient
(a sketch of reading cap_last_cap follows the list):
1. /proc/sys/kernel/cap_last_cap is available since RHEL 6.7
(kernel 2.6.32-573.el6), released 9 years ago (2015-07-22).
2. It incorrectly returns CAP_BLOCK_SUSPEND (36), which was only added
in kernel v3.5 and was never backported to RHEL6 kernels. The
correct value for RHEL6 would be CAP_MAC_ADMIN (33).
3. As far as upstream kernels go, /proc/sys/kernel/cap_last_cap was
added in kernel v3.2, and a correct value depends on the kernel
version. It could be CAP_WAKE_ALARM (35), added to kernel v3.0, or
CAP_SYSLOG (34), added to kernel v2.6.38, or possibly a lesser value
for even older kernels.
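For reference, a minimal Go sketch (not the package's actual code) of reading the highest capability number the running kernel supports:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// lastCap reads the highest capability number supported by the kernel.
func lastCap() (int, error) {
	data, err := os.ReadFile("/proc/sys/kernel/cap_last_cap")
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(data)))
}

func main() {
	n, err := lastCap()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cap_last_cap:", n)
}
```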
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
When we removed the time table in #24112 we introduced a bug where if a previous
version of Nomad had written a time table entry, we'd return from the restore
loop early and never load the rest of the FSM. This would result in a mostly or
partially wiped state for that Nomad node, which would then be out of sync with
its peers (which would also have the same problem on upgrade).
The bug only occurs when the FSM is being restored from snapshot, which isn't
the case if you test with a server that's only written Raft logs and not
snapshotted them.
While fixing this bug, we still need to ensure we're reading the time table
entries even if we're throwing them away, so that we move the snapshot reader
along to the next full entry.
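A sketch of the fix's shape, with hypothetical names and a decoder type assumed from go-msgpack (the real restore loop differs): the legacy entry must be fully decoded, even though it is discarded.

```go
package nomad

import (
	"time"

	"github.com/hashicorp/go-msgpack/codec"
)

// restoreTimeTable decodes and discards a legacy time table snapshot entry.
// Decoding is still required so the snapshot reader advances to the start of
// the next FSM entry; returning out of the restore loop here would leave the
// rest of the FSM unrestored.
func restoreTimeTable(dec *codec.Decoder) error {
	var entries []struct {
		Index uint64
		Time  time.Time
	}
	// Entries are intentionally dropped: object timestamps are used instead.
	return dec.Decode(&entries)
}
```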
Fixes: https://github.com/hashicorp/nomad/issues/24411
When multiple templates with API functions are included in a task, it's
possible for consul-template to re-render templates as it creates
watchers, overwriting render event data. This change uses event fields
that do not get overwritten, and only executes the change mode for
templates that were actually written to disk.
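A sketch of the selection logic, assuming consul-template's RenderEvent exposes DidRender and LastDidRender (the exact fields used may differ); triggerChangeMode stands in for Nomad's change-mode handling:

```go
package template

import "github.com/hashicorp/consul-template/manager"

// handleRenders fires change modes only for templates actually written to
// disk, using event fields that re-renders do not overwrite.
func handleRenders(runner *manager.Runner, triggerChangeMode func(id string)) {
	for id, event := range runner.RenderEvents() {
		if event.DidRender && !event.LastDidRender.IsZero() {
			triggerChangeMode(id)
		}
	}
}
```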
---------
Co-authored-by: Tim Gross <tgross@hashicorp.com>
The template aims to ensure all PRs include the required
information for reviewers. The checklist items help ensure merging
happens quickly and in the correct manner.
Co-authored-by: Tim Gross <tgross@hashicorp.com>
When creating or registering a CSI volume, the RPC handler uses the volume
specification's namespace instead of the request namespace. This works as
intended, but the ACL check is only on the request namespace.
This allows a cross-namespace ACL bypass for authenticated users who have
`csi-write-volume` capabilities in one namespace but not another namespace. Such
a user can set the volume specification to a forbidden namespace while setting
the `-namespace` flag in the CLI or API. The ACL check happens against the
namespace they do have permission to, but the volume is created in the forbidden
namespace.
This changeset fixes the bug by moving the namespace check into the loop over
the volumes being written by the RPCs. It also updates the tests to better cover
ACL checking in these two RPCs.
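A simplified sketch of the fix's shape (not the real RPC handlers): the capability check moves inside the loop over volumes so each volume is authorized against its own namespace rather than the request namespace.

```go
package nomad

import (
	"github.com/hashicorp/nomad/acl"
	"github.com/hashicorp/nomad/nomad/structs"
)

// checkVolumeWrite authorizes each volume against the namespace it will
// actually be written to, falling back to the request namespace only when
// the volume specification leaves it unset.
func checkVolumeWrite(aclObj *acl.ACL, reqNamespace string, vols []*structs.CSIVolume) error {
	for _, vol := range vols {
		ns := vol.Namespace
		if ns == "" {
			ns = reqNamespace
		}
		if !aclObj.AllowNsOp(ns, acl.NamespaceCapabilityCSIWriteVolume) {
			return structs.ErrPermissionDenied
		}
	}
	return nil
}
```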
Ref: CVE-2024-10975
Ref: https://hashicorp.atlassian.net/browse/SECVULN-15463
Fixes: https://github.com/hashicorp/nomad/issues/24397
* Updates the Task Lifecycle Status chart to show which pre/poststart task may have failed
* Default colour to prevent HDS error
* De-duplicated data-test attr and added is-active and is-finished test classes
* Failed and Pending state tests
This opens up dispatching parameterized jobs from systems
that do not allow modifying the HTTP request body they send.
For example, these two requests are equivalent:
POST '{"Payload": "'"$(base64 <<< "hello")"'"}' /v1/job/my-job/dispatch
POST 'hello' /v1/job/my-job/dispatch/payload
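A minimal client-side sketch of the second form (the address and content type are illustrative):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// The raw request body becomes the dispatch payload directly; no JSON
	// envelope or base64 encoding is needed on the caller's side.
	resp, err := http.Post(
		"http://127.0.0.1:4646/v1/job/my-job/dispatch/payload",
		"application/octet-stream",
		strings.NewReader("hello"),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```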
* Upon sign-in post-expiry/403, redirect to original route
* Tests for token expiry re-routing
* Had made one of the new test tokens a management token, which conflicted with another test but was not necessary
* connect: handle grpc_address as gosockaddr/template string
This PR fixes a bug where consul.grpc_address could not be set using
a go-sockaddr/template string. This was inconsistent with how we accept
such strings for consul.address values.
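For context, a minimal sketch of resolving such a string with go-sockaddr's template package (the template value is only an example):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/go-sockaddr/template"
)

func main() {
	// Resolve a go-sockaddr template string to a concrete address, as was
	// already supported for consul.address values.
	addr, err := template.Parse(`{{ GetPrivateIP }}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(addr)
}
```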
* add changelog
* drivers: move executor process out of v1 task cgroup after process starts
This PR changes the behavior of the raw exec task driver on old cgroups v1
systems such that the executor process is no longer a member of the cgroups
created for the task. Now, the executor process is placed into those
cgroups and starts the task child process (just as before), but then exits
those cgroups and remains in the nomad parent cgroup. This change makes the
behavior roughly similar to cgroups v2 systems, where we never
have the executor enter the task cgroup to begin with (because we can
directly clone(3) the task process into it).
Fixes #23951
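The underlying mechanism, as a sketch (paths are illustrative, not the executor's actual code): on cgroups v1, writing a PID to a cgroup's cgroup.procs file moves the process into that cgroup and out of its current cgroup in the same hierarchy.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// moveSelf moves this process into the given cgroup by writing its PID to
// the destination cgroup.procs file.
func moveSelf(cgroupDir string) error {
	pid := strconv.Itoa(os.Getpid())
	return os.WriteFile(cgroupDir+"/cgroup.procs", []byte(pid), 0o644)
}

func main() {
	// After starting the task child inside the task cgroup, the executor
	// moves itself back up into the parent nomad cgroup.
	if err := moveSelf("/sys/fs/cgroup/cpu/nomad"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```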
* executor: handle non-linux case
* cgroups: add test case for no executor process in task cgroup (v1)
* add changelog
* drivers: also move executor out of cpuset cgroup
In #10193 we introduced a testing helper that spins up a client RPC server
without the rest of the client operations so that we can make server-side client
RPC tests lighter. But this wasn't actually ever wired up to the intended
target. While working on Dynamic Host Volumes I noticed that this would be
useful for RPC tests.
This changeset fixes some bugs in the helper that arose from client code drift,
and makes it used by the client RPC tests for CSI. This will also get used for
the DHV RPC tests.
Ref: https://github.com/hashicorp/nomad/pull/10193
* escaping newlines is not allowed in go-sockaddr template
* client{} block in client section
* tiny extra clarification that the NOMAD_ADDR is an example
Fixes a bug in the AllocatedResources.Comparable method, which resulted in
reporting less required resources than actually expected. This could result in
overscheduling of allocations on a single node and overlapping cgroup cpusets.
* ui: show region in header gutter when only one region exists
This PR adds a plain text label of the region to the header when there is
only one region present. Before, nothing was shown in this case, and a
dropdown was shown on federated clusters.
The use case here is for operators of multiple non-federated Nomad clusters,
when all the UIs involved otherwise look identical.
* [ui] Signing in with a token explicitly sets the region dropdown activeRegion (#24347)
* Signing in with a token explicitly sets the region dropdown activeRegion
* Test and Select a Region default text
* Account for 403 on mocked agent members req
* Don't show the region if it isn't set in agent config
* Small padding css change
* unit test condition moved to stubbable acceptance test
---------
Co-authored-by: Phil Renaud <phil.renaud@hashicorp.com>
The core scheduler relies on a special table in the state store, the TimeTable,
to figure out which objects can be GC'd. The TimeTable correlates Raft indices
with objects' insertion times, a solution we used before most of the objects we
store in the state contained timestamps. This introduced a bit of memory
overhead and complexity, but most importantly meant that any GC threshold users
set greater than timeTableLimit = 72 * time.Hour was ignored. This PR removes
the TimeTable and relies on object timestamps to determine whether objects can
be GC'd.
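A sketch of the new eligibility check (names are illustrative, not the real GC code):

```go
package nomad

import "time"

// isGCable decides eligibility straight from an object's own timestamp, so
// thresholds beyond the old 72-hour timeTableLimit work as configured.
func isGCable(objTime time.Time, threshold time.Duration) bool {
	return time.Since(objTime) > threshold
}
```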