Tim Gross 6d806a9934 services: fix data integrity errors for Nomad native services (#20590)
This changeset fixes three potential data integrity issues between allocations
and their Nomad native service registrations.

* When a node is marked down because it missed heartbeats, we remove Vault and
  Consul tokens (for the pre-Workload Identity workflows) after we've written
  the node update to Raft. This is unavoidably non-transactional because the
  Consul and Vault servers aren't in the same Raft cluster as Nomad itself. But
  we've unnecessarily mirrored this same behavior to deregister Nomad
  services. This makes it possible for the leader to successfully write the node
  update to Raft without removing services.

  To address this, move the delete into the same Raft transaction. One minor
  caveat with this approach is the upgrade path: if the leader is upgraded first
  and a node is marked down during this window, older followers will have stale
  information until they are also upgraded. This is unavoidable without
  requiring the leader to unconditionally make an extra Raft write for every
  down node until 2 LTS versions after Nomad 1.8.0. This temporary reduction in
  data integrity for stale reads seems like a reasonable tradeoff.

* When an allocation is marked client-terminal from the client in
  `UpdateAllocsFromClient`, we have an opportunity to ensure data integrity by
  deregistering services for that allocation.

* When an allocation is deleted during eval garbage collection, we have an
  opportunity to ensure data integrity by deregistering services for that
  allocation. This is a cheap no-op if the allocation has been previously marked
  client-terminal.

This changeset does not address client-side retries for the originally reported
issue, which will be done in a separate PR.

Ref: https://github.com/hashicorp/nomad/issues/16616
2024-05-15 11:56:07 -04:00

Nomad License: BUSL-1.1


Nomad is a simple and flexible workload orchestrator to deploy and manage containers (docker, podman), non-containerized applications (executable, Java), and virtual machines (qemu) across on-prem and clouds at scale.

Nomad is supported on Linux, Windows, and macOS. A commercial version of Nomad, Nomad Enterprise, is also available.

Nomad provides several key features:

  • Deploy Containers and Legacy Applications: Nomad's flexibility as an orchestrator enables an organization to run containers, legacy, and batch applications together on the same infrastructure. Nomad brings core orchestration benefits to legacy applications without needing to containerize, via pluggable task drivers.

  • Simple & Reliable: Nomad runs as a single binary and is entirely self-contained, combining resource management and scheduling into a single system. Nomad does not require any external services for storage or coordination. Nomad automatically handles application, node, and driver failures. Nomad is distributed and resilient, using leader election and state replication to provide high availability in the event of failures.

  • Device Plugins & GPU Support: Nomad offers built-in support for GPU workloads such as machine learning (ML) and artificial intelligence (AI). Nomad uses device plugins to automatically detect and utilize resources from hardware devices such as GPUs, FPGAs, and TPUs.

  • Federation for Multi-Region, Multi-Cloud: Nomad was designed to support infrastructure at a global scale. Nomad supports federation out-of-the-box and can deploy applications across multiple regions and clouds.

  • Proven Scalability: Nomad is optimistically concurrent, which increases throughput and reduces latency for workloads. Nomad has been proven to scale to clusters of 10K+ nodes in real-world production environments.

  • HashiCorp Ecosystem: Nomad integrates seamlessly with Terraform, Consul, and Vault for provisioning, service discovery, and secrets management.

Quick Start

Testing

See Developer: Getting Started for instructions on setting up a local Nomad cluster for non-production use.

Optionally, find Terraform manifests for bringing up a development Nomad cluster on a public cloud in the terraform directory.

Production

See Developer: Nomad Reference Architecture for recommended practices and a reference architecture for production deployments.

Documentation

Full, comprehensive documentation is available on the Nomad website: https://developer.hashicorp.com/nomad/docs

Guides are available on HashiCorp Developer.

Roadmap

A timeline of major features expected for the next release or two can be found in the Public Roadmap.

This roadmap is a best guess at any given point, and both release dates and projects in each release are subject to change. Do not take any of these items as commitments, especially ones later than one major release away.

Contributing

See the contributing directory for more developer documentation.
