---
layout: docs
page_title: Networking
description: |-
  Nomad's networking features connect your workloads without running additional components like DNS servers and load balancers. Learn about workload allocation networking and bridge networking, which uses Container Network Interface (CNI) plugins or Docker. Review how Nomad networking differs from Kubernetes networking.
---
# Networking
This page provides conceptual information about networking in Nomad: how
allocation and bridge networking work, common patterns and configurations, and
how Nomad networking differs from Kubernetes networking.
## Introduction
Nomad is a workload orchestrator and focuses on the scheduling aspects of a
deployment, touching areas such as networking as little as possible.
Networking in Nomad is usually done via _configuration_ instead of
_infrastructure_. This means that Nomad provides ways for you to access the
information you need to connect your workloads instead of running additional
components behind the scenes, such as DNS servers and load balancers.
## Allocation networking
The base unit of scheduling in Nomad is an
[allocation](/nomad/docs/glossary#allocation), which means that all
tasks in the same allocation run on the same client and share common resources,
such as disk and networking. Allocations can request access to network
resources, such as ports, using the
[`network`](/nomad/docs/job-specification/network) block. You can define a basic
`network` block as follows:
```hcl
job "..." {
# ...
group "..." {
network {
port "http" {}
}
# ...
}
}
```
Nomad reserves a random port on the client between
[`min_dynamic_port`](/nomad/docs/configuration/client#min_dynamic_port) and
[`max_dynamic_port`](/nomad/docs/configuration/client#max_dynamic_port) that has
not been allocated yet. Nomad then creates a port mapping from the host network
interface to the allocation.
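You can adjust this dynamic port range in the client configuration. A minimal
sketch, shown with what we assume are the default values:

```hcl
client {
  # Dynamic port range used for random port allocation.
  # These are the assumed defaults; adjust the range if it conflicts
  # with other services running on the host.
  min_dynamic_port = 20000
  max_dynamic_port = 32000
}
```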
[comment-image-source]:
https://drive.google.com/file/d/1q4a2ab0TyLEPdWiO2DIianAPWuPqLqZ4/view?usp=share_link
[![Nomad Port
Mapping](/img/networking/port_mapping.png)](/img/networking/port_mapping.png)
Tasks can read the selected port number from the
[`NOMAD_PORT_<label>`](/nomad/docs/runtime/environment#network-related-variables)
environment variable and bind to it, exposing the workload at the client's IP
address and the given port.
The specific configuration process depends on what you are running. Typically,
you use a
[`template`](/nomad/docs/job-specification/template#template-examples) block to
render a configuration file, or pass the value as a command-line argument, as in
the following example:
```hcl
job "..." {
# ...
group "..." {
network {
port "http" {}
}
task "..." {
# ...
config {
args = [
"--port=${NOMAD_PORT_http}",
]
}
}
}
}
```
It is also possible to request a specific port number instead of a random one by
setting a [`static`](/nomad/docs/job-specification/network#static) value for the
`port`. Reserve static ports only for specialized workloads, such as load
balancers and system jobs, since static ports must be managed manually to avoid
scheduling collisions.
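For example, a load balancer group might reserve a fixed host port. This is a
sketch; the port label and number are illustrative:

```hcl
job "..." {
  # ...

  group "..." {
    network {
      # Reserve a specific host port instead of a random one.
      # The label "lb" and port 8080 are illustrative.
      port "lb" {
        static = 8080
      }
    }
  }
}
```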
With the task listening on one of the client's ports, other processes can access
the task directly using the client's IP address and port, but first those
processes need to find these values. This process is called [service
discovery](/nomad/docs/networking/service-discovery).
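As a minimal sketch, you can register the port as a service so that other
workloads can look up its address; the service name here is illustrative:

```hcl
group "..." {
  network {
    port "http" {}
  }

  service {
    name     = "my-app" # illustrative name
    port     = "http"
    provider = "nomad"  # or "consul" to register with Consul
  }
}
```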
When using IP and port to connect allocations, make sure your network topology
and routing configuration allow the Nomad clients to communicate with each
other.
## Bridge networking
Linux clients support a network
[`mode`](/nomad/docs/job-specification/network#mode) called
[`bridge`](/nomad/docs/job-specification/network#bridge). A bridge network acts
like a virtual network switch, allowing processes connected to the bridge to
reach each other while isolating them from others.
### Container Network Interface (CNI) reference plugins
Nomad's bridge network leverages [CNI reference
plugins](https://github.com/containernetworking/plugins) to provide an
operating-system-agnostic interface for configuring workload networking. Nomad's
network plugin support extends Nomad's built-in compute resource scheduling to
allow scheduling tasks with specialty network configurations, which Nomad
implements with a combination of CNI reference plugins and CNI configuration
files.
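In addition to the built-in bridge, you can point an allocation at a custom CNI
network configuration by name. A sketch, assuming a CNI configuration called
`mynet` installed in the client's CNI configuration directory:

```hcl
group "..." {
  network {
    # "mynet" is a hypothetical CNI network configuration loaded
    # from the client's cni_config_dir.
    mode = "cni/mynet"
  }
}
```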
### How bridge networking works
When an allocation uses bridge networking, the Nomad agent creates a bridge
called `nomad` (or the value set in
[`bridge_network_name`](/nomad/docs/configuration/client#bridge_network_name))
using the [`bridge` CNI plugin](
https://www.cni.dev/plugins/current/main/bridge/) if one doesn't exist yet.
Before using this mode, you must first [install the CNI
plugins](/nomad/docs/networking/cni/) on your clients. By default, Nomad
creates a single bridge on each Nomad client.
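The related client configuration options, shown here with what we assume are
their default values:

```hcl
client {
  # Directory where Nomad looks for the CNI reference plugins.
  cni_path = "/opt/cni/bin"

  # Name of the bridge Nomad creates; "nomad" is the assumed default.
  bridge_network_name = "nomad"
}
```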
[comment-image-source]:
https://drive.google.com/file/d/1q4a2ab0TyLEPdWiO2DIianAPWuPqLqZ4/view?usp=share_link
[![Nomad Bridge](/img/networking/bridge.png)](/img/networking/bridge.png)
Allocations that use the `bridge` network mode run in an isolated network
namespace and are connected to the bridge. This allows Nomad to map random ports
from the host to specific port numbers inside the allocation that tasks expect.
For example, you can configure an HTTP server that listens on port `3000` by
default with the following `network` block:
```hcl
job "..." {
# ...
group "..." {
network {
mode = "bridge"
port "http" {
to = 3000
}
}
# ...
}
}
```
To allow communication between allocations on different clients, Nomad creates
an `iptables` rule to forward requests from the host network interface to the
bridge. This results in three different network access scopes:
- Tasks that bind to the loopback interface (`localhost` or `127.0.0.1`) are
accessible only from within the allocation.
- Tasks that bind to the bridge (or other general addresses, such as `0.0.0.0`)
without `port` forwarding are only accessible from within the same client.
- Tasks that bind to the bridge (or other general addresses, such as `0.0.0.0`)
with `port` forwarding are accessible from external sources.
~> **Warning:** To prevent any type of external access when using `bridge`
network mode, make sure to bind your workloads to the loopback interface only.
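As an illustration, assuming an application that accepts a hypothetical
`--bind` flag:

```hcl
group "..." {
  network {
    mode = "bridge"
  }

  task "..." {
    # ...

    config {
      args = [
        # Bind to loopback so the task is reachable only from other
        # tasks in the same allocation. The --bind flag is hypothetical.
        "--bind=127.0.0.1",
      ]
    }
  }
}
```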
Bridge networking is at the core of [service
mesh](/nomad/docs/networking/service-mesh) and a requirement when using [Consul
Service Mesh](/nomad/docs/integrations/consul-connect).
### Bridge networking with Docker
The Docker daemon manages its own network configuration and creates its own
[bridge network](https://docs.docker.com/network/bridge/), network namespaces,
and [`iptables` rules](https://docs.docker.com/network/iptables/). Tasks using
the `docker` task driver connect to the Docker bridge instead of the one
created by Nomad, and, by default, each container runs in its own
Docker-managed network namespace.
When using `bridge` network mode, Nomad creates a placeholder container using
the image defined in [`infra_image`](/nomad/docs/drivers/docker#infra_image) to
initialize a Docker network namespace that is shared by all tasks in the
allocation to allow them to communicate with each other.
The Docker task driver has its own task-level
[`network_mode`](/nomad/docs/drivers/docker#network_mode) configuration. Its
default value depends on the group-level `network.mode` configuration.
```hcl
group "..." {
network {
mode = "bridge"
}
task "..." {
driver = "docker"
config {
# This conflicts with the group-level network.mode configuration and
# should not be used.
network_mode = "bridge"
# ...
}
}
}
```
~> **Warning:** The task-level `network_mode` may conflict with the group-level
`network.mode` configuration and generate unexpected results. If you set the
group `network.mode = "bridge"` you should not set the Docker config
`network_mode`.
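For reference, a correct configuration omits the task-level `network_mode`
entirely:

```hcl
group "..." {
  network {
    mode = "bridge"
  }

  task "..." {
    driver = "docker"

    config {
      # Omit network_mode so the task joins the shared network
      # namespace that Nomad creates for the allocation.
      # ...
    }
  }
}
```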
This diagram illustrates what happens when a Docker task is configured
incorrectly.
[comment-image-source]:
https://drive.google.com/file/d/1q4a2ab0TyLEPdWiO2DIianAPWuPqLqZ4/view?usp=share_link
[![Nomad
Bridge](/img/networking/docker_bridge.png)](/img/networking/docker_bridge.png)
The tasks in the rightmost allocation are not able to communicate with each
other using their loopback interface because they were placed in different
network namespaces.
Since the group `network.mode` is `bridge`, Nomad creates the pause container to
establish a shared network namespace for all tasks, but setting the task-level
`network_mode` to `bridge` places the task in a different namespace. This
prevents, for example, a task from communicating with its sidecar proxy in a
service mesh deployment.
Refer to the [`network_mode`](/nomad/docs/drivers/docker#network_mode)
documentation and the [Networking](/nomad/docs/drivers/docker#networking)
section for more information.
-> **Note:** Docker Desktop in non-Linux environments runs a local virtual
machine, adding an extra layer of indirection. Refer to the
[FAQ](/nomad/docs/faq#q-how-to-connect-to-my-host-network-when-using-docker-desktop-windows-and-macos)
for more details.
## Comparison with other tools
### Kubernetes and Docker Compose
Networking in Kubernetes and Docker Compose works differently than in Nomad. To
access a container you use a fully qualified domain name such as `db` in Docker
Compose or `db.prod.svc.cluster.local` in Kubernetes. This process relies on
additional infrastructure to resolve the hostname and distribute the requests
across multiple containers.
Docker Compose allows you to run and manage multiple containers using units
called services.
```yaml
version: "3.9"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"
```
To access a service from another container, you reference the service name
directly, for example using `postgres://db:5432`. To enable this pattern,
Docker Compose includes an [internal DNS
service](https://docs.docker.com/config/containers/container-networking/#dns-services)
and a load balancer that are transparent to the user. When running in Swarm
mode, Docker Compose also requires an overlay network to route requests across
hosts.
Kubernetes provides the
[`Service`](https://kubernetes.io/docs/concepts/services-networking/service/)
abstraction that can be used to declare how a set of Pods are accessed.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
```
To access the Service, you use an FQDN such as
`my-service.prod.svc.cluster.local`. This name is resolved by the [DNS
service](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/),
which is an add-on that runs on all nodes. Along with this service, each node
also runs a
[`kube-proxy`](https://kubernetes.io/docs/concepts/overview/components/#kube-proxy)
instance to distribute requests to all Pods matched by the Service.
You can achieve the same FQDN networking style with Nomad by using [Consul's DNS
interface](/consul/docs/services/discovery/dns-overview), configuring your
clients with [DNS forwarding](/consul/tutorials/networking/dns-forwarding), and
deploying a [load balancer](/nomad/tutorials/load-balancing).
Another key difference from Nomad is that in Kubernetes and Docker Compose each
container has its own IP address, requiring a virtual network to map physical IP
addresses to virtual ones. In the case of Docker Compose in Swarm mode, an
[`overlay` network](https://docs.docker.com/network/overlay/) is also required to enable
traffic across multiple hosts. This allows multiple containers running the same
service to listen on the same port number.
In Nomad, allocations use the IP address of the client in which they are running
and are assigned random port numbers. Nomad service discovery with DNS uses
[`SRV` records](https://en.wikipedia.org/wiki/SRV_record) instead of `A` or
`AAAA` records.
## Next topics
- [Service Discovery](/nomad/docs/networking/service-discovery)
- [Service Mesh](/nomad/docs/networking/service-mesh)
- [Container Network Interface](/nomad/docs/networking/cni) plugins guide
## Additional resources
- [Understanding Networking in Nomad - Karan
Sharma](https://mrkaran.dev/posts/nomad-networking-explained/)
- [Understanding Nomad Networking Patterns - Luiz Aoqui, HashiTalks: Canada
2021](https://www.youtube.com/watch?v=wTA5HxB_uuk)