diff --git a/website/content/docs/concepts/plugins/cni.mdx b/website/content/docs/concepts/plugins/cni.mdx
new file mode 100644
index 000000000..695074bc9
--- /dev/null
+++ b/website/content/docs/concepts/plugins/cni.mdx
@@ -0,0 +1,64 @@
+---
+layout: docs
+page_title: Network Plugins
+description: Learn how Nomad manages custom user-specified network configurations.
+---
+
+# Network plugins
+
+Nomad has built-in support for scheduling compute resources such as
+CPU, memory, and networking. Nomad's network plugin support extends
+this to allow scheduling tasks with purpose-created or specialty network
+configurations. Network plugins are third-party plugins that conform to the
+[Container Network Interface (CNI)][cni_spec] specification.
+
+Network plugins need to be installed and configured on each client. The [Nomad
+installation instructions][nomad_install] recommend installing the [CNI
+reference plugins][cni_ref] because certain Nomad networking features, like
+`bridge` network mode and Consul service mesh, leverage them to provide an
+operating-system agnostic interface to configure workload networking.
+
+Custom networking in Nomad is accomplished with a combination of CNI plugin
+binaries and CNI configuration files.
+
+## CNI plugins
+
+Spec-compliant plugins should work with Nomad. However, it's possible that a
+plugin vendor has implemented their plugin to make non-standard API calls, or
+that it is otherwise non-compliant with the CNI specification. In those
+situations the plugin may not function correctly in a Nomad environment. You
+should verify plugin compatibility with Nomad before deploying in production.
+
+CNI plugins are installed and configured on a per-client basis. Nomad consults
+the path given in the client's [`cni_path`][] to find CNI plugin executables.
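+
+As a minimal sketch, a client agent configuration pointing Nomad at the
+default plugin and configuration locations looks like this:
+
+```hcl
+client {
+  enabled = true
+
+  # Directory where Nomad looks for CNI plugin executables.
+  cni_path = "/opt/cni/bin"
+
+  # Directory where Nomad looks for CNI network configuration files.
+  cni_config_dir = "/opt/cni/config"
+}
+```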
+
+## CNI configuration files
+
+CNI defines a network configuration format for administrators. It contains
+directives for both the orchestrator and the plugins to consume.
+At plugin execution time, this configuration format is interpreted by the
+runtime and transformed into a form to be passed to the plugins.
+
+Nomad loads files with the following extensions from the [`cni_config_dir`][]
+(`/opt/cni/config` by default):
+
+* `.conflist` files are loaded as [network
+ configurations][cni_spec_net_config] that contain a list of plugin
+ configurations.
+
+* `.conf` and `.json` files are loaded as individual [plugin
+ configurations][cni_spec_plugin_config] for a specific network.
+
+## Further reading
+
+You can read more about how Nomad uses CNI plugins in the [CNI section of the
+Nomad Networking documentation](/docs/networking/cni).
+
+[3rd_party_cni]: https://www.cni.dev/docs/#3rd-party-plugins
+[`cni_config_dir`]: /docs/configuration/client#cni_config_dir
+[`cni_path`]: /docs/configuration/client#cni_path
+[cni_ref]: https://github.com/containernetworking/plugins
+[cni_spec]: https://www.cni.dev/docs/spec/
+[cni_spec_net_config]: https://github.com/containernetworking/cni/blob/main/SPEC.md#configuration-format
+[cni_spec_plugin_config]: https://github.com/containernetworking/cni/blob/main/SPEC.md#plugin-configuration-objects
+[nomad_install]: https://developer.hashicorp.com/nomad/tutorials/get-started/get-started-install#post-installation-steps
diff --git a/website/content/docs/configuration/client.mdx b/website/content/docs/configuration/client.mdx
index 827e3361c..0a2fe0ebf 100644
--- a/website/content/docs/configuration/client.mdx
+++ b/website/content/docs/configuration/client.mdx
@@ -133,7 +133,9 @@ client {
- `cni_config_dir` `(string: "/opt/cni/config")` - Sets the directory where CNI
network configuration is located. The client will use this path when fingerprinting
- CNI networks. Filenames should use the `.conflist` extension.
+ CNI networks. Filenames should use the `.conflist` extension. Filenames with
+ the `.conf` or `.json` extensions are loaded as individual plugin
+  configurations.
- `bridge_network_name` `(string: "nomad")` - Sets the name of the bridge to be
created by nomad for allocations running with bridge networking mode on the
diff --git a/website/content/docs/job-specification/network.mdx b/website/content/docs/job-specification/network.mdx
index 23eaa6766..89428e608 100644
--- a/website/content/docs/job-specification/network.mdx
+++ b/website/content/docs/job-specification/network.mdx
@@ -54,6 +54,11 @@ configure the appropriate iptables rules.
Network modes are only supported in allocations running on Linux clients.
All other operating systems use the `host` networking mode.
+~> **Warning:** To prevent any type of external access when using `bridge`
+  network mode, make sure to bind your workloads to the loopback interface
+ only. Refer to the [Bridge networking][docs_networking_bridge] documentation
+ for more information.
+
## `network` Parameters
- `mbits` ([_deprecated_](/docs/upgrade/upgrade-specific#nomad-0-12-0) int: 10) - Specifies the bandwidth required in MBits.
@@ -320,6 +325,7 @@ network {
- Only the `NOMAD_PORT_` and `NOMAD_HOST_PORT_` environment
variables are set for group network ports.
+[docs_networking_bridge]: /docs/networking#bridge-networking
[docker-driver]: /docs/drivers/docker 'Nomad Docker Driver'
[qemu-driver]: /docs/drivers/qemu 'Nomad QEMU Driver'
[connect]: /docs/job-specification/connect 'Nomad Consul Connect Integration'
diff --git a/website/content/docs/networking/cni.mdx b/website/content/docs/networking/cni.mdx
new file mode 100644
index 000000000..9368e70e4
--- /dev/null
+++ b/website/content/docs/networking/cni.mdx
@@ -0,0 +1,217 @@
+---
+layout: docs
+page_title: CNI
+description: |-
+  Learn how Nomad uses Container Network Interface (CNI) plugins and
+  configuration files to provide custom networking for workloads.
+---
+
+# Container Network Interface (CNI)
+
+Nomad has built-in support for scheduling compute resources such as
+CPU, memory, and networking. Nomad's network plugin support extends
+this to allow scheduling tasks with purpose-created or specialty network
+configurations. Network plugins are third-party plugins that conform to the
+[Container Network Interface (CNI)][cni_spec] specification.
+
+Network plugins need to be installed and configured on each client. The [Nomad
+installation instructions][nomad_install] recommend installing the [CNI
+reference plugins][cni_ref] because certain Nomad networking features, like
+`bridge` network mode and Consul service mesh, leverage them to provide an
+operating-system agnostic interface to configure workload networking.
+
+Custom networking in Nomad is accomplished with a combination of CNI plugin
+binaries and CNI configuration files.
+
+## CNI plugins
+
+Spec-compliant plugins should work with Nomad. However, it's possible that a
+plugin vendor has implemented their plugin to make non-standard API calls, or
+that it is otherwise non-compliant with the CNI specification. In those
+situations the plugin may not function correctly in a Nomad environment. You
+should verify plugin compatibility with Nomad before deploying in production.
+
+CNI plugins are installed and configured on a per-client basis. Nomad consults
+the path given in the client's [`cni_path`][] to find CNI plugin executables.
+
+## CNI configuration files
+
+The CNI specification defines a network configuration format for administrators.
+It contains directives for both the orchestrator and the plugins to consume.
+At plugin execution time, this configuration format is interpreted by Nomad
+and transformed into arguments for the plugins.
+
+Nomad loads files with the following extensions from the [`cni_config_dir`][]
+(`/opt/cni/config` by default):
+
+* `.conflist` files are loaded as [network
+ configurations][cni_spec_net_config] that contain a list of plugin
+ configurations.
+
+* `.conf` and `.json` files are loaded as individual [plugin
+ configurations][cni_spec_plugin_config] for a specific network.
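+
+As an illustration, a file such as `/opt/cni/config/mynet.conflist` (the
+filename, bridge name, and subnet here are hypothetical) could define a
+network named `mynet` using the reference plugins:
+
+```json
+{
+  "cniVersion": "0.4.0",
+  "name": "mynet",
+  "plugins": [
+    {
+      "type": "bridge",
+      "bridge": "mynet0",
+      "isGateway": true,
+      "ipMasq": true,
+      "ipam": {
+        "type": "host-local",
+        "ranges": [[{ "subnet": "10.99.0.0/24" }]]
+      }
+    }
+  ]
+}
+```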
+
+## Using CNI networks with Nomad jobs
+
+To specify that a job should use a CNI network, set the task group's
+network [`mode`][] attribute to `cni/` followed by the name of the network
+configuration.
+Nomad will then schedule the workload on client nodes that have fingerprinted a
+CNI configuration with the given name. For example, to use the configuration
+named `mynet`, you should set the task group's network mode to `cni/mynet`.
+Nodes that have a network configuration defining a network named `mynet` in
+their [`cni_config_dir`][] will be eligible to run the workload.
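+
+For example, a task group that joins the `mynet` network could set the mode in
+its `network` block like this sketch:
+
+```hcl
+job "..." {
+  # ...
+  group "..." {
+    network {
+      # Matches the "name" field of a fingerprinted CNI configuration.
+      mode = "cni/mynet"
+    }
+    # ...
+  }
+}
+```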
+
+## Nomad's `bridge` configuration
+
+Nomad itself uses CNI plugins and configuration as the underlying implementation
+for the `bridge` network mode, using the [loopback][], [bridge][], [firewall][],
+and [portmap][] CNI plugins configured together to create Nomad's bridge
+network.
+
+The following is the configuration template Nomad uses when setting up these
+networks with the template placeholders replaced with the default configuration
+values for [`bridge_network_name`][], [`bridge_network_subnet`][], and an
+internal constant that provides the value for `iptablesAdminChainName`. This is
+a convenient jumping-off point for a worked example.
+
+```json
+{
+ "cniVersion": "0.4.0",
+ "name": "nomad",
+ "plugins": [
+ {
+ "type": "loopback"
+ },
+ {
+ "type": "bridge",
+ "bridge": "nomad",
+ "ipMasq": true,
+ "isGateway": true,
+ "forceAddress": true,
+ "ipam": {
+ "type": "host-local",
+ "ranges": [
+ [
+ {
+ "subnet": "172.26.64.0/20"
+ }
+ ]
+ ],
+ "routes": [
+ { "dst": "0.0.0.0/0" }
+ ]
+ }
+ },
+ {
+ "type": "firewall",
+ "backend": "iptables",
+ "iptablesAdminChainName": "NOMAD-ADMIN"
+ },
+ {
+ "type": "portmap",
+ "capabilities": {"portMappings": true},
+ "snat": true
+ }
+ ]
+}
+```
+
+
+![Nomad bridge network](/img/nomad-bridge-network.png)
+
+For a more thorough understanding of this configuration, consider each CNI
+plugin's configuration in turn.
+
+### loopback
+
+The `loopback` plugin sets the default local interface, `lo0`, created inside
+the bridge network's network namespace to **UP**. This allows workloads running
+inside the namespace to bind to a namespace-specific loopback interface.
+
+### bridge
+
+The `bridge` plugin creates a bridge (virtual switch) named `nomad` that resides
+in the host network namespace. Because this bridge is intended to provide
+network connectivity to allocations, it's configured to be a gateway by setting
+`isGateway` to `true`. This tells the plugin to assign an IP address to the
+bridge interface.
+
+The bridge plugin connects allocations (on the same host) into a bridge (virtual
+switch) that resides in the host network namespace. By default Nomad creates a
+single bridge for each client. Since Nomad's bridge network is designed to
+provide network connectivity to the allocations, it configures the bridge
+interface to be a gateway for outgoing traffic by providing it with an address
+using an `ipam` configuration. The default configuration creates a host-local
+address for the host side of the bridge in the `172.26.64.0/20` subnet at
+`172.26.64.1`. When associating allocations to the bridge, it creates addresses
+for the allocations from that same subnet using the host-local plugin. The
+configuration also specifies a default route for the allocations via the
+host-side bridge address.
+
+### firewall
+
+The firewall plugin creates firewall rules to allow traffic to/from the
+allocation's IP address via the host network. Nomad uses the iptables backend
+for the firewall plugin. This configuration creates two new iptables chains,
+`CNI-FORWARD` and `NOMAD-ADMIN`, in the filter table and adds rules that allow
+the given interface to send/receive traffic.
+
+The firewall creates an admin chain using the name provided in the
+`iptablesAdminChainName` attribute. In this case, it's called `NOMAD-ADMIN`.
+The admin chain is a user-controlled chain for custom rules that run before
+rules managed by the firewall plugin. The firewall plugin does not add, delete,
+or modify rules in the admin chain.
+
+A new chain, `CNI-FORWARD`, is added to the `FORWARD` chain. `CNI-FORWARD` is
+the chain where rules are added when allocations are created and from which
+rules are removed when those allocations stop. The `CNI-FORWARD` chain first
+sends all traffic to the `NOMAD-ADMIN` chain.
+
+You can use the following command to list the iptables rules present in each
+chain.
+
+```shell-session
+$ sudo iptables -L
+```
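+
+For example, assuming you wanted to allow an additional trusted subnet to
+reach allocation addresses, a hypothetical rule inserted into the admin chain
+might look like:
+
+```shell-session
+$ sudo iptables -I NOMAD-ADMIN -s 10.0.0.0/8 -d 172.26.64.0/20 -j ACCEPT
+```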
+
+### portmap
+
+Nomad needs to be able to map specific ports from the host to tasks running in
+the allocation namespace. The `portmap` plugin forwards traffic from one or more
+ports on the host to the allocation using network address translation (NAT)
+rules.
+
+The plugin sets up two sequences of chains and rules:
+
+- One “primary” `DNAT` (destination NAT) sequence to rewrite the destination.
+- One `SNAT` (source NAT) sequence that will masquerade traffic as needed.
+
+You can use the following command to list the iptables rules in the NAT table.
+
+```shell-session
+$ sudo iptables -t nat -L
+```
+
+## Create your own
+
+You can use this template as a basis for your own CNI-based bridge network
+configuration in cases where you need access to an unsupported option in the
+default configuration, like hairpin mode. When making your own bridge network
+based on this template, ensure that you change the `iptablesAdminChainName` to
+a unique value for your configuration.
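+
+For instance, a copy of the template for a custom network could swap in a
+unique admin chain name in the firewall plugin's configuration (the name here
+is illustrative):
+
+```json
+{
+  "type": "firewall",
+  "backend": "iptables",
+  "iptablesAdminChainName": "MYNET-ADMIN"
+}
+```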
+
+[3rd_party_cni]: https://www.cni.dev/docs/#3rd-party-plugins
+[`bridge_network_name`]: /docs/configuration/client#bridge_network_name
+[`bridge_network_subnet`]: /docs/configuration/client#bridge_network_subnet
+[`cni_config_dir`]: /docs/configuration/client#cni_config_dir
+[`cni_path`]: /docs/configuration/client#cni_path
+[`mode`]: /docs/job-specification/network#mode
+[bridge]: https://www.cni.dev/plugins/current/main/bridge/
+[cni_ref]: https://github.com/containernetworking/plugins
+[cni_spec]: https://www.cni.dev/docs/spec/
+[cni_spec_net_config]: https://github.com/containernetworking/cni/blob/main/SPEC.md#configuration-format
+[cni_spec_plugin_config]: https://github.com/containernetworking/cni/blob/main/SPEC.md#plugin-configuration-objects
+[firewall]: https://www.cni.dev/plugins/current/meta/firewall/
+[loopback]: https://github.com/containernetworking/plugins#main-interface-creating
+[nomad_install]: https://developer.hashicorp.com/nomad/tutorials/get-started/get-started-install#post-installation-steps
+[portmap]: https://www.cni.dev/plugins/current/meta/portmap/
diff --git a/website/content/docs/networking/index.mdx b/website/content/docs/networking/index.mdx
new file mode 100644
index 000000000..2a2c02a1e
--- /dev/null
+++ b/website/content/docs/networking/index.mdx
@@ -0,0 +1,331 @@
+---
+layout: docs
+page_title: Networking
+description: |-
+ Learn about Nomad's networking internals and how it compares with other
+ tools.
+---
+
+# Networking
+
+Nomad is a workload orchestrator and so it focuses on the scheduling aspects of
+a deployment, touching areas such as networking as little as possible.
+
+**Networking in Nomad is usually done via _configuration_ instead of
+_infrastructure_**. This means that Nomad provides ways for you to access the
+information you need to connect your workloads instead of running additional
+components behind the scenes, such as DNS servers and load balancers.
+
+This can be confusing at first since it is quite different from what you may
+be used to from other tools. This section explains how networking works in
+Nomad, some of the different patterns and configurations you are likely to find
+and use, and how Nomad differs from other tools in this aspect.
+
+## Allocation networking
+
+The base unit of scheduling in Nomad is an [allocation][], which means that all
+tasks in the same allocation run in the same client and share common resources,
+such as disk and networking. Allocations can request access to network
+resources, such as ports, using the [`network`][jobspec_network] block. At its
+simplest configuration, a `network` block can be defined as:
+
+```hcl
+job "..." {
+ # ...
+ group "..." {
+ network {
+ port "http" {}
+ }
+ # ...
+ }
+}
+```
+
+Nomad reserves a random port on the client between [`min_dynamic_port`][] and
+[`max_dynamic_port`][] that has not been allocated yet and creates a port
+mapping from the host network interface to the allocation.
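+
+Both bounds are client configuration options. As a sketch, a client could
+narrow the default range (20000 to 32000) like this:
+
+```hcl
+client {
+  # Restrict the ports Nomad may assign dynamically.
+  min_dynamic_port = 20000
+  max_dynamic_port = 24000
+}
+```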
+
+
+![Port mapping](/img/networking/port_mapping.png)
+
+The selected port number can be accessed by tasks using the
+[`NOMAD_PORT_<label>`][runtime_environment] environment variable to bind and
+expose the workload at the client's IP address and the given port.
+
+The specific configuration process depends on what you are running, but it is
+usually done using a configuration file rendered from a [`template`][] or
+passed directly via command line arguments:
+
+```hcl
+job "..." {
+ # ...
+ group "..." {
+ network {
+ port "http" {}
+ }
+
+ task "..." {
+ # ...
+ config {
+ args = [
+ "--port=${NOMAD_PORT_http}",
+ ]
+ }
+ }
+ }
+}
+```
+
+It is also possible to request a specific port number, instead of a random one,
+by setting a [`static`][] value for the `port`. **This should only be used by
+specialized workloads**, such as load balancers and system jobs, since static
+ports must be managed manually to avoid scheduling collisions.
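+
+For example, a load balancer that must always be reachable on port 8080 could
+reserve it statically:
+
+```hcl
+network {
+  port "http" {
+    static = 8080
+  }
+}
+```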
+
+With the task listening at one of the client's ports, other processes can
+access it directly using the client's IP and port, but first they need to find
+these values. This process is called [service discovery][].
+
+When using IP and port to connect allocations, it is important to make sure
+your network topology and routing configuration allow the Nomad clients to
+communicate with each other.
+
+## Bridge networking
+
+Linux clients support a network [`mode`][network_mode] called [`bridge`][]. A
+bridge network acts like a virtual network switch allowing processes connected
+to the bridge to reach each other while isolating them from others.
+
+When an allocation uses bridge networking, the Nomad agent creates a bridge
+called `nomad` (or the value set in [`bridge_network_name`][]) using the
+[`bridge` CNI plugin][cni_bridge] if one doesn't exist yet. Before using this
+mode, you must first [install the CNI plugins][cni_install] on your clients.
+By default, a single bridge is created on each Nomad client.
+
+
+![Bridge networking](/img/networking/bridge.png)
+
+Allocations that use the `bridge` network mode run in an isolated network
+namespace and are connected to the bridge. This allows Nomad to map random
+ports from the host to specific port numbers inside the allocation that are
+expected by the tasks.
+
+For example, an HTTP server that listens on port `3000` by default can be
+configured with the following `network` block:
+
+```hcl
+job "..." {
+ # ...
+ group "..." {
+ network {
+ mode = "bridge"
+
+ port "http" {
+ to = 3000
+ }
+ }
+ # ...
+ }
+}
+```
+
+To allow communication between allocations in different clients, Nomad creates
+an `iptables` rule to forward requests from the host network interface to the
+bridge. This results in three different network access scopes:
+
+- Tasks that bind to the loopback interface (`localhost` or `127.0.0.1`) are
+ accessible only from within the allocation.
+
+- Tasks that bind to the bridge (or other general addresses, such as `0.0.0.0`)
+ without `port` forwarding are only accessible from within the same client.
+
+- Tasks that bind to the bridge (or other general addresses, such as `0.0.0.0`)
+ with `port` forwarding are accessible from external sources.
+
+~> **Warning:** To prevent any type of external access when using `bridge`
+  network mode, make sure to bind your workloads to the loopback interface
+ only.
+
+Bridge networking is at the core of [service mesh][] and a requirement when
+using [Consul Service Mesh][consul_service_mesh].
+
+### Bridge networking with Docker
+
+The Docker daemon manages its own network configuration and creates its own
+[bridge network][docker_bridge], network namespaces, and [`iptables`
+rules][docker_iptables]. Tasks using the `docker` task driver connect to the
+Docker bridge instead of using the one created by Nomad and, by default, each
+container runs in its own Docker managed network namespace.
+
+When using `bridge` network mode, Nomad creates a placeholder container using
+the image defined in [`infra_image`][] to initialize a Docker network namespace
+that is shared by all tasks in the allocation to allow them to communicate with
+each other.
+
+The Docker task driver has its own task-level
+[`network_mode`][docker_network_mode] configuration. Its default value depends
+on the group-level [`network.mode`][network_mode] configuration.
+
+~> **Warning:** The task-level `network_mode` may conflict with the group-level
+ `network.mode` configuration and generate unexpected results. If you set the
+ group `network.mode = "bridge"` you should not set the Docker config
+ `network_mode`.
+
+```hcl
+group "..." {
+ network {
+ mode = "bridge"
+ }
+
+ task "..." {
+ driver = "docker"
+
+ config {
+ # This conflicts with the group-level network.mode configuration and
+ # should not be used.
+ network_mode = "bridge"
+ # ...
+ }
+ }
+}
+```
+
+The diagram below illustrates what happens when a Docker task is configured
+incorrectly.
+
+
+![Docker bridge networking](/img/networking/docker_bridge.png)
+
+The tasks in the rightmost allocation are not able to communicate with each
+other using their loopback interface because they were placed in different
+network namespaces.
+
+Since the group `network.mode` is `bridge`, Nomad creates the pause container
+to establish a shared network namespace for all tasks, but setting the
+task-level `network_mode` to `bridge` places the task in a different namespace.
+This prevents, for example, a task from communicating with its sidecar proxy in
+a [service mesh][] deployment.
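+
+The fix is simply to omit the task-level `network_mode` so the task joins the
+namespace Nomad created:
+
+```hcl
+group "..." {
+  network {
+    mode = "bridge"
+  }
+
+  task "..." {
+    driver = "docker"
+
+    config {
+      # No network_mode set: the task joins the allocation's shared
+      # network namespace.
+      # ...
+    }
+  }
+}
+```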
+
+Refer to the [`network_mode`][docker_network_mode] documentation and the
+[Networking][docker_networking] section for more information.
+
+-> **Note:** Docker Desktop in non-Linux environments runs a local virtual
+ machine, adding an extra layer of indirection. Refer to the
+ [FAQ][faq_docker] for more details.
+
+## Comparing with other tools
+
+### Kubernetes and Docker Compose
+
+Networking in Kubernetes and Docker Compose works differently than in Nomad. To
+access a container you use a fully qualified domain name such as `db` in Docker
+Compose or `db.prod.svc.cluster.local` in Kubernetes. This process relies on
+additional infrastructure to resolve the hostname and distribute the requests
+across multiple containers.
+
+Docker Compose allows you to run and manage multiple containers using units
+called services.
+
+```yaml
+version: "3.9"
+services:
+ web:
+ build: .
+ ports:
+ - "8000:8000"
+ db:
+ image: postgres
+ ports:
+ - "8001:5432"
+```
+
+To access a service from another container you can reference the service name
+directly, for example using `postgres://db:5432`. In order to enable this
+pattern, Docker Compose includes an [internal DNS service][docker_dns] and a
+load balancer that is transparent to the user. When running in Swarm mode,
+Docker Compose also requires an overlay network to route requests across hosts.
+
+
+Kubernetes provides the [`Service`][k8s_service] abstraction that can be used
+to declare how a set of Pods are accessed.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ selector:
+ app.kubernetes.io/name: MyApp
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+```
+
+To access the Service you use an FQDN such as
+`my-service.prod.svc.cluster.local`. This name is resolved by the [DNS
+service][k8s_dns] which is an add-on that runs in all nodes. Along with this
+service, each node also runs a [`kube-proxy`][k8s_kubeproxy] instance to
+distribute requests to all Pods matched by the Service.
+
+You can use the same FQDN networking style with Nomad using [Consul's DNS
+interface][consul_dns] and configuring your clients with [DNS
+forwarding][consul_dns_forwarding], and deploying a [load
+balancer][nomad_load_balancer].
+
+Another key difference from Nomad is that in Kubernetes and Docker Compose each
+container has its own IP address, requiring a virtual network to map physical
+IP addresses to virtual ones. In the case of Docker Compose in Swarm mode, an
+[`overlay`][docker_overlay] network is also required to enable traffic across
+multiple hosts. This allows multiple containers running the same service to
+listen on the same port number.
+
+In Nomad, allocations use the IP address of the client they are running on and
+are assigned random port numbers, so Nomad service discovery with DNS uses
+[`SRV` records][dns_srv] instead of `A` or `AAAA` records.
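+
+For example, with Consul DNS forwarding in place, an `SRV` query returns both
+the client address and the dynamically assigned port (the service name here is
+illustrative, and 8600 is Consul's default DNS port):
+
+```shell-session
+$ dig @127.0.0.1 -p 8600 database.service.consul SRV
+```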
+
+## Next topics
+
+- [Service Discovery][service discovery]
+- [Service Mesh][service mesh]
+- [Container Network Interface][cni]
+
+## Additional resources
+
+- [Understanding Networking in Nomad - Karan Sharma](https://mrkaran.dev/posts/nomad-networking-explained/)
+- [Understanding Nomad Networking Patterns - Luiz Aoqui, HashiTalks: Canada 2021](https://www.youtube.com/watch?v=wTA5HxB_uuk)
+
+[`bridge_network_name`]: /docs/configuration/client#bridge_network_name
+[`bridge`]: /docs/job-specification/network#bridge
+[`infra_image`]: /docs/drivers/docker#infra_image
+[`max_dynamic_port`]: /docs/configuration/client#max_dynamic_port
+[`min_dynamic_port`]: /docs/configuration/client#min_dynamic_port
+[`static`]: /docs/job-specification/network#static
+[`template`]: /docs/job-specification/template#template-examples
+[allocation]: /docs/concepts/architecture#allocation
+[cni]: /docs/networking/cni
+[cni_bridge]: https://www.cni.dev/plugins/current/main/bridge/
+[cni_install]: https://developer.hashicorp.com/nomad/tutorials/get-started/get-started-install#post-installation-steps
+[consul_dns]: https://developer.hashicorp.com/consul/docs/discovery/dns
+[consul_dns_forwarding]: https://developer.hashicorp.com/consul/tutorials/networking/dns-forwarding
+[consul_service_mesh]: /docs/integrations/consul-connect
+[dns_srv]: https://en.wikipedia.org/wiki/SRV_record
+[docker_bridge]: https://docs.docker.com/network/bridge/
+[docker_compose]: https://docs.docker.com/compose/
+[docker_dns]: https://docs.docker.com/config/containers/container-networking/#dns-services
+[docker_iptables]: https://docs.docker.com/network/iptables/
+[docker_network_mode]: /drivers/docker#network_mode
+[docker_networking]: /docs/drivers/docker#networking
+[docker_overlay]: https://docs.docker.com/network/overlay/
+[docker_swarm]: https://docs.docker.com/engine/swarm/
+[faq_docker]: /docs/faq#q-how-to-connect-to-my-host-network-when-using-docker-desktop-windows-and-macos
+[jobspec_network]: /docs/job-specification/network
+[k8s_dns]: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
+[k8s_ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
+[k8s_kubeproxy]: https://kubernetes.io/docs/concepts/overview/components/#kube-proxy
+[k8s_service]: https://kubernetes.io/docs/concepts/services-networking/service/
+[network_mode]: /docs/job-specification/network#mode
+[nomad_load_balancer]: https://developer.hashicorp.com/nomad/tutorials/load-balancing
+[runtime_environment]: /docs/runtime/environment#network-related-variables
+[service discovery]: /docs/networking/service-discovery
+[service mesh]: /docs/networking/service-mesh
diff --git a/website/content/docs/networking/service-discovery.mdx b/website/content/docs/networking/service-discovery.mdx
new file mode 100644
index 000000000..a8b95fdf6
--- /dev/null
+++ b/website/content/docs/networking/service-discovery.mdx
@@ -0,0 +1,144 @@
+---
+layout: docs
+page_title: Service Discovery
+description: |-
+ Learn about how service discovery works in Nomad to connect workloads.
+---
+
+# Service discovery
+
+Service discovery is the process of retrieving the low-level information
+necessary to access a service, such as its IP and port, via a high-level
+identifier, such as a human-friendly service name.
+
+The service information is stored in a service catalog, which provides
+interfaces that can be used to query data. In Nomad there are two service
+catalogs:
+
+- The **native service discovery** catalog is embedded in Nomad and requires no
+ additional infrastructure.
+
+- The [**Consul service discovery catalog**][consul_sd] requires access to a
+ Consul cluster, but it provides additional features, including a [DNS
+ interface][consul_dns] and [service mesh][] capabilities.
+
+Services are registered using the [`service`][] block, with the [`provider`][]
+parameter defining which catalog to use. Nomad stores the IP and port of all
+allocations associated with the service in the catalog and keeps these entries
+updated as you schedule new workloads or deploy new versions of your
+applications.
+
+```hcl
+job "..." {
+ # ...
+ group "..." {
+ service {
+ name = "database"
+ port = "db"
+ provider = "nomad"
+ # ...
+ }
+ # ...
+ }
+}
+```
+
+To access services, other allocations can query the catalog using
+[`template`][] blocks with the [`service`][ct_service_fn] function to query the
+Consul catalog or the [`nomadService`][ct_nomad_service_fn] function when using
+Nomad native service discovery. The `template` can then be used as a
+configuration file or have its content loaded as environment variables to
+configure connection information in applications.
+
+```hcl
+job "..." {
+  # ...
+  group "..." {
+    task "..." {
+      template {
+        # This template body is an illustrative sketch: it queries the Nomad
+        # native catalog for the "database" service registered above.
+        data = <<EOF
+{{ range nomadService "database" }}
+DB_ADDR={{ .Address }}:{{ .Port }}
+{{ end }}
+EOF
+        destination = "local/env.txt"
+        env         = true
+      }
+    }
+    # ...
+  }
+}
+```
+
+-> **Note**: Services are registered with either [`tags`][] or
+  [`canary_tags`][]. In order to share values they must be set in both fields.
+
+[`canary_tags`]: /docs/job-specification/service#canary_tags
+[`check`]: /docs/job-specification/check
+[`provider`]: /docs/job-specification/service#provider
+[`service`]: /docs/job-specification/service
+[`tags`]: /docs/job-specification/service#tags
+[`template`]: /docs/job-specification/template#template-examples
+[consul_dns]: https://developer.hashicorp.com/consul/docs/discovery/dns
+[consul_sd]: https://developer.hashicorp.com/consul/docs/concepts/service-discovery
+[ct_nomad_service_fn]: https://github.com/hashicorp/consul-template/blob/main/docs/templating-language.md#nomadservice
+[ct_service_fn]: https://github.com/hashicorp/consul-template/blob/main/docs/templating-language.md#service
+[jobspec_update_blue_green]: /docs/job-specification/update#blue-green-upgrades
+[jobspec_update_canary]: /docs/job-specification/update#canary-upgrades
+[learn_lb]: /nomad/tutorials/load-balancing
+[service mesh]: /docs/networking/service-mesh
diff --git a/website/content/docs/networking/service-mesh.mdx b/website/content/docs/networking/service-mesh.mdx
new file mode 100644
index 000000000..c09eee171
--- /dev/null
+++ b/website/content/docs/networking/service-mesh.mdx
@@ -0,0 +1,162 @@
+---
+layout: docs
+page_title: Service Mesh
+description: |-
+ Learn about how service mesh works in Nomad to securely connect and isolate
+ your workloads.
+---
+
+# Service mesh
+
+Service mesh is a networking pattern that deploys and configures
+infrastructure to directly connect workloads. One of the most common pieces of
+infrastructure deployed is the sidecar proxy. These proxies usually run
+alongside the main workload in an isolated network namespace such that all
+network traffic flows through the proxy.
+
+The proxies are often referred to as the **data plane** since they are
+responsible for _moving data_ while the components that configure them are part
+of the **control plane** because they are responsible for controlling the _flow
+of data_.
+
+By funneling traffic through a common layer of infrastructure the control plane
+is able to centralize and automatically apply configuration to all proxies to
+enable features such as automated traffic encryption, fine-grained routing, and
+service-based access control permissions throughout the entire mesh.
+
+## Consul Service Mesh
+
+Nomad has native integration with Consul to provide service mesh capabilities.
+The [`connect`][] block is the entrypoint for all service mesh configuration.
+Nomad automatically deploys a sidecar proxy task to all allocations that have a
+[`sidecar_service`][] block.
+
+This proxy task is responsible for exposing the service to the mesh and can
+also be used to access other services from within the allocation. These
+external services are called upstreams and are declared using the
+[`upstreams`][] block.
+
+The allocation's network `mode` must be set to `bridge` so workloads run in an
+isolated network namespace, with all external traffic flowing through the
+sidecar proxy.
+
+~> **Warning:** To fully isolate your workloads, make sure to bind them only to
+  the `loopback` interface.
+
+The job below exposes a service called `api` to the mesh:
+
+```hcl
+job "..." {
+ # ...
+ group "..." {
+ network {
+ mode = "bridge"
+
+ port "http" {}
+ }
+
+ service {
+ name = "api"
+ port = "http"
+
+ connect {
+ sidecar_service {}
+ }
+ }
+ # ...
+ }
+}
+```
+
+To access this service, a job can be configured as follows:
+
+```hcl
+job "..." {
+ # ...
+ group "..." {
+ network {
+ mode = "bridge"
+ # ...
+ }
+
+ service {
+ # ...
+ connect {
+ sidecar_service {
+ proxy {
+ upstreams {
+ destination_name = "api"
+ local_bind_port = 8080
+ }
+ }
+ }
+ }
+ }
+ }
+}
+```
+
+A request starting from a task within an allocation of this job follows the
+path:
+
+1. The task makes a request to `localhost:8080` which is the port where the
+ proxy binds the `api` service as an upstream.
+
+2. The proxy, configured by Consul, forwards the request to a proxy running
+ within an allocation that is part of the `api` service.
+
+3. The proxy for the `api` service forwards the request to the local port in
+ the allocation.
+
+4. The response is sent back to the first task following the same path in
+ reverse.
+
+The IP and port to use to connect to an upstream can also be read from the
+[`NOMAD_UPSTREAM_ADDR_<service>`][runtime_network] environment variable.
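+
+For example, a task could pass the upstream address from the earlier job to
+its command line (the flag name here is hypothetical):
+
+```hcl
+task "..." {
+  # ...
+  config {
+    args = [
+      "--api-addr=${NOMAD_UPSTREAM_ADDR_api}",
+    ]
+  }
+}
+```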
+
+### Envoy proxy
+
+Consul Service Mesh uses [Envoy][] as its proxy. Nomad calls Consul's [`consul
+connect envoy -bootstrap`][consul_cli_envoy] CLI command to generate the
+initial proxy configuration.
+
+Nomad injects a prestart sidecar Docker task to run the Envoy proxy. This task
+can be customized using the [`sidecar_task`][] block.
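+
+As a sketch, the resources for the injected Envoy task could be adjusted like
+this:
+
+```hcl
+connect {
+  sidecar_service {}
+
+  sidecar_task {
+    resources {
+      cpu    = 100
+      memory = 128
+    }
+  }
+}
+```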
+
+### Gateways
+
+Since the mesh defines a closed boundary that only selected services can
+participate in, there are specialized proxies called gateways that can be used
+for mesh-wide connectivity. Nomad can deploy these gateways using the
+[`gateway`][] block. Nomad injects an Envoy proxy task to any `group` with a
+`gateway` service.
+
+The types of gateways provided by Consul Service Mesh are:
+
+- **Mesh gateways** allow communication between different service meshes and
+ are deployed using the [`mesh`][] parameter.
+
+- **Ingress gateways** allow services outside the mesh to connect to services
+ inside the mesh and are deployed using the [`ingress`][] parameter.
+
+- **Egress gateways** allow services inside the mesh to communicate with
+ services outside the mesh and are deployed using the [`terminating`][]
+ parameter.
+
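+As an illustrative sketch, an ingress gateway that exposes the `api` service
+from the earlier examples on port 8080 might be declared like this:
+
+```hcl
+job "..." {
+  # ...
+  group "..." {
+    network {
+      mode = "bridge"
+
+      port "inbound" {
+        static = 8080
+        to     = 8080
+      }
+    }
+
+    service {
+      name = "api-ingress"
+      port = "8080"
+
+      connect {
+        gateway {
+          ingress {
+            listener {
+              port     = 8080
+              protocol = "tcp"
+
+              service {
+                name = "api"
+              }
+            }
+          }
+        }
+      }
+    }
+  }
+}
+```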
+
+## Additional Resources
+
+- [Consul Service Mesh documentation](https://developer.hashicorp.com/consul/docs/connect)
+- [Nomad Consul Service Mesh integration](/docs/integrations/consul-connect)
+
+[Envoy]: https://www.envoyproxy.io/
+[`connect`]: /docs/job-specification/connect
+[`gateway`]: /docs/job-specification/gateway
+[`ingress`]: /docs/job-specification/gateway#ingress
+[`mesh`]: /docs/job-specification/gateway#mesh
+[`proxy`]: /docs/job-specification/proxy
+[`sidecar_service`]: /docs/job-specification/sidecar_service
+[`sidecar_task`]: /docs/job-specification/sidecar_task
+[`terminating`]: /docs/job-specification/gateway#terminating
+[`upstreams`]: /docs/job-specification/upstreams
+[consul_cli_envoy]: https://developer.hashicorp.com/consul/commands/connect/envoy
+[runtime_network]: /docs/runtime/environment#network-related-variables
diff --git a/website/content/docs/runtime/environment.mdx b/website/content/docs/runtime/environment.mdx
index 26fc38fa9..349d9157e 100644
--- a/website/content/docs/runtime/environment.mdx
+++ b/website/content/docs/runtime/environment.mdx
@@ -15,10 +15,6 @@ environment variables.
@include 'envvars.mdx'
-~> Port labels and task names will have any non-alphanumeric or underscore
-characters in their names replaced by underscores `_` when they're used in
-environment variable names such as `NOMAD_ADDR__`.
-
## Task Identifiers
Nomad will pass both the allocation ID and name, the deployment ID that created
diff --git a/website/content/partials/envvars.mdx b/website/content/partials/envvars.mdx
index 1bb5613ec..64789d7c6 100644
--- a/website/content/partials/envvars.mdx
+++ b/website/content/partials/envvars.mdx
@@ -1,47 +1,59 @@
-| Variable | Description |
-| ------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `NOMAD_ALLOC_DIR` | The path to the shared `alloc/` directory. See [here](/docs/runtime/environment#task-directories) for more information. |
-| `NOMAD_TASK_DIR` | The path to the task `local/` directory. See [here](/docs/runtime/environment#task-directories) for more information. |
-| `NOMAD_SECRETS_DIR` | Path to the task's secrets directory. See [here](/docs/runtime/environment#task-directories) for more information. |
-| `NOMAD_MEMORY_LIMIT` | Memory limit in MB for the task |
-| `NOMAD_MEMORY_MAX_LIMIT` | The maximum memory limit the task may use if client has excess memory capacity, in MB. Omitted if task isn't configured with memory oversubscription. |
-| `NOMAD_CPU_LIMIT` | CPU limit in MHz for the task |
-| `NOMAD_CPU_CORES` | The specific CPU cores reserved for the task in cpuset list notation. Omitted if the task does not request cpu cores. E.g. `0-2,7,12-14` |
-| `NOMAD_ALLOC_ID` | Allocation ID of the task |
-| `NOMAD_SHORT_ALLOC_ID` | The first 8 characters of the allocation ID of the task |
-| `NOMAD_ALLOC_NAME` | Allocation name of the task |
-| `NOMAD_ALLOC_INDEX` | Allocation index; useful to distinguish instances of task groups. From 0 to (count - 1). The index is unique within a given version of a job, but canaries or failed tasks in a deployment may reuse the index. |
-| `NOMAD_TASK_NAME` | Task's name |
-| `NOMAD_GROUP_NAME` | Group's name |
-| `NOMAD_JOB_ID` | Job's ID, which is equal to the Job name when submitted through CLI but can be different when using the API |
-| `NOMAD_JOB_NAME` | Job's name |
-| `NOMAD_JOB_PARENT_ID` | ID of the Job's parent if it has one |
-| `NOMAD_DC` | Datacenter in which the allocation is running |
-| `NOMAD_PARENT_CGROUP` | The parent cgroup used to contain task cgroups (Linux only) |
-| `NOMAD_NAMESPACE` | Namespace in which the allocation is running |
-| `NOMAD_REGION` | Region in which the allocation is running |
-| `NOMAD_META_` | The metadata value given by `key` on the task's metadata. Note that this is different from [`${meta.}`](/docs/runtime/interpolation#node-variables-) which are keys in the node's metadata. |
-| `VAULT_TOKEN` | The task's Vault token. See [Vault Integration](/docs/integrations/vault-integration) for more details |
-| **Network-related Variables** |
-| `NOMAD_IP_` | Host IP for the given port `label`. See [here for more](/docs/job-specification/network) information. |
-| `NOMAD_PORT_` | Port for the given port `label`. Driver-specified port when a port map is used, otherwise the host's static or dynamic port allocation. Services should bind to this port. See [here for more](/docs/job-specification/network) information. |
-| `NOMAD_ADDR_` | Host `IP:Port` pair for the given port `label`. |
-| `NOMAD_HOST_PORT_` | Port on the host for the port `label`. See [here](/docs/job-specification/network#mapped-ports) for more information. |
-| `NOMAD_IP__` | **Deprecated**. Host IP for the given port `label` and `task` for tasks in the same task group. Only available when setting ports via the task resource network port mapping. |
-| `NOMAD_PORT__` | **Deprecated**. Port for the given port `label` and `task` for tasks in the same task group. Driver-specified port when a port map is used, otherwise the host's static or dynamic port allocation. Services should bind to this port. Only available when setting ports via the task resource network port mapping. |
-| `NOMAD_ADDR__` | **Deprecated**. Host `IP:Port` pair for the given port `label` and `task` for tasks in the same task group. Only available when setting ports via the task resource network port mapping. |
-| `NOMAD_HOST_PORT__` | **Deprecated**. Port on the host for the port `label` and `task` for tasks in the same task group. Only available when setting ports via the task resource network port mapping. |
-| `NOMAD_UPSTREAM_IP_` | IP for the given `service` when defined as a Consul Connect [upstream](/docs/job-specification/upstreams). |
-| `NOMAD_UPSTREAM_PORT_` | Port for the given `service` when defined as a Consul Connect [upstream](/docs/job-specification/upstreams). |
-| `NOMAD_UPSTREAM_ADDR_` | Host `IP:Port` for the given `service` when defined as a Consul Connect [upstream](/docs/job-specification/upstreams). |
-| `NOMAD_ENVOY_ADMIN_ADDR_` | Local address `127.0.0.2:Port` for the admin port of the envoy sidecar for the given `service` when defined as a Consul Connect enabled service. Envoy runs inside the group network namespace unless configured for host networking. |
-| `NOMAD_ENVOY_READY_ADDR_` | Local address `127.0.0.1:Port` for the ready port of the envoy sidecar for the given `service` when defined as a Consul Connect enabled service. Envoy runs inside the group network namespace unless configured for host networking. |
-| **Consul-related Variables** (only set for connect native tasks) |
-| `CONSUL_HTTP_ADDR` | Specifies the address to the local Consul agent. Will be automatically set to a unix domain socket in bridge networking mode, or a tcp address in host networking mode. |
-| `CONSUL_HTTP_TOKEN` | Specifies the Consul ACL token used to authorize with Consul. Will be automatically set to a generated Connect service identity token specific to the service instance if Consul ACLs are enabled. |
-| `CONSUL_HTTP_SSL` | Specifies whether HTTPS should be used when communicating with consul. Will be automatically set to true if Nomad is configured to communicate with Consul using TLS. |
-| `CONSUL_HTTP_SSL_VERIFY` | Specifies whether the HTTPS connection with Consul should be mutually verified. Will be automatically set to true if Nomad is configured to verify TLS certificates. |
-| `CONSUL_CACERT` | Specifies the path to the CA certificate used for Consul communication. Will be automatically set if Nomad is configured with the `consul.share_ssl` option. |
-| `CONSUL_CLIENT_CERT` | Specifies the path to the Client certificate used for Consul communication. Will be automatically set if Nomad is configured with the `consul.share_ssl` option. |
-| `CONSUL_CLIENT_KEY` | Specifies the path to the CLient Key certificate used for Consul communication. Will be automatically set if Nomad is configured with the `consul.share_ssl` option. |
-| `CONSUL_TLS_SERVER_NAME` | Specifies the server name to use as the SNI host for Consul communication. Will be automatically set if Consul is configured to use TLS and the task is in a group using bridge networking mode. |
+### Job-related variables
+
+| Variable | Description |
+| ------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `NOMAD_ALLOC_DIR` | The path to the shared `alloc/` directory. See [here](/docs/runtime/environment#task-directories) for more information. |
+| `NOMAD_TASK_DIR` | The path to the task `local/` directory. See [here](/docs/runtime/environment#task-directories) for more information. |
+| `NOMAD_SECRETS_DIR` | Path to the task's secrets directory. See [here](/docs/runtime/environment#task-directories) for more information. |
+| `NOMAD_MEMORY_LIMIT` | Memory limit in MB for the task |
+| `NOMAD_MEMORY_MAX_LIMIT` | The maximum memory limit the task may use if client has excess memory capacity, in MB. Omitted if task isn't configured with memory oversubscription. |
+| `NOMAD_CPU_LIMIT` | CPU limit in MHz for the task |
+| `NOMAD_CPU_CORES` | The specific CPU cores reserved for the task in cpuset list notation. Omitted if the task does not request cpu cores. E.g. `0-2,7,12-14` |
+| `NOMAD_ALLOC_ID` | Allocation ID of the task |
+| `NOMAD_SHORT_ALLOC_ID` | The first 8 characters of the allocation ID of the task |
+| `NOMAD_ALLOC_NAME` | Allocation name of the task |
+| `NOMAD_ALLOC_INDEX` | Allocation index; useful to distinguish instances of task groups. From 0 to (count - 1). The index is unique within a given version of a job, but canaries or failed tasks in a deployment may reuse the index. |
+| `NOMAD_TASK_NAME` | Task's name |
+| `NOMAD_GROUP_NAME` | Group's name |
+| `NOMAD_JOB_ID` | Job's ID, which is equal to the Job name when submitted through CLI but can be different when using the API |
+| `NOMAD_JOB_NAME` | Job's name |
+| `NOMAD_JOB_PARENT_ID` | ID of the Job's parent if it has one |
+| `NOMAD_DC` | Datacenter in which the allocation is running |
+| `NOMAD_PARENT_CGROUP` | The parent cgroup used to contain task cgroups (Linux only) |
+| `NOMAD_NAMESPACE` | Namespace in which the allocation is running |
+| `NOMAD_REGION` | Region in which the allocation is running |
+| `NOMAD_META_<key>`       | The metadata value given by `key` on the task's metadata. Note that this is different from [`${meta.<key>}`](/docs/runtime/interpolation#node-variables-) which are keys in the node's metadata. |
+| `VAULT_TOKEN` | The task's Vault token. See [Vault Integration](/docs/integrations/vault-integration) for more details |
+
+### Network-related Variables
+
+| Variable | Description |
+| ---------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `NOMAD_IP_<label>`                  | Host IP for the given port `label`. See [here for more](/docs/job-specification/network) information. |
+| `NOMAD_PORT_<label>`                | Port for the given port `label`. Driver-specified port when a port map is used, otherwise the host's static or dynamic port allocation. Services should bind to this port. See [here for more](/docs/job-specification/network) information. |
+| `NOMAD_ADDR_<label>`                | Host `IP:Port` pair for the given port `label`. |
+| `NOMAD_HOST_PORT_<label>`           | Port on the host for the port `label`. See [here](/docs/job-specification/network#mapped-ports) for more information. |
+| `NOMAD_UPSTREAM_IP_<service>`       | IP for the given `service` when defined as a Consul Connect [upstream](/docs/job-specification/upstreams). |
+| `NOMAD_UPSTREAM_PORT_<service>`     | Port for the given `service` when defined as a Consul Connect [upstream](/docs/job-specification/upstreams). |
+| `NOMAD_UPSTREAM_ADDR_<service>`     | Host `IP:Port` for the given `service` when defined as a Consul Connect [upstream](/docs/job-specification/upstreams). |
+| `NOMAD_ENVOY_ADMIN_ADDR_<service>`  | Local address `127.0.0.2:Port` for the admin port of the Envoy sidecar for the given `service` when defined as a Consul Connect enabled service. Envoy runs inside the group network namespace unless configured for host networking. |
+| `NOMAD_ENVOY_READY_ADDR_<service>`  | Local address `127.0.0.1:Port` for the ready port of the Envoy sidecar for the given `service` when defined as a Consul Connect enabled service. Envoy runs inside the group network namespace unless configured for host networking. |
+
+~> **Note:** Port labels and task names will have any characters that are not
+  alphanumeric or underscores replaced with underscores (`_`) when they're used
+  in environment variable names such as `NOMAD_ADDR_<task>_<label>`. For
+  example, a port labeled `web-ui` is read from `NOMAD_PORT_web_ui`.
+
+### Consul-related Variables
+
+These variables are only set for Connect native tasks.
+
+| Variable | Description |
+| ------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `CONSUL_HTTP_ADDR` | Specifies the address to the local Consul agent. Will be automatically set to a unix domain socket in bridge networking mode, or a tcp address in host networking mode. |
+| `CONSUL_HTTP_TOKEN` | Specifies the Consul ACL token used to authorize with Consul. Will be automatically set to a generated Connect service identity token specific to the service instance if Consul ACLs are enabled. |
+| `CONSUL_HTTP_SSL` | Specifies whether HTTPS should be used when communicating with consul. Will be automatically set to true if Nomad is configured to communicate with Consul using TLS. |
+| `CONSUL_HTTP_SSL_VERIFY` | Specifies whether the HTTPS connection with Consul should be mutually verified. Will be automatically set to true if Nomad is configured to verify TLS certificates. |
+| `CONSUL_CACERT` | Specifies the path to the CA certificate used for Consul communication. Will be automatically set if Nomad is configured with the `consul.share_ssl` option. |
+| `CONSUL_CLIENT_CERT` | Specifies the path to the Client certificate used for Consul communication. Will be automatically set if Nomad is configured with the `consul.share_ssl` option. |
+| `CONSUL_CLIENT_KEY`      | Specifies the path to the client key used for Consul communication. Will be automatically set if Nomad is configured with the `consul.share_ssl` option. |
+| `CONSUL_TLS_SERVER_NAME` | Specifies the server name to use as the SNI host for Consul communication. Will be automatically set if Consul is configured to use TLS and the task is in a group using bridge networking mode. |
diff --git a/website/data/docs-nav-data.json b/website/data/docs-nav-data.json
index 2853fd0c9..39e471368 100644
--- a/website/data/docs-nav-data.json
+++ b/website/data/docs-nav-data.json
@@ -107,6 +107,10 @@
{
"title": "Storage",
"path": "concepts/plugins/csi"
+ },
+ {
+ "title": "Networking",
+ "path": "concepts/plugins/cni"
}
]
},
@@ -1845,6 +1849,27 @@
}
]
},
+ {
+ "title": "Networking",
+ "routes": [
+ {
+ "title": "Overview",
+ "path": "networking"
+ },
+ {
+ "title": "Service Discovery",
+ "path": "networking/service-discovery"
+ },
+ {
+ "title": "Service Mesh",
+ "path": "networking/service-mesh"
+ },
+ {
+ "title": "CNI",
+ "path": "networking/cni"
+ }
+ ]
+ },
{
"title": "Autoscaling",
"routes": [
diff --git a/website/public/img/networking/bridge.png b/website/public/img/networking/bridge.png
new file mode 100644
index 000000000..e2b3951c5
Binary files /dev/null and b/website/public/img/networking/bridge.png differ
diff --git a/website/public/img/networking/docker_bridge.png b/website/public/img/networking/docker_bridge.png
new file mode 100644
index 000000000..5cbb521f0
Binary files /dev/null and b/website/public/img/networking/docker_bridge.png differ
diff --git a/website/public/img/networking/port_mapping.png b/website/public/img/networking/port_mapping.png
new file mode 100644
index 000000000..ccd700c1e
Binary files /dev/null and b/website/public/img/networking/port_mapping.png differ
diff --git a/website/public/img/nomad-bridge-network.png b/website/public/img/nomad-bridge-network.png
new file mode 100644
index 000000000..45bc1dd87
Binary files /dev/null and b/website/public/img/nomad-bridge-network.png differ