Separate cluster formation into separate documentation pages

This commit is contained in:
Seth Vargo
2016-10-08 18:57:51 +08:00
parent 910deabe87
commit 2e8dc2135c
6 changed files with 498 additions and 468 deletions

View File

@@ -0,0 +1,116 @@
---
layout: "docs"
page_title: "Automatically Bootstrapping a Nomad Cluster"
sidebar_current: "docs-cluster-automatic"
description: |-
Learn how to automatically bootstrap a Nomad cluster using Consul. By having
a Consul agent installed on each host, Nomad can automatically discover other
clients and servers to bootstrap the cluster without operator involvement.
---
# Automatic Bootstrapping
To automatically bootstrap a Nomad cluster, we must leverage another HashiCorp
open source tool, [Consul](https://www.consul.io/). Bootstrapping Nomad is
easiest against an existing Consul cluster. The Nomad servers and clients
will become informed of each other's existence when the Consul agent is
installed and configured on each host. As an added benefit, integrating Consul
with Nomad provides service and health check registration for applications which
later run under Nomad.
Consul models infrastructures as datacenters and multiple Consul datacenters can
be connected over the WAN so that clients can discover nodes in other
datacenters. Since Nomad regions can encapsulate many datacenters, we recommend
running a Consul cluster in every Nomad datacenter and connecting them over the
WAN. Please refer to the Consul guide for both
[bootstrapping](https://www.consul.io/docs/guides/bootstrapping.html) a single
datacenter and [connecting multiple Consul clusters over the
WAN](https://www.consul.io/docs/guides/datacenters.html).
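For reference, a minimal sketch of how each per-datacenter Consul server might be started and federated over the WAN; the datacenter name, data directory, and addresses below are placeholders, not part of this guide:

```shell
# Hypothetical example: a Consul server in datacenter "dc1" that joins a server
# in another Consul datacenter over the WAN. Replace the placeholders to match
# your environment.
$ consul agent -server -bootstrap-expect=3 -datacenter=dc1 \
    -data-dir=/opt/consul \
    -retry-join-wan=consul.dc2.example.com
```

See the linked Consul guides for the full bootstrapping and WAN federation procedures.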
If a Consul agent is installed on the host prior to Nomad starting, the Nomad
agent will register with Consul and discover other nodes.
For servers, we must inform the cluster how many servers we expect to have. This
is required to form the initial quorum, since Nomad is unaware of how many peers
to expect. For example, to form a region with three Nomad servers, you would use
the following Nomad configuration file:
```hcl
# /etc/nomad.d/server.hcl
server {
enabled = true
bootstrap_expect = 3
}
```
This configuration is saved to disk, and then the agent is started:
```shell
$ nomad agent -config=/etc/nomad.d/server.hcl
```
A similar configuration is available for Nomad clients:
```hcl
# /etc/nomad.d/client.hcl
datacenter = "dc1"
client {
enabled = true
}
```
The agent is started in a similar manner:
```shell
$ nomad agent -config=/etc/nomad.d/client.hcl
```
As you can see, the above configurations include no IP or DNS addresses for the
clients and servers to contact one another. This is because Nomad detected the
existence of Consul and used its service discovery to form the cluster.
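Assuming the agents started cleanly, one quick way to sanity-check the result is with Nomad's status commands (a sketch; the exact output varies by version):

```shell
# List the servers that discovered each other through Consul and formed quorum.
$ nomad server-members

# List the clients that registered with those servers.
$ nomad node-status
```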
## Internals
~> This section discusses the internals of the Consul and Nomad integration at a
very high level. Reading it is only recommended for those curious about the
implementation.
As discussed in the previous section, Nomad merges multiple configuration files
together, so the `-config` flag may be specified more than once:
```shell
$ nomad agent -config=base.hcl -config=server.hcl
```
In addition to merging configuration on the command line, Nomad also maintains
its own internal configurations (called "default configs") which include sane
base defaults. One of those default configurations includes a "consul" block,
which specifies sane defaults for connecting to and integrating with Consul. In
essence, this configuration file resembles the following:
```hcl
# You do not need to add this to your configuration file. This is an example
# that is part of Nomad's internal default configuration for Consul integration.
consul {
# The address to the Consul agent.
address = "127.0.0.1:8500"
# The service name to register the server and client with Consul.
server_service_name = "nomad"
client_service_name = "nomad-client"
# Enables automatically registering the services.
auto_advertise = true
# Enabling the server and client to bootstrap using Consul.
server_auto_join = true
client_auto_join = true
}
```
Please refer to the [Consul
documentation](/docs/agent/config.html#consul_options) for the complete set of
configuration options.
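If your Consul agent is not listening on the default address, only the differing options need to be set; the remaining internal defaults still apply. A minimal sketch (the file name and address below are placeholders):

```hcl
# /etc/nomad.d/consul.hcl (hypothetical file name)
consul {
  # Point Nomad at a Consul agent that is not on 127.0.0.1:8500.
  address = "10.0.1.10:8500"
}
```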

View File

@@ -1,270 +1,24 @@
---
layout: "docs"
page_title: "Creating a Nomad Cluster"
page_title: "Bootstrapping a Nomad Cluster"
sidebar_current: "docs-cluster-bootstrap"
description: |-
Learn how to bootstrap a Nomad cluster.
---
# Creating a cluster
# Bootstrapping a Nomad Cluster
Nomad models infrastructure as regions and datacenters. Regions may contain
multiple datacenters. Servers are assigned to regions and manage all state for
the region and make scheduling decisions within that region. Clients are
registered to a single datacenter and region.
Nomad models infrastructure into regions and datacenters. Servers reside at the
regional layer and manage all state and scheduling decisions for that region.
Regions contain multiple datacenters, and clients are registered to a single
datacenter (and thus the region that contains that datacenter). For more details
on the architecture of Nomad and how it models infrastructure, see the
[architecture page](/docs/internals/architecture.html).
[![Regional Architecture](/assets/images/nomad-architecture-region.png)](/assets/images/nomad-architecture-region.png)
There are two strategies for bootstrapping a Nomad cluster:
This page will explain how to bootstrap a production grade Nomad region, both
with and without Consul, and how to federate multiple regions together.
1. <a href="/docs/cluster/automatic.html">Automatic bootstrapping</a>
1. <a href="/docs/cluster/manual.html">Manual bootstrapping</a>
[![Global Architecture](/assets/images/nomad-architecture-global.png)](/assets/images/nomad-architecture-global.png)
Bootstrapping Nomad is significantly easier when a Consul cluster is already in
place. Since Nomad's topology is slightly richer than Consul's, supporting not
only datacenters but also regions, let's start with how Consul should be
deployed in relation to Nomad.
For more details on the architecture of Nomad and how it models infrastructure
see the [Architecture page](/docs/internals/architecture.html).
## Deploying Consul Clusters
A Nomad cluster gains the ability to bootstrap itself, as well as provide
service and health check registration to applications, when Consul is deployed
alongside Nomad.
Consul models infrastructures as datacenters and multiple Consul datacenters can
be connected over the WAN so that clients can discover nodes in other
datacenters. Since Nomad regions can encapsulate many datacenters, we recommend
running a Consul cluster in every Nomad datacenter and connecting them over the
WAN. Please refer to the Consul guide for both
[bootstrapping](https://www.consul.io/docs/guides/bootstrapping.html) a single datacenter and
[connecting multiple Consul clusters over the
WAN](https://www.consul.io/docs/guides/datacenters.html).
## Bootstrapping a Nomad cluster
Nomad supports merging multiple configuration files together on startup. This is
done to enable generating a base configuration that can be shared by Nomad
servers and clients. A suggested base configuration is:
```
# Name the region, if omitted, the default "global" region will be used.
region = "europe"
# Persist data to a location that will survive a machine reboot.
data_dir = "/opt/nomad/"
# Bind to all addresses so that the Nomad agent is available both on loopback
# and externally.
bind_addr = "0.0.0.0"
# Advertise an accessible IP address so the server is reachable by other servers
# and clients. The IPs can be materialized by Terraform or be replaced by an
# init script.
advertise {
http = "${self.ipv4_address}:4646"
rpc = "${self.ipv4_address}:4647"
serf = "${self.ipv4_address}:4648"
}
# Ship metrics to monitor the health of the cluster and to see task resource
# usage.
telemetry {
statsite_address = "${var.statsite}"
disable_hostname = true
}
# Enable debug endpoints.
enable_debug = true
```
### With Consul
If a local Consul cluster is bootstrapped before Nomad, on startup the Nomad
servers will register with Consul and discover the other servers. With their set
of peers, they will automatically form quorum, respecting the `bootstrap_expect`
field. Thus, to form a three-server region, the below configuration can be used
in conjunction with the base config:
```
server {
enabled = true
bootstrap_expect = 3
}
```
And an equally simple configuration can be used for clients:
```
# Replace with the relevant datacenter.
datacenter = "dc1"
client {
enabled = true
}
```
As you can see, the above configurations make no mention of the other servers to
join or any Consul configuration. That is because, by default, the following is
merged with the configuration file:
```
consul {
# The address to the Consul agent.
address = "127.0.0.1:8500"
# The service name to register the server and client with Consul.
server_service_name = "nomad"
client_service_name = "nomad-client"
# Enables automatically registering the services.
auto_advertise = true
# Enabling the server and client to bootstrap using Consul.
server_auto_join = true
client_auto_join = true
}
```
Since the `consul` block is merged by default, bootstrapping a cluster becomes
as easy as running the following on each of the three servers:
```
$ nomad agent -config base.hcl -config server.hcl
```
And on every client in the cluster, the following should be run:
```
$ nomad agent -config base.hcl -config client.hcl
```
With the above configurations and commands the Nomad agents will automatically
register themselves with Consul and discover other Nomad servers. If the agent
is a server, it will join the quorum and if it is a client, it will register
itself and join the cluster.
Please refer to the [Consul documentation](/docs/agent/config.html#consul_options)
for the complete set of configuration options.
### Without Consul
When bootstrapping without Consul, Nomad servers and clients must be started
knowing the address of at least one Nomad server.
To join the Nomad servers, we can either encode the address in the server
configs as such:
```
server {
enabled = true
bootstrap_expect = 3
retry_join = ["<known-address>"]
}
```
Alternatively, the address can be supplied after the servers have all been
started by running the [`server-join` command](/docs/commands/server-join.html)
on each server individually to cluster the servers. All servers can join just
one other server, and then rely on the gossip protocol to discover the rest.
```
nomad server-join <known-address>
```
On the client side, the addresses of the servers are expected to be specified
via the client configuration.
```
client {
enabled = true
servers = ["10.10.11.2:4647", "10.10.11.3:4647", "10.10.11.4:4647"]
}
```
If servers are added or removed from the cluster, the information will be
pushed to the client. This means that only one server must be specified,
because after initial contact, the full set of servers in the client's region
will be pushed to the client.
The port corresponds to the RPC port. If no port is specified with the IP
address, the default RPC port of `4647` is assumed.
The same commands can be used to start the servers and clients as shown in the
bootstrapping with Consul section.
### Federating a cluster
Nomad clusters across multiple regions can be federated, allowing users to
submit jobs or interact with the HTTP API targeting any region, from any server.
Federating multiple Nomad clusters is as simple as joining servers. From any
server in one region, simply issue a join command to a server in the remote
region:
```
nomad server-join 10.10.11.8:4648
```
Servers across regions discover other servers in the cluster via the gossip
protocol and hence it's enough to join one known server.
If the Consul clusters in the different Nomad regions are federated, and Consul
`server_auto_join` is enabled, then federation occurs automatically.
## Network Topology
### Nomad Servers
Nomad servers are expected to have sub 10 millisecond network latencies between
each other to ensure liveness and high throughput scheduling. Nomad servers
can be spread across multiple datacenters if they have low latency
connections between them to achieve high availability.
For example, on AWS every region comprises multiple zones which have very low
latency links between them, so every zone can be modeled as a Nomad datacenter
and every zone can have a single Nomad server which could be connected to form a
quorum and a region.
Nomad servers use Raft for state replication, and because Raft is highly
consistent it needs a quorum of servers to function. Therefore we recommend
running an odd number of Nomad servers in a region, usually three to five. The
cluster can withstand the failure of one server in a cluster of three servers
and two failures in a cluster of five servers. Adding more servers to the
quorum adds more time to replicate state and hence decreases throughput, so we
do not recommend having more than seven servers in a region.
### Nomad Clients
Nomad clients do not have the same latency requirements as servers since they
are not participating in Raft. Thus clients can have 100+ millisecond latency to
their servers. This allows a set of Nomad servers to service clients spread
geographically over a continent, or even the world, in the case of a single
"global" region with many datacenters.
## Production Considerations
### Nomad Servers
Depending on the number of jobs the cluster will be managing and the rate at
which jobs are submitted, the Nomad servers may need to be run on large machine
instances. We suggest having 8+ cores, 32 GB+ of memory, 80 GB+ of disk and
significant network bandwidth. The core count and network recommendations are to
ensure high throughput, as Nomad relies heavily on network communication and the
servers are managing all the nodes in the region and performing scheduling. The
memory and disk requirements are due to the fact that Nomad stores all state in
memory and will store two snapshots of this data onto disk. Thus disk should be
at least two times the memory available to the server when deploying a high
load cluster.
### Nomad Clients
Nomad clients support reserving resources on the node that should not be used
by Nomad. This should be used to target a specific resource utilization per node
and to reserve resources for applications running outside of Nomad's
supervision, such as Consul and the operating system itself.
Please see the [`reservation` config](/docs/agent/config.html#reserved) for more detail.
Please refer to the specific documentation links above or in the sidebar for
more detailed information about each strategy.

View File

@@ -0,0 +1,28 @@
---
layout: "docs"
page_title: "Federating a Nomad Cluster"
sidebar_current: "docs-cluster-federation"
description: |-
Learn how to join Nomad servers across multiple regions so users can submit
jobs to any server in any region using global federation.
---
# Federating a Cluster
Because Nomad operates at a regional level, federation is part of Nomad core.
Federation enables users to submit jobs or interact with the HTTP API targeting
any region, from any server, even if that server resides in a different region.
Federating multiple Nomad clusters is as simple as joining servers. From any
server in one region, issue a join command to a server in a remote region:
```shell
$ nomad server-join 1.2.3.4:4648
```
Note that only one join command is required per region. Servers across regions
discover other servers in the cluster via the gossip protocol and hence it's
enough to join just one known server.
If bootstrapped via Consul and the Consul clusters in the Nomad regions are
federated, then federation occurs automatically.
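This behavior relies on the `server_auto_join` option from Nomad's default Consul integration; the block below is shown only for illustration, since the option is already enabled by default:

```hcl
consul {
  # Allows servers to discover and join peers through Consul, which extends to
  # remote regions when the underlying Consul datacenters are WAN-federated.
  server_auto_join = true
}
```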

View File

@@ -0,0 +1,65 @@
---
layout: "docs"
page_title: "Manually Bootstrapping a Nomad Cluster"
sidebar_current: "docs-cluster-manual"
description: |-
Learn how to manually bootstrap a Nomad cluster using the server-join
command. This section also discusses Nomad federation across multiple
datacenters and regions.
---
# Manual Bootstrapping
Manually bootstrapping a Nomad cluster does not rely on additional tooling, but
does require operator participation in the cluster formation process. When
bootstrapping, Nomad servers and clients must be started and informed of the
address of at least one Nomad server.
This creates a chicken-and-egg problem: one server must first be fully
bootstrapped and configured before the remaining servers and clients can join
the cluster. This requirement can add additional provisioning time as well as
ordered dependencies during provisioning.
First, we bootstrap a single Nomad server and capture its IP address. After we
have that node's IP address, we place the address in the configuration.
For Nomad servers, this configuration may look something like this:
```hcl
server {
enabled = true
bootstrap_expect = 3
# This is the IP address of the first server we provisioned
retry_join = ["<known-address>:4648"]
}
```
Alternatively, the address can be supplied after the servers have all been
started by running the [`server-join` command](/docs/commands/server-join.html)
on each server individually to cluster the servers. All servers can join just
one other server, and then rely on the gossip protocol to discover the rest.
```shell
$ nomad server-join <known-address>
```
For Nomad clients, the configuration may look something like:
```hcl
client {
enabled = true
servers = ["<known-address>:4647"]
}
```
At this time, there is no equivalent of the `server-join` command for Nomad
clients.
The port corresponds to the RPC port. If no port is specified with the IP
address, the default RPC port of `4647` is assumed.
As servers are added or removed from the cluster, this information is pushed to
the client. This means only one server must be specified because, after initial
contact, the full set of servers in the client's region is shared with the
client.
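Putting those two facts together, a client configuration can name a single server and omit the port entirely. A sketch with a placeholder address:

```hcl
client {
  enabled = true

  # One reachable server is enough; the RPC port 4647 is assumed when omitted,
  # and the full server set is pushed to the client after initial contact.
  servers = ["10.0.0.10"]
}
```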

View File

@@ -0,0 +1,57 @@
---
layout: "docs"
page_title: "Nomad Client and Server Requirements"
sidebar_current: "docs-cluster-requirements"
description: |-
Learn about the requirements for Nomad clients and servers, including
recommended resources (CPU, memory, disk) and network topology for a
production-grade cluster.
---
# Cluster Requirements
## Resources (RAM, CPU, etc.)
**Nomad servers** may need to be run on large machine instances. We suggest
having 8+ cores, 32 GB+ of memory, 80 GB+ of disk and significant network
bandwidth. The core count and network recommendations are to ensure high
throughput, as Nomad relies heavily on network communication and the servers
are managing all the nodes in the region and performing scheduling. The memory
and disk requirements are due to the fact that Nomad stores all state in memory
and will store two snapshots of this data onto disk. Thus disk should be at
least two times the memory available to the server when deploying a high load
cluster.
**Nomad clients** support reserving resources on the node that should not be
used by Nomad. This should be used to target a specific resource utilization per
node and to reserve resources for applications running outside of Nomad's
supervision, such as Consul and the operating system itself.
Please see the [reservation configuration](/docs/agent/config.html#reserved) for
more detail.
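As a concrete illustration, a client could reserve a slice of its resources for Consul and the operating system with something like the following sketch; the values and reserved port range are placeholders, not recommendations:

```hcl
client {
  enabled = true

  reserved {
    cpu            = 500   # MHz withheld from Nomad's scheduling
    memory         = 512   # MB
    disk           = 1024  # MB
    reserved_ports = "22,8300-8600"
  }
}
```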
## Network Topology
**Nomad servers** are expected to have sub 10 millisecond network latencies
between each other to ensure liveness and high throughput scheduling. Nomad
servers can be spread across multiple datacenters if they have low latency
connections between them to achieve high availability.
For example, on AWS every region comprises multiple zones which have very low
latency links between them, so every zone can be modeled as a Nomad datacenter
and every zone can have a single Nomad server which could be connected to form a
quorum and a region.
Nomad servers use Raft for state replication, and because Raft is highly
consistent it needs a quorum of servers to function. Therefore we recommend
running an odd number of Nomad servers in a region, usually three to five. The
cluster can withstand the failure of one server in a cluster of three servers
and two failures in a cluster of five servers. Adding more servers to the
quorum adds more time to replicate state and hence decreases throughput, so we
do not recommend having more than seven servers in a region.
**Nomad clients** do not have the same latency requirements as servers since
they are not participating in Raft. Thus clients can have 100+ millisecond
latency to their servers. This allows a set of Nomad servers to service clients
spread geographically over a continent, or even the world, in the case of a
single "global" region with many datacenters.

View File

@@ -1,232 +1,242 @@
<% wrap_layout :inner do %>
<% content_for :sidebar do %>
<div class="docs-sidebar hidden-print affix-top" role="complementary">
<ul class="nav docs-sidenav">
<li<%= sidebar_current("docs-home") %>>
<a href="/docs/index.html">Documentation Home</a>
</li>
<% content_for :sidebar do %>
<div class="docs-sidebar hidden-print affix-top" role="complementary">
<ul class="nav docs-sidenav">
<li<%= sidebar_current("docs-installing") %>>
<a href="/docs/install/index.html">Installing Nomad</a>
</li>
<li<%= sidebar_current("docs-internal") %>>
<a href="/docs/internals/index.html">Internals</a>
<ul class="nav">
<li<%= sidebar_current("docs-internals-architecture") %>>
<a href="/docs/internals/architecture.html">Architecture</a>
</li>
<li<%= sidebar_current("docs-cluster") %>>
<a href="/docs/cluster/bootstrapping.html">Bootstrapping Clusters</a>
<ul class="nav">
<li<%= sidebar_current("docs-cluster-automatic") %>>
<a href="/docs/cluster/automatic.html">Automatic</a>
</li>
<li<%= sidebar_current("docs-cluster-manual") %>>
<a href="/docs/cluster/manual.html">Manual</a>
</li>
<li<%= sidebar_current("docs-cluster-federation") %>>
<a href="/docs/cluster/federation.html">Federation</a>
</li>
<li<%= sidebar_current("docs-cluster-requirements") %>>
<a href="/docs/cluster/requirements.html">Requirements</a>
</li>
</ul>
</li>
<li<%= sidebar_current("docs-internals-consensus") %>>
<a href="/docs/internals/consensus.html">Consensus Protocol</a>
</li>
<li<%= sidebar_current("docs-jobops") %>>
<a href="/docs/jobops/index.html">Operating a Job</a>
<ul class="nav">
<li<%= sidebar_current("docs-jobops-task-config") %>>
<a href="/docs/jobops/taskconfig.html">Task Configuration</a>
</li>
<li<%= sidebar_current("docs-jobops-inspection") %>>
<a href="/docs/jobops/inspecting.html">Inspecting State</a>
</li>
<li<%= sidebar_current("docs-jobops-resource-utilization") %>>
<a href="/docs/jobops/resources.html">Resource Utilization</a>
</li>
<li<%= sidebar_current("docs-jobops-service-discovery") %>>
<a href="/docs/jobops/servicediscovery.html">Service Discovery</a>
</li>
<li<%= sidebar_current("docs-jobops-logs") %>>
<a href="/docs/jobops/logs.html">Accessing Logs</a>
</li>
<li<%= sidebar_current("docs-jobops-updating") %>>
<a href="/docs/jobops/updating.html">Updating Jobs</a>
</li>
</ul>
</li>
<li<%= sidebar_current("docs-internals-gossip") %>>
<a href="/docs/internals/gossip.html">Gossip Protocol</a>
</li>
<li<%= sidebar_current("docs-internals-scheduling") %>>
<a href="/docs/internals/scheduling.html">Scheduling</a>
</li>
</ul>
</li>
<li<%= sidebar_current("docs-install") %>>
<a href="/docs/install/index.html">Installation</a>
</li>
<li<%= sidebar_current("docs-cluster") %>>
<a href="/docs/cluster/bootstrapping.html">Creating a Cluster</a>
</li>
<li<%= sidebar_current("docs-jobops") %>>
<a href="/docs/jobops/index.html">Operating a Job</a>
<ul class="nav">
<li<%= sidebar_current("docs-jobops-task-config") %>>
<a href="/docs/jobops/taskconfig.html">Task Configuration</a>
</li>
<li<%= sidebar_current("docs-jobops-inspection") %>>
<a href="/docs/jobops/inspecting.html">Inspecting State</a>
</li>
<li<%= sidebar_current("docs-jobops-resource-utilization") %>>
<a href="/docs/jobops/resources.html">Resource Utilization</a>
</li>
<li<%= sidebar_current("docs-jobops-service-discovery") %>>
<a href="/docs/jobops/servicediscovery.html">Service Discovery</a>
</li>
<li<%= sidebar_current("docs-jobops-logs") %>>
<a href="/docs/jobops/logs.html">Accessing Logs</a>
</li>
<li<%= sidebar_current("docs-jobops-updating") %>>
<a href="/docs/jobops/updating.html">Updating Jobs</a>
</li>
</ul>
</li>
<li<%= sidebar_current("docs-upgrade") %>>
<a href="/docs/upgrade/index.html">Upgrading</a>
<ul class="nav">
<li<%= sidebar_current("docs-upgrade-upgrading") %>>
<a href="/docs/upgrade/index.html">Upgrading Nomad</a>
</li>
<li<%= sidebar_current("docs-upgrade-specific") %>>
<a href="/docs/upgrade/upgrade-specific.html">Specific Version Details</a>
</li>
</ul>
</li>
<li<%= sidebar_current("docs-upgrade") %>>
<a href="/docs/upgrade/index.html">Upgrading</a>
<ul class="nav">
<li<%= sidebar_current("docs-upgrade-upgrading") %>>
<a href="/docs/upgrade/index.html">Upgrading Nomad</a>
</li>
<li<%= sidebar_current("docs-upgrade-specific") %>>
<a href="/docs/upgrade/upgrade-specific.html">Specific Version Details</a>
</li>
</ul>
</li>
<li<%= sidebar_current("docs-jobspec") %>>
<a href="/docs/jobspec/index.html">Job Specification</a>
<ul class="nav">
<li<%= sidebar_current("docs-jobspec-syntax") %>>
<a href="/docs/jobspec/index.html">HCL Syntax</a>
</li>
<li<%= sidebar_current("docs-jobspec-json-syntax") %>>
<a href="/docs/jobspec/json.html">JSON Syntax</a>
</li>
<li<%= sidebar_current("docs-jobspec-interpreted") %>>
<a href="/docs/jobspec/interpreted.html">Interpreted Variables</a>
</li>
<li<%= sidebar_current("docs-jobspec-environment") %>>
<a href="/docs/jobspec/environment.html">Runtime Environment</a>
</li>
<li<%= sidebar_current("docs-jobspec-schedulers") %>>
<a href="/docs/jobspec/schedulers.html">Scheduler Types</a>
</li>
<li<%= sidebar_current("docs-jobspec-service-discovery") %>>
<a href="/docs/jobspec/servicediscovery.html">Service Discovery</a>
</li>
<li<%= sidebar_current("docs-jobspec-networking") %>>
<a href="/docs/jobspec/networking.html">Networking</a>
</li>
</ul>
</li>
<li<%= sidebar_current("docs-jobspec") %>>
<a href="/docs/jobspec/index.html">Job Specification</a>
<ul class="nav">
<li<%= sidebar_current("docs-jobspec-syntax") %>>
<a href="/docs/jobspec/index.html">HCL Syntax</a>
</li>
<li<%= sidebar_current("docs-jobspec-json-syntax") %>>
<a href="/docs/jobspec/json.html">JSON Syntax</a>
</li>
<li<%= sidebar_current("docs-jobspec-interpreted") %>>
<a href="/docs/jobspec/interpreted.html">Interpreted Variables</a>
</li>
<li<%= sidebar_current("docs-jobspec-environment") %>>
<a href="/docs/jobspec/environment.html">Runtime Environment</a>
</li>
<li<%= sidebar_current("docs-jobspec-schedulers") %>>
<a href="/docs/jobspec/schedulers.html">Scheduler Types</a>
</li>
<li<%= sidebar_current("docs-jobspec-service-discovery") %>>
<a href="/docs/jobspec/servicediscovery.html">Service Discovery</a>
</li>
<li<%= sidebar_current("docs-jobspec-networking") %>>
<a href="/docs/jobspec/networking.html">Networking</a>
</li>
</ul>
</li>
<li<%= sidebar_current("docs-drivers") %>>
<a href="/docs/drivers/index.html">Task Drivers</a>
<ul class="nav">
<li<%= sidebar_current("docs-drivers-docker") %>>
<a href="/docs/drivers/docker.html">Docker</a>
</li>
<li<%= sidebar_current("docs-drivers") %>>
<a href="/docs/drivers/index.html">Task Drivers</a>
<ul class="nav">
<li<%= sidebar_current("docs-drivers-docker") %>>
<a href="/docs/drivers/docker.html">Docker</a>
</li>
<li<%= sidebar_current("docs-drivers-exec") %>>
<a href="/docs/drivers/exec.html">Isolated Fork/Exec</a>
</li>
<li<%= sidebar_current("docs-drivers-exec") %>>
<a href="/docs/drivers/exec.html">Isolated Fork/Exec</a>
</li>
<li<%= sidebar_current("docs-drivers-raw-exec") %>>
<a href="/docs/drivers/raw_exec.html">Raw Fork/Exec</a>
</li>
<li<%= sidebar_current("docs-drivers-raw-exec") %>>
<a href="/docs/drivers/raw_exec.html">Raw Fork/Exec</a>
</li>
<li<%= sidebar_current("docs-drivers-java") %>>
<a href="/docs/drivers/java.html">Java</a>
</li>
<li<%= sidebar_current("docs-drivers-java") %>>
<a href="/docs/drivers/java.html">Java</a>
</li>
<li<%= sidebar_current("docs-drivers-qemu") %>>
<a href="/docs/drivers/qemu.html">Qemu</a>
</li>
<li<%= sidebar_current("docs-drivers-qemu") %>>
<a href="/docs/drivers/qemu.html">Qemu</a>
</li>
<li<%= sidebar_current("docs-drivers-rkt") %>>
<a href="/docs/drivers/rkt.html">Rkt</a>
</li>
<li<%= sidebar_current("docs-drivers-rkt") %>>
<a href="/docs/drivers/rkt.html">Rkt</a>
</li>
<li<%= sidebar_current("docs-drivers-custom") %>>
<a href="/docs/drivers/custom.html">Custom</a>
</li>
</ul>
</li>
<li<%= sidebar_current("docs-drivers-custom") %>>
<a href="/docs/drivers/custom.html">Custom</a>
</li>
</ul>
</li>
<li<%= sidebar_current("docs-commands") %>>
<a href="/docs/commands/index.html">Commands (CLI)</a>
<ul class="nav">
<li<%= sidebar_current("docs-commands-_agent") %>>
<a href="/docs/commands/agent.html">agent</a>
</li>
<li<%= sidebar_current("docs-commands-agent-info") %>>
<a href="/docs/commands/agent-info.html">agent-info</a>
</li>
<li<%= sidebar_current("docs-commands-alloc-status") %>>
<a href="/docs/commands/alloc-status.html">alloc-status</a>
</li>
<li<%= sidebar_current("docs-commands-client-config") %>>
<a href="/docs/commands/client-config.html">client-config</a>
</li>
<li<%= sidebar_current("docs-commands-eval-status") %>>
<a href="/docs/commands/eval-status.html">eval-status</a>
</li>
<li<%= sidebar_current("docs-commands-fs") %>>
<a href="/docs/commands/fs.html">fs</a>
</li>
<li<%= sidebar_current("docs-commands-init") %>>
<a href="/docs/commands/init.html">init</a>
</li>
<li<%= sidebar_current("docs-commands-inspect") %>>
<a href="/docs/commands/inspect.html">inspect</a>
</li>
<li<%= sidebar_current("docs-commands-logs") %>>
<a href="/docs/commands/logs.html">logs</a>
</li>
<li<%= sidebar_current("docs-commands-node-drain") %>>
<a href="/docs/commands/node-drain.html">node-drain</a>
</li>
<li<%= sidebar_current("docs-commands-node-status") %>>
<a href="/docs/commands/node-status.html">node-status</a>
</li>
<li<%= sidebar_current("docs-commands-plan") %>>
<a href="/docs/commands/plan.html">plan</a>
</li>
<li<%= sidebar_current("docs-commands-run") %>>
<a href="/docs/commands/run.html">run</a>
</li>
<li<%= sidebar_current("docs-commands-server-force-leave") %>>
<a href="/docs/commands/server-force-leave.html">server-force-leave</a>
</li>
<li<%= sidebar_current("docs-commands-server-join") %>>
<a href="/docs/commands/server-join.html">server-join</a>
</li>
<li<%= sidebar_current("docs-commands-server-members") %>>
<a href="/docs/commands/server-members.html">server-members</a>
</li>
<li<%= sidebar_current("docs-commands-status") %>>
<a href="/docs/commands/status.html">status</a>
</li>
<li<%= sidebar_current("docs-commands-stop") %>>
<a href="/docs/commands/stop.html">stop</a>
</li>
<li<%= sidebar_current("docs-commands-validate") %>>
<a href="/docs/commands/validate.html">validate</a>
</li>
<li<%= sidebar_current("docs-commands-version") %>>
<a href="/docs/commands/version.html">version</a>
</li>
</ul>
</li>
<li<%= sidebar_current("docs-commands") %>>
<a href="/docs/commands/index.html">Commands (CLI)</a>
<ul class="nav">
<li<%= sidebar_current("docs-commands-_agent") %>>
<a href="/docs/commands/agent.html">agent</a>
</li>
<li<%= sidebar_current("docs-commands-agent-info") %>>
<a href="/docs/commands/agent-info.html">agent-info</a>
</li>
<li<%= sidebar_current("docs-commands-alloc-status") %>>
<a href="/docs/commands/alloc-status.html">alloc-status</a>
</li>
<li<%= sidebar_current("docs-commands-client-config") %>>
<a href="/docs/commands/client-config.html">client-config</a>
</li>
<li<%= sidebar_current("docs-commands-eval-status") %>>
<a href="/docs/commands/eval-status.html">eval-status</a>
</li>
<li<%= sidebar_current("docs-commands-fs") %>>
<a href="/docs/commands/fs.html">fs</a>
</li>
<li<%= sidebar_current("docs-commands-init") %>>
<a href="/docs/commands/init.html">init</a>
</li>
<li<%= sidebar_current("docs-commands-inspect") %>>
<a href="/docs/commands/inspect.html">inspect</a>
</li>
<li<%= sidebar_current("docs-commands-logs") %>>
<a href="/docs/commands/logs.html">logs</a>
</li>
<li<%= sidebar_current("docs-commands-node-drain") %>>
<a href="/docs/commands/node-drain.html">node-drain</a>
</li>
<li<%= sidebar_current("docs-commands-node-status") %>>
<a href="/docs/commands/node-status.html">node-status</a>
</li>
<li<%= sidebar_current("docs-commands-plan") %>>
<a href="/docs/commands/plan.html">plan</a>
</li>
<li<%= sidebar_current("docs-commands-run") %>>
<a href="/docs/commands/run.html">run</a>
</li>
<li<%= sidebar_current("docs-commands-server-force-leave") %>>
<a href="/docs/commands/server-force-leave.html">server-force-leave</a>
</li>
<li<%= sidebar_current("docs-commands-server-join") %>>
<a href="/docs/commands/server-join.html">server-join</a>
</li>
<li<%= sidebar_current("docs-commands-server-members") %>>
<a href="/docs/commands/server-members.html">server-members</a>
</li>
<li<%= sidebar_current("docs-commands-status") %>>
<a href="/docs/commands/status.html">status</a>
</li>
<li<%= sidebar_current("docs-commands-stop") %>>
<a href="/docs/commands/stop.html">stop</a>
</li>
<li<%= sidebar_current("docs-commands-validate") %>>
<a href="/docs/commands/validate.html">validate</a>
</li>
<li<%= sidebar_current("docs-commands-version") %>>
<a href="/docs/commands/version.html">version</a>
</li>
</ul>
</li>
<li<%= sidebar_current("docs-agent") %>>
<a href="/docs/agent/index.html">Nomad Agent</a>
<li<%= sidebar_current("docs-agent") %>>
<a href="/docs/agent/index.html">Nomad Agent</a>
<ul class="nav">
<li<%= sidebar_current("docs-agent-basics") %>>
<a href="/docs/agent/index.html">Basics</a>
</li>
<ul class="nav">
<li<%= sidebar_current("docs-agent-basics") %>>
<a href="/docs/agent/index.html">Basics</a>
</li>
<li<%= sidebar_current("docs-agent-config") %>>
<a href="/docs/agent/config.html">Configuration</a>
</li>
<li<%= sidebar_current("docs-agent-config") %>>
<a href="/docs/agent/config.html">Configuration</a>
</li>
<li<%= sidebar_current("docs-agent-telemetry") %>>
<a href="/docs/agent/telemetry.html">Telemetry</a>
</li>
</ul>
</li>
<li<%= sidebar_current("docs-agent-telemetry") %>>
<a href="/docs/agent/telemetry.html">Telemetry</a>
</li>
</ul>
</li>
<li<%= sidebar_current("docs-http") %>>
<a href="/docs/http/index.html">HTTP API</a>
</li>
<li<%= sidebar_current("docs-http") %>>
<a href="/docs/http/index.html">HTTP API</a>
</li>
<li<%= sidebar_current("docs-faq") %>>
<a href="/docs/faq.html">Frequently Asked Questions</a>
</li>
</ul>
</div>
<% end %>
<li<%= sidebar_current("docs-internal") %>>
<a href="/docs/internals/index.html">Internals</a>
<ul class="nav">
<li<%= sidebar_current("docs-internals-architecture") %>>
<a href="/docs/internals/architecture.html">Architecture</a>
</li>
<%= yield %>
<li<%= sidebar_current("docs-internals-consensus") %>>
<a href="/docs/internals/consensus.html">Consensus Protocol</a>
</li>
<li<%= sidebar_current("docs-internals-gossip") %>>
<a href="/docs/internals/gossip.html">Gossip Protocol</a>
</li>
<li<%= sidebar_current("docs-internals-scheduling") %>>
<a href="/docs/internals/scheduling.html">Scheduling</a>
</li>
</ul>
</li>
<li<%= sidebar_current("docs-faq") %>>
<a href="/docs/faq.html">Frequently Asked Questions</a>
</li>
</ul>
</div>
<% end %>
<%= yield %>
<% end %>