diff --git a/website/source/guides/operations/agent/index.html.md b/website/source/guides/operations/agent/index.html.md index d98bcbd43..96ea43f15 100644 --- a/website/source/guides/operations/agent/index.html.md +++ b/website/source/guides/operations/agent/index.html.md @@ -1,7 +1,7 @@ --- -layout: "docs" +layout: "guides" page_title: "Nomad Agent" -sidebar_current: "docs-agent" +sidebar_current: "guides-operations-agent" description: |- The Nomad agent is a long running process which can be used either in a client or server mode. diff --git a/website/source/guides/operations/autopilot.html.md b/website/source/guides/operations/autopilot.html.md index fc5f5b434..2219e2059 100644 --- a/website/source/guides/operations/autopilot.html.md +++ b/website/source/guides/operations/autopilot.html.md @@ -1,7 +1,7 @@ --- layout: "guides" page_title: "Autopilot" -sidebar_current: "guides-autopilot" +sidebar_current: "guides-operations-autopilot" description: |- This guide covers how to configure and use Autopilot features. --- @@ -13,15 +13,15 @@ operator-friendly management of Nomad servers. It includes cleanup of dead servers, monitoring the state of the Raft cluster, and stable server introduction. To enable Autopilot features (with the exception of dead server cleanup), -the `raft_protocol` setting in the [server stanza](/docs/agent/configuration/server.html) +the `raft_protocol` setting in the [server stanza](/docs/configuration/server.html) must be set to 3 on all servers. In Nomad 0.8 this setting defaults to 2; in Nomad 0.9 it will default to 3. -For more information, see the [Version Upgrade section](/docs/upgrade/upgrade-specific.html#raft-protocol-version-compatibility) +For more information, see the [Version Upgrade section](/guides/operations/upgrade/upgrade-specific.html#raft-protocol-version-compatibility) on Raft Protocol versions. ## Configuration The configuration of Autopilot is loaded by the leader from the agent's -[Autopilot settings](/docs/agent/configuration/autopilot.html) when initially +[Autopilot settings](/docs/configuration/autopilot.html) when initially bootstrapping the cluster: ``` @@ -149,7 +149,7 @@ setting. ## Server Read and Scheduling Scaling -With the [`non_voting_server`](/docs/agent/configuration/server.html#non_voting_server) option, a +With the [`non_voting_server`](/docs/configuration/server.html#non_voting_server) option, a server can be explicitly marked as a non-voter and will never be promoted to a voting member. This can be useful when more read scaling is needed; being a non-voter means that the server will still have data replicated to it, but it will not be part of the @@ -164,7 +164,7 @@ have an overly-large quorum (2-3 nodes per AZ) or give up redundancy within an AZ by deploying just one server in each. If the `EnableRedundancyZones` setting is set, Nomad will use its value to look for a -zone in each server's specified [`redundancy_zone`](/docs/agent/configuration/server.html#redundancy_zone) +zone in each server's specified [`redundancy_zone`](/docs/configuration/server.html#redundancy_zone) field. Here's an example showing how to configure this: @@ -216,6 +216,6 @@ a migration, so that the migration logic can be used for updating the cluster when changing configuration.
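To make the server-side settings referenced above concrete, a server participating in these Autopilot features might carry agent configuration along the following lines. This is a minimal sketch only; the zone name is an assumed placeholder, not a default:

```hcl
# Illustrative Nomad server agent configuration.
server {
  enabled = true

  # Raft protocol 3 is required for most Autopilot features.
  raft_protocol = 3

  # Matched against Autopilot's EnableRedundancyZones setting; Autopilot
  # keeps one voter per zone and treats the rest as hot standbys.
  redundancy_zone = "us-east-1a"
}
```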
If the `EnableCustomUpgrades` setting is set to `true`, Nomad will use its value to look for a -version in each server's specified [`upgrade_version`](/docs/agent/configuration/server.html#upgrade_version) +version in each server's specified [`upgrade_version`](/docs/configuration/server.html#upgrade_version) tag. The upgrade logic will follow semantic versioning and the `upgrade_version` must be in the form of either `X`, `X.Y`, or `X.Y.Z`. diff --git a/website/source/guides/operations/cluster/automatic.html.md b/website/source/guides/operations/cluster/automatic.html.md index 1b7daaf27..8256473af 100644 --- a/website/source/guides/operations/cluster/automatic.html.md +++ b/website/source/guides/operations/cluster/automatic.html.md @@ -1,14 +1,14 @@ --- layout: "guides" -page_title: "Automatically Bootstrapping a Nomad Cluster" -sidebar_current: "guides-cluster-automatic" +page_title: "Automatic Clustering with Consul" +sidebar_current: "guides-operations-cluster-automatic" description: |- Learn how to automatically bootstrap a Nomad cluster using Consul. By having a Consul agent installed on each host, Nomad can automatically discover other clients and servers to bootstrap the cluster without operator involvement. --- -# Automatic Bootstrapping +# Automatic Clustering with Consul To automatically bootstrap a Nomad cluster, we must leverage another HashiCorp open source tool, [Consul](https://www.consul.io/). Bootstrapping Nomad is @@ -115,5 +115,5 @@ consul { ``` Please refer to the [Consul -documentation](/docs/agent/configuration/consul.html) for the complete set of +documentation](/docs/configuration/consul.html) for the complete set of configuration options. diff --git a/website/source/guides/operations/cluster/bootstrapping.html.md b/website/source/guides/operations/cluster/bootstrapping.html.md index 53d6541a5..62c122e1c 100644 --- a/website/source/guides/operations/cluster/bootstrapping.html.md +++ b/website/source/guides/operations/cluster/bootstrapping.html.md @@ -1,12 +1,12 @@ --- layout: "guides" -page_title: "Bootstrapping a Nomad Cluster" -sidebar_current: "guides-cluster-bootstrap" +page_title: "Clustering" +sidebar_current: "guides-operations-cluster" description: |- - Learn how to bootstrap a Nomad cluster. + Learn how to cluster Nomad. --- -# Bootstrapping a Nomad Cluster +# Clustering Nomad models infrastructure into regions and datacenters. Servers reside at the regional layer and manage all state and scheduling decisions for that region. @@ -15,10 +15,12 @@ datacenter (and thus a region that contains that datacenter). For more details on the architecture of Nomad and how it models infrastructure see the [architecture page](/docs/internals/architecture.html). -There are two strategies for bootstrapping a Nomad cluster: +There are multiple strategies available for creating a multi-node Nomad cluster: + +1. Manual Clustering +1. Automatic Clustering with Consul +1. Cloud Auto-join -1. Automatic bootstrapping -1. Manual bootstrapping Please refer to the specific documentation links above or in the sidebar for more detailed information about each strategy.
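As a quick illustration of how the manual and cloud auto-join strategies look in practice, a server agent can list static peers, go-discover provider strings, or both in a single `server_join` stanza (Nomad 0.8.4+). The sketch below is illustrative; the address, tag key, and tag value are assumed placeholders:

```hcl
server {
  enabled          = true
  bootstrap_expect = 3

  server_join {
    # Static addresses (manual clustering) and go-discover provider
    # strings (cloud auto-join) may be mixed in a single list.
    retry_join = [
      "10.0.0.10:4648",
      "provider=aws tag_key=nomad-server tag_value=true",
    ]
  }
}
```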
diff --git a/website/source/guides/operations/cluster/cloud_auto_join.html.md b/website/source/guides/operations/cluster/cloud_auto_join.html.md index d733fe706..0825c77a4 100644 --- a/website/source/guides/operations/cluster/cloud_auto_join.html.md +++ b/website/source/guides/operations/cluster/cloud_auto_join.html.md @@ -1,24 +1,24 @@ --- -layout: "docs" +layout: "guides" page_title: "Cloud Auto-join" -sidebar_current: "docs-agent-cloud-auto-join" +sidebar_current: "guides-operations-cluster-cloud-auto-join" description: |- - Nomad supports automatic cluster joining using cloud metadata from various cloud providers + Nomad supports automatic cluster joining using cloud metadata from various + cloud providers --- # Cloud Auto-joining As of Nomad 0.8.4, -[`retry_join`](/docs/agent/configuration/server_join.html#retry_join) accepts a +[`retry_join`](/docs/configuration/server_join.html#retry_join) accepts a unified interface using the [go-discover](https://github.com/hashicorp/go-discover) library for doing automatic cluster joining using cloud metadata. To use retry-join with a supported cloud provider, specify the configuration on the command line or -configuration file as a `key=value key=value ...` string. - -Values are taken literally and must not be URL -encoded. If the values contain spaces, backslashes or double quotes then -they need to be double quoted and the usual escaping rules apply. +configuration file as a `key=value key=value ...` string. Values are taken +literally and must not be URL encoded. If the values contain spaces, backslashes +or double quotes then they need to be double quoted and the usual escaping rules +apply. ```json { @@ -26,111 +26,12 @@ they need to be double quoted and the usual escaping rules apply. } ``` -The cloud provider-specific configurations are detailed below. This can be -combined with static IP or DNS addresses or even multiple configurations -for different providers. - -In order to use discovery behind a proxy, you will need to set +The cloud provider-specific configurations are documented [here](/docs/configuration/server_join.html#cloud-auto-join). +This can be combined with static IP or DNS addresses or even multiple configurations +for different providers. In order to use discovery behind a proxy, you will need to set `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` environment variables per [Golang `net/http` library](https://golang.org/pkg/net/http/#ProxyFromEnvironment). -The following sections give the options specific to a subset of supported cloud -provider. For information on all providers, see further documentation in -[go-discover](https://github.com/hashicorp/go-discover). - -### Amazon EC2 - -This returns the first private IP address of all servers in the given -region which have the given `tag_key` and `tag_value`. -```json -{ - "retry_join": ["provider=aws tag_key=... tag_value=..."] -} -``` - -- `provider` (required) - the name of the provider ("aws" in this case). -- `tag_key` (required) - the key of the tag to auto-join on. -- `tag_value` (required) - the value of the tag to auto-join on. -- `region` (optional) - the AWS region to authenticate in. -- `addr_type` (optional) - the type of address to discover: `private_v4`, `public_v4`, `public_v6`. Default is `private_v4`. (>= 1.0) -- `access_key_id` (optional) - the AWS access key for authentication (see below for more information about authenticating).
-- `secret_access_key` (optional) - the AWS secret access key for authentication (see below for more information about authenticating). - -#### Authentication & Precedence - -- Static credentials `access_key_id=... secret_access_key=...` -- Environment variables (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`) -- Shared credentials file (`~/.aws/credentials` or the path specified by `AWS_SHARED_CREDENTIALS_FILE`) -- ECS task role metadata (container-specific). -- EC2 instance role metadata. - - The only required IAM permission is `ec2:DescribeInstances`, and it is - recommended that you make a dedicated key used only for auto-joining. If the - region is omitted it will be discovered through the local instance's [EC2 - metadata - endpoint](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html). - -### Microsoft Azure - - This returns the first private IP address of all servers in the given region - which have the given `tag_key` and `tag_value` in the tenant and subscription, or in - the given `resource_group` of a `vm_scale_set` for Virtual Machine Scale Sets. - - - ```json -{ - "retry_join": ["provider=azure tag_name=... tag_value=... tenant_id=... client_id=... subscription_id=... secret_access_key=..."] -} -``` - -- `provider` (required) - the name of the provider ("azure" in this case). -- `tenant_id` (required) - the tenant to join machines in. -- `client_id` (required) - the client to authenticate with. -- `secret_access_key` (required) - the secret client key. - -Use these configuration parameters when using tags: -- `tag_name` - the name of the tag to auto-join on. -- `tag_value` - the value of the tag to auto-join on. - -Use these configuration parameters when using Virtual Machine Scale Sets (Consul 1.0.3 and later): -- `resource_group` - the name of the resource group to filter on. -- `vm_scale_set` - the name of the virtual machine scale set to filter on. - - When using tags the only permission needed is the `ListAll` method for `NetworkInterfaces`. When using - Virtual Machine Scale Sets the only role action needed is `Microsoft.Compute/virtualMachineScaleSets/*/read`. - -### Google Compute Engine - -This returns the first private IP address of all servers in the given -project which have the given `tag_value`. -``` - -```json -{ -"retry_join": ["provider=gce project_name=... tag_value=..."] -} -``` - -- `provider` (required) - the name of the provider ("gce" in this case). -- `tag_value` (required) - the value of the tag to auto-join on. -- `project_name` (optional) - the name of the project to auto-join on. Discovered if not set. -- `zone_pattern` (optional) - the list of zones can be restricted through an RE2 compatible regular expression. If omitted, servers in all zones are returned. -- `credentials_file` (optional) - the credentials file for authentication. See below for more information. - -#### Authentication & Precedence - -- Use credentials from `credentials_file`, if provided. -- Use JSON file from `GOOGLE_APPLICATION_CREDENTIALS` environment variable. -- Use JSON file in a location known to the gcloud command-line tool. -- On Windows, this is `%APPDATA%/gcloud/application_default_credentials.json`. -- On other systems, `$HOME/.config/gcloud/application_default_credentials.json`. -- On Google Compute Engine, use credentials from the metadata -server. In this final case any provided scopes are ignored. - -Discovery requires a [GCE Service -Account](https://cloud.google.com/compute/docs/access/service-accounts). 
-Credentials are searched using the following paths, in order of precedence. - diff --git a/website/source/guides/operations/cluster/manual.html.md b/website/source/guides/operations/cluster/manual.html.md index cddd390dc..11eff1ac2 100644 --- a/website/source/guides/operations/cluster/manual.html.md +++ b/website/source/guides/operations/cluster/manual.html.md @@ -1,14 +1,14 @@ --- layout: "guides" -page_title: "Manually Bootstrapping a Nomad Cluster" -sidebar_current: "guides-cluster-manual" +page_title: "Manual Clustering" +sidebar_current: "guides-operations-cluster-manual" description: |- Learn how to manually bootstrap a Nomad cluster using the server join command. This section also discusses Nomad federation across multiple datacenters and regions. --- -# Manual Bootstrapping +# Manual Clustering Manually bootstrapping a Nomad cluster does not rely on additional tooling, but does require operator participation in the cluster formation process. When diff --git a/website/source/guides/operations/consul-integration/index.html.md b/website/source/guides/operations/consul-integration/index.html.md index 0547dd96c..a27e669a5 100644 --- a/website/source/guides/operations/consul-integration/index.html.md +++ b/website/source/guides/operations/consul-integration/index.html.md @@ -1,35 +1,58 @@ --- -layout: "docs" -page_title: "Service Discovery" -sidebar_current: "docs-service-discovery" +layout: "guides" +page_title: "Consul Integration" +sidebar_current: "guides-operations-consul-integration" description: |- - Learn how to add service discovery to jobs + Learn how to integrate Nomad with Consul and add service discovery to jobs --- -# Service Discovery +# Consul Integration + +[Consul][] is a tool for discovering and configuring services in your +infrastructure. Consul's key features include service discovery, health checking, +a KV store, and robust support for multi-datacenter deployments. Nomad's integration +with Consul enables automatic clustering, built-in service registration, and +dynamic rendering of configuration files and environment variables. The sections +below describe the integration in more detail. + +## Configuration + +In order to use Consul with Nomad, you will need to configure and +install Consul on your nodes alongside Nomad, or schedule it as a system job. +Nomad does not currently run Consul for you. + +To enable Consul integration, please see the +[Nomad agent Consul integration](/docs/configuration/consul.html) +configuration. + +## Automatic Clustering with Consul + +Nomad servers and clients will be automatically informed of each other's +existence when a running Consul cluster already exists and the Consul agent is +installed and configured on each host. Please see the [Automatic Clustering with +Consul](/guides/operations/cluster/automatic.html) guide for more information. + +## Service Discovery Nomad schedules workloads of various types across a cluster of generic hosts. Because of this, placement is not known in advance and you will need to use service discovery to connect tasks to other services deployed across your -cluster. Nomad integrates with [Consul][] to provide service discovery and +cluster. Nomad integrates with Consul to provide service discovery and monitoring. -Note that in order to use Consul with Nomad, you will need to configure and -install Consul on your nodes alongside Nomad, or schedule it as a system job. -Nomad does not currently run Consul for you.
- -## Configuration - -To enable Consul integration, please see the -[Nomad agent Consul integration](/docs/agent/configuration/consul.html) -configuration. - - -## Service Definition Syntax - To configure a job to register with service discovery, please see the [`service` job specification documentation][service]. +## Dynamic Configuration + +Nomad's job specification includes a [`template` stanza](/docs/job-specification/template.html) +that utilizes a Consul ecosystem tool called [Consul Template](https://github.com/hashicorp/consul-template). This mechanism creates a convenient way to ship configuration files +that are populated from environment variables, Consul data, Vault secrets, or +general configuration values within a Nomad task. + +For more information on Nomad's template stanza and how it leverages Consul Template, +please see the [`template` job specification documentation](/docs/job-specification/template.html). + ## Assumptions - Consul 0.7.2 or later is needed for `tls_skip_verify` in HTTP checks. diff --git a/website/source/guides/operations/federation.md b/website/source/guides/operations/federation.md index 4d1760df9..436b0957d 100644 --- a/website/source/guides/operations/federation.md +++ b/website/source/guides/operations/federation.md @@ -1,13 +1,13 @@ --- layout: "guides" -page_title: "Federating a Nomad Cluster" -sidebar_current: "guides-cluster-federation" +page_title: "Multi-region Federation" +sidebar_current: "guides-operations-federation" description: |- Learn how to join Nomad servers across multiple regions so users can submit jobs to any server in any region using global federation. --- -# Federating a Cluster +# Multi-region Federation Because Nomad operates at a regional level, federation is part of Nomad core. Federation enables users to submit jobs or interact with the HTTP API targeting @@ -33,4 +33,4 @@ enough to join just one known server. If bootstrapped via Consul and the Consul clusters in the Nomad regions are federated, then federation occurs automatically. -[ports]: /guides/cluster/requirements.html#ports-used +[ports]: /guides/operations/requirements.html#ports-used diff --git a/website/source/guides/operations/index.html.md b/website/source/guides/operations/index.html.md new file mode 100644 index 000000000..82403fa74 --- /dev/null +++ b/website/source/guides/operations/index.html.md @@ -0,0 +1,13 @@ +--- +layout: "guides" +page_title: "Nomad Operations" +sidebar_current: "guides-operations" +description: |- + Learn how to operate Nomad. +--- + +# Nomad Operations + +The Nomad Operations guides section provides best practices and guidance for +operating Nomad in a real-world production setting. Please navigate to the +appropriate sub-sections for more information. \ No newline at end of file diff --git a/website/source/guides/operations/install/index.html.md b/website/source/guides/operations/install/index.html.md index efc25197b..c47a76a06 100644 --- a/website/source/guides/operations/install/index.html.md +++ b/website/source/guides/operations/install/index.html.md @@ -1,7 +1,7 @@ --- -layout: "docs" +layout: "guides" page_title: "Installing Nomad" -sidebar_current: "docs-installing" +sidebar_current: "guides-operations-installing" description: |- Learn how to install Nomad.
--- diff --git a/website/source/guides/operations/monitoring/nomad-metrics.html.md b/website/source/guides/operations/monitoring/nomad-metrics.html.md index 6401297a6..bdd655a4e 100644 --- a/website/source/guides/operations/monitoring/nomad-metrics.html.md +++ b/website/source/guides/operations/monitoring/nomad-metrics.html.md @@ -1,7 +1,7 @@ --- layout: "guides" page_title: "Setting up Nomad with Grafana and Prometheus Metrics" -sidebar_current: "guides-nomad-metrics" +sidebar_current: "guides-operations-monitoring-grafana" description: |- It is possible to collect metrics on Nomad and create dashboards with Grafana and Prometheus. Nomad has default configurations for these, but it is diff --git a/website/source/guides/operations/monitoring/telemetry.html.md b/website/source/guides/operations/monitoring/telemetry.html.md index 43643a282..88fc0d4fc 100644 --- a/website/source/guides/operations/monitoring/telemetry.html.md +++ b/website/source/guides/operations/monitoring/telemetry.html.md @@ -1,7 +1,7 @@ --- -layout: "docs" +layout: "guides" page_title: "Telemetry" -sidebar_current: "docs-agent-telemetry" +sidebar_current: "guides-operations-monitoring-telemetry" description: |- Learn about the telemetry data available in Nomad. --- @@ -30,7 +30,7 @@ Telemetry information can be streamed to both [statsite](https://github.com/armo as well as statsd based on providing the appropriate configuration options. To configure the telemetry output please see the [agent -configuration](/docs/agent/configuration/telemetry.html). +configuration](/docs/configuration/telemetry.html). Below is sample output of a telemetry dump: @@ -233,7 +233,7 @@ By default the collection interval is 1 second but it can be changed by changing the value of the `collection_interval` key in the `telemetry` configuration block. -Please see the [agent configuration](/docs/agent/configuration/telemetry.html) +Please see the [agent configuration](/docs/configuration/telemetry.html) page for more details. As of Nomad 0.9, Nomad will emit additional labels for [parameterized](/docs/job-specification/parameterized.html) and diff --git a/website/source/guides/operations/node-draining.html.md b/website/source/guides/operations/node-draining.html.md index 17afbb816..8f0f74d79 100644 --- a/website/source/guides/operations/node-draining.html.md +++ b/website/source/guides/operations/node-draining.html.md @@ -1,20 +1,20 @@ --- layout: "guides" -page_title: "Decommissioning Nodes" -sidebar_current: "guides-decommissioning-nodes" +page_title: "Workload Migration" +sidebar_current: "guides-operations-decommissioning-nodes" description: |- - Decommissioning nodes is a normal part of cluster operations for a variety of + Workload migration is a normal part of cluster operations for a variety of reasons: server maintenance, operating system upgrades, etc. Nomad offers a number of parameters for controlling how running jobs are migrated off of draining nodes. --- -# Decommissioning Nomad Client Nodes +# Workload Migration -Decommissioning nodes is a normal part of cluster operations for a variety of -reasons: server maintenance, operating system upgrades, etc. Nomad offers a -number of parameters for controlling how running jobs are migrated off of -draining nodes. +Migrating workloads and decommissioning nodes are a normal part of cluster +operations for a variety of reasons: server maintenance, operating system +upgrades, etc. Nomad offers a number of parameters for controlling how running +jobs are migrated off of draining nodes.
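The section below covers these parameters in depth; as a quick preview, they live in the job specification's `migrate` stanza (introduced in Nomad 0.8). This sketch uses what are believed to be the stock values, not tuned recommendations:

```hcl
job "web" {
  # Governs how this job's allocations are moved off draining nodes.
  migrate {
    max_parallel     = 1        # migrate one allocation at a time
    health_check     = "checks" # gate on Consul health checks ("task_states" also works)
    min_healthy_time = "10s"    # a replacement must stay healthy this long
    healthy_deadline = "5m"     # give up on a replacement after this long
  }

  # ... group and task definitions omitted ...
}
```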
## Configuring How Jobs are Migrated diff --git a/website/source/guides/operations/outage.html.markdown b/website/source/guides/operations/outage.html.markdown index 22b9267e2..74aacfe58 100644 --- a/website/source/guides/operations/outage.html.markdown +++ b/website/source/guides/operations/outage.html.markdown @@ -1,7 +1,7 @@ --- layout: "guides" page_title: "Outage Recovery" -sidebar_current: "guides-outage-recovery" +sidebar_current: "guides-operations-outage-recovery" description: |- Don't panic! This is a critical first step. Depending on your deployment configuration, it may take only a single server failure for cluster @@ -20,15 +20,15 @@ requires an operator to intervene, but the process is straightforward. ~> This guide is for recovery from a Nomad outage due to a majority of server nodes in a datacenter being lost. If you are looking to add or remove servers, -see the [bootstrapping guide](/guides/cluster/bootstrapping.html). +see the [bootstrapping guide](/guides/operations/cluster/bootstrapping.html). ## Failure of a Single Server Cluster If you had only a single server and it has failed, simply restart it. A single server configuration requires the -[`-bootstrap-expect=1`](/docs/agent/configuration/server.html#bootstrap_expect) +[`-bootstrap-expect=1`](/docs/configuration/server.html#bootstrap_expect) flag. If the server cannot be recovered, you need to bring up a new -server. See the [bootstrapping guide](/guides/cluster/bootstrapping.html) +server. See the [bootstrapping guide](/guides/operations/cluster/bootstrapping.html) for more detail. In the case of an unrecoverable server failure in a single server cluster, data @@ -126,7 +126,7 @@ any automated processes that will put the peers file in place on a periodic basis. The next step is to go to the -[`-data-dir`](/docs/agent/configuration/index.html#data_dir) of each Nomad +[`-data-dir`](/docs/configuration/index.html#data_dir) of each Nomad server. Inside that directory, there will be a `raft/` sub-directory. We need to create a `raft/peers.json` file. It should look something like: @@ -220,5 +220,5 @@ Nomad server in the cluster, like this: server's RPC port used for cluster communications. - `non_voter` `(bool: false)` - This controls whether the server is a non-voter, which is used - in some advanced [Autopilot](/guides/autopilot.html) configurations. If omitted, it will + in some advanced [Autopilot](/guides/operations/autopilot.html) configurations. If omitted, it will default to false, which is typical for most clusters. diff --git a/website/source/guides/operations/requirements.html.md b/website/source/guides/operations/requirements.html.md index b38af402c..7a06501ba 100644 --- a/website/source/guides/operations/requirements.html.md +++ b/website/source/guides/operations/requirements.html.md @@ -1,13 +1,13 @@ --- layout: "guides" -page_title: "Nomad Client and Server Requirements" -sidebar_current: "guides-cluster-requirements" +page_title: "Hardware Requirements" +sidebar_current: "guides-operations-requirements" description: |- Learn about Nomad client and server requirements such as memory and CPU recommendations, network topologies, and more. --- -# Cluster Requirements +# Hardware Requirements ## Resources (RAM, CPU, etc.) @@ -29,7 +29,7 @@ used by Nomad. This should be used to target a specific resource utilization per node and to reserve resources for applications running outside of Nomad's supervision such as Consul and the operating system itself.
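A minimal sketch of such a reservation in the client agent configuration follows; the numbers are illustrative placeholders, not sizing guidance:

```hcl
client {
  enabled = true

  # Withhold capacity from the scheduler for Consul, the OS, and other
  # processes running outside of Nomad's supervision.
  reserved {
    cpu            = 500           # MHz
    memory         = 512           # MB
    disk           = 1024          # MB
    reserved_ports = "22,8500-8600"
  }
}
```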
-Please see the [reservation configuration](/docs/agent/configuration/client.html#reserved) for +Please see the [reservation configuration](/docs/configuration/client.html#reserved) for more detail. ## Network Topology diff --git a/website/source/guides/operations/upgrade/index.html.md b/website/source/guides/operations/upgrade/index.html.md index 758f20a25..5128e3196 100644 --- a/website/source/guides/operations/upgrade/index.html.md +++ b/website/source/guides/operations/upgrade/index.html.md @@ -1,12 +1,12 @@ --- -layout: "docs" +layout: "guides" page_title: "Upgrading" -sidebar_current: "docs-upgrade-upgrading" +sidebar_current: "guides-operations-upgrade" description: |- Learn how to upgrade Nomad. --- -# Upgrading Nomad +# Upgrading This page documents how to upgrade Nomad when a new version is released. @@ -23,7 +23,7 @@ For upgrades we strive to ensure backwards compatibility. For most upgrades, the process is as simple as upgrading the binary and restarting the service. Prior to starting the upgrade please check the -[specific version details](/docs/upgrade/upgrade-specific.html) page as some +[specific version details](/guides/operations/upgrade/upgrade-specific.html) page as some version differences may require specific steps. At a high level we complete the following steps to upgrade Nomad: @@ -102,8 +102,8 @@ Use the same actions in step #2 above to confirm cluster health. Following the successful upgrade of the servers you can now update your clients using a similar process as the servers. You may either upgrade clients -in-place or start new nodes on the new version. See the [Decommissioning Nodes -guide](/guides/node-draining.html) for instructions on how to migrate running +in-place or start new nodes on the new version. See the [Workload Migration +Guide](/guides/operations/node-draining.html) for instructions on how to migrate running allocations from the old nodes to the new nodes with the [`nomad node drain`](/docs/commands/node/drain.html) command. @@ -118,5 +118,5 @@ are in a `ready` state. The process of upgrading to a Nomad Enterprise version is identical to upgrading between versions of open source Nomad. The same guidance above should be followed and as always, prior to starting the upgrade please check the [specific -version details](/docs/upgrade/upgrade-specific.html) page as some version +version details](/guides/operations/upgrade/upgrade-specific.html) page as some version differences may require specific steps. diff --git a/website/source/guides/operations/upgrade/upgrade-specific.html.md b/website/source/guides/operations/upgrade/upgrade-specific.html.md index 41897bce7..778124221 100644 --- a/website/source/guides/operations/upgrade/upgrade-specific.html.md +++ b/website/source/guides/operations/upgrade/upgrade-specific.html.md @@ -1,15 +1,15 @@ --- -layout: "docs" +layout: "guides" page_title: "Upgrade Guides" -sidebar_current: "docs-upgrade-specific" +sidebar_current: "guides-operations-upgrade-specific" description: |- Specific versions of Nomad may have additional information about the upgrade process beyond the standard flow. --- -# Upgrading Specific Versions +# Upgrade Guides -The [upgrading page](/docs/upgrade/index.html) covers the details of doing +The [upgrading page](/guides/operations/upgrade/index.html) covers the details of doing a standard upgrade. However, specific versions of Nomad may have more details provided for their upgrades as a result of new features or changed behavior. 
This page is used to document those details separately from the @@ -21,7 +21,7 @@ standard upgrade flow. When upgrading to Nomad 0.8.0 from a version lower than 0.7.0, users will need to set the -[`raft_protocol`](/docs/agent/configuration/server.html#raft_protocol) option +[`raft_protocol`](/docs/configuration/server.html#raft_protocol) option in their `server` stanza to 1 in order to maintain backwards compatibility with the old servers during the upgrade. After the servers have been migrated to version 0.8.0, `raft_protocol` can be moved up to 2 and the servers restarted @@ -50,18 +50,18 @@ Raft Protocol versions supported by each Nomad version: -In order to enable all [Autopilot](/guides/autopilot.html) features, all servers +In order to enable all [Autopilot](/guides/operations/autopilot.html) features, all servers in a Nomad cluster must be running with Raft protocol version 3 or later. #### Upgrading to Raft Protocol 3 -This section provides details on upgrading to Raft Protocol 3 in Nomad 0.8 and higher. Raft protocol version 3 requires Nomad running 0.8.0 or newer on all servers in order to work. See [Raft Protocol Version Compatibility](/docs/upgrade/upgrade-specific.html#raft-protocol-version-compatibility) for more details. Also the format of `peers.json` used for outage recovery is different when running with the latest Raft protocol. See [Manual Recovery Using peers.json](/guides/outage.html#manual-recovery-using-peers-json) for a description of the required format. +This section provides details on upgrading to Raft Protocol 3 in Nomad 0.8 and higher. Raft protocol version 3 requires Nomad running 0.8.0 or newer on all servers in order to work. See [Raft Protocol Version Compatibility](/guides/operations/upgrade/upgrade-specific.html#raft-protocol-version-compatibility) for more details. Also the format of `peers.json` used for outage recovery is different when running with the latest Raft protocol. See [Manual Recovery Using peers.json](/guides/operations/outage.html#manual-recovery-using-peers-json) for a description of the required format. Please note that the Raft protocol is different from Nomad's internal protocol as shown in commands like `nomad server members`. To see the version of the Raft protocol in use on each server, use the `nomad operator raft list-peers` command. The easiest way to upgrade servers is to have each server leave the cluster, upgrade its `raft_protocol` version in the `server` stanza, and then add it back. Make sure the new server joins successfully and that the cluster is stable before rolling the upgrade forward to the next server. It's also possible to stand up a new set of servers, and then slowly stand down each of the older servers in a similar fashion. -When using Raft protocol version 3, servers are identified by their `node-id` instead of their IP address when Nomad makes changes to its internal Raft quorum configuration. This means that once a cluster has been upgraded with servers all running Raft protocol version 3, it will no longer allow servers running any older Raft protocol versions to be added. If running a single Nomad server, restarting it in-place will result in that server not being able to elect itself as a leader. To avoid this, either set the Raft protocol back to 2, or use [Manual Recovery Using peers.json](/guides/outage.html#manual-recovery-using-peers-json) to map the server to its node ID in the Raft quorum configuration. 
+When using Raft protocol version 3, servers are identified by their `node-id` instead of their IP address when Nomad makes changes to its internal Raft quorum configuration. This means that once a cluster has been upgraded with servers all running Raft protocol version 3, it will no longer allow servers running any older Raft protocol versions to be added. If running a single Nomad server, restarting it in-place will result in that server not being able to elect itself as a leader. To avoid this, either set the Raft protocol back to 2, or use [Manual Recovery Using peers.json](/guides/operations/outage.html#manual-recovery-using-peers-json) to map the server to its node ID in the Raft quorum configuration. ### Node Draining Improvements The `drain` command now blocks until the drain completes. To get the old behavior, use `nomad node drain -force -detach`. See the [`migrate` stanza documentation][migrate] and [Decommissioning Nodes -guide](/guides/node-draining.html) for details. +guide](/guides/operations/node-draining.html) for details. ### Periods in Environment Variable Names No Longer Escaped @@ -124,7 +124,7 @@ as the old style will be deprecated in future versions of Nomad. ### RPC Advertise Address The behavior of the [advertised RPC -address](/docs/agent/configuration/index.html#rpc-1) has changed to be only used +address](/docs/configuration/index.html#rpc-1) has changed to be only used to advertise the RPC address of servers to client nodes. Server to server communication is done using the advertised Serf address. Existing clusters should not be affected but the advertised RPC address may need to be updated to @@ -149,7 +149,7 @@ If you manually configure `advertise` addresses no changes are necessary. The change to the default advertised IP also affects clients that do not specify which network_interface to use. If you have several routable IPs, it is advised to configure the client's [network -interface](https://www.nomadproject.io/docs/agent/configuration/client.html#network_interface) +interface](/docs/configuration/client.html#network_interface) such that tasks bind to the correct address. ## Nomad 0.5.5 diff --git a/website/source/guides/operations/vault-integration/index.html.md b/website/source/guides/operations/vault-integration/index.html.md index 80df28f66..d4b53278e 100644 --- a/website/source/guides/operations/vault-integration/index.html.md +++ b/website/source/guides/operations/vault-integration/index.html.md @@ -1,9 +1,9 @@ --- -layout: "docs" +layout: "guides" page_title: "Vault Integration" -sidebar_current: "docs-vault-integration" +sidebar_current: "guides-operations-vault-integration" description: |- - Learn how to integrate with HashiCorp Vault and retrieve Vault tokens for + Learn how to integrate Nomad with HashiCorp Vault and retrieve Vault tokens for tasks.
--- @@ -341,8 +341,8 @@ You can see examples of `v1` and `v2` syntax in the [auth]: https://www.vaultproject.io/docs/auth/token.html "Vault Authentication Backend" -[config]: /docs/agent/configuration/vault.html "Nomad Vault Configuration Block" -[createfromrole]: /docs/agent/configuration/vault.html#create_from_role "Nomad vault create_from_role Configuration Flag" +[config]: /docs/configuration/vault.html "Nomad Vault Configuration Block" +[createfromrole]: /docs/configuration/vault.html#create_from_role "Nomad vault create_from_role Configuration Flag" [template]: /docs/job-specification/template.html "Nomad template Job Specification" [vault]: https://www.vaultproject.io/ "Vault by HashiCorp" [vault-spec]: /docs/job-specification/vault.html "Nomad Vault Job Specification"
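As a rough sketch of the agent-side configuration the [config] and [createfromrole] references above point at, a Nomad server might enable the Vault integration as follows; the address and role name are assumed placeholders:

```hcl
# Nomad server agent: enable the Vault integration.
vault {
  enabled          = true
  address          = "https://vault.example.com:8200"

  # Derive per-task tokens from a pre-configured Vault token role.
  create_from_role = "nomad-cluster"

  # The server's own Vault token is typically supplied out-of-band,
  # e.g. via the VAULT_TOKEN environment variable, not committed here.
}
```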