Stateful Workload guides to learn.hashicorp.com
@@ -50,8 +50,8 @@ Documentation & Guides
* [Installing Nomad for Production](https://www.nomadproject.io/guides/operations/deployment-guide.html)
* [Advanced Job Scheduling on Nomad with Affinities](https://www.nomadproject.io/guides/operating-a-job/advanced-scheduling/affinity.html)
* [Increasing Nomad Fault Tolerance with Spread](https://www.nomadproject.io/guides/operating-a-job/advanced-scheduling/spread.html)
* [Load Balancing on Nomad with Fabio & Consul](https://learn.hashicorp.com/guides/load-balancing/fabio)
* [Deploying Stateful Workloads via Portworx](https://www.nomadproject.io/guides/stateful-workloads/portworx.html)
* [Load Balancing on Nomad with Fabio & Consul](https://learn.hashicorp.com/nomad/load-balancing/fabio)
* [Deploying Stateful Workloads via Portworx](https://learn.hashicorp.com/nomad/stateful-workloads/portworx)
* [Running Apache Spark on Nomad](https://www.nomadproject.io/guides/spark/spark.html)
* [Integrating Vault with Nomad for Secrets Management](https://www.nomadproject.io/guides/operations/vault-integration/index.html)
* [Securing Nomad with TLS](https://www.nomadproject.io/guides/security/securing-nomad.html)

@@ -11,7 +11,7 @@ description: |-

These guides have been migrated to [HashiCorp's Learn website].

You can follow these links to specific guides at Learn:
You can follow these links to find the specific guides on Learn:

- [Fabio](https://learn.hashicorp.com/nomad/load-balancing/fabio)
- [NGINX](https://learn.hashicorp.com/nomad/load-balancing/nginx)
@@ -1,392 +0,0 @@
---
layout: "guides"
page_title: "Stateful Workloads with Nomad Host Volumes"
sidebar_current: "guides-stateful-workloads-host-volumes"
description: |-
  There are multiple approaches to deploying stateful applications in Nomad.
  This guide uses Nomad Host Volumes to deploy a MySQL database.
---

# Stateful Workloads with Nomad Host Volumes

~> **Note:** This guide requires Nomad 0.10.0 or later.

Nomad Host Volumes can manage storage for stateful workloads running inside your
Nomad cluster. This guide walks you through deploying a MySQL workload to a node
containing supporting storage.

Nomad host volumes provide a more workload-agnostic way to specify resources,
available for Nomad drivers like `exec`, `java`, and `docker`. See the
[`host_volume` specification][host_volume spec] for more information about
supported drivers.

Nomad is also aware of host volumes during the scheduling process, enabling it
to make scheduling decisions based on the availability of host volumes on a
specific client.
This can be contrasted with Nomad's support for Docker volumes. Because Docker
volumes are managed outside of Nomad and the Nomad scheduler is not aware of
them, Docker volumes have to either be deployed to all clients or operators have
to use an additional, manually-maintained constraint to inform the scheduler
where they are present.
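For illustration only, a rough sketch of what that constraint-based approach looks like; the `mysql_volume` meta key and its value are made-up names for this example, not something defined in this guide:

```hcl
# Hand-maintained metadata in the client configuration of each node
# that actually has the Docker volume:
client {
  meta {
    "mysql_volume" = "present"
  }
}

# ...and a matching constraint in every job that needs the volume:
job "mysql-server" {
  constraint {
    attribute = "${meta.mysql_volume}"
    value     = "present"
  }
  # ...
}
```

Host volumes remove the need for that bookkeeping because the scheduler already knows which clients expose which volumes.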
## Reference material

- [Nomad `host_volume` specification][host_volume spec]
- [Nomad `volume` specification][volume spec]
- [Nomad `volume_mount` specification][volume_mount spec]

## Estimated time to complete

20 minutes

## Challenge

Deploy a MySQL database that needs to be able to persist data without using
operator-configured Docker volumes.

## Solution

Configure Nomad Host Volumes on a Nomad client node in order to persist data
in the event that the container is restarted.

## Prerequisites

To perform the tasks described in this guide, you need to have a Nomad
environment with Consul installed. You can use this [project][repo] to easily
provision a sandbox environment. This guide will assume a cluster with one
server node and three client nodes.

~> **Please Note:** This guide is for demo purposes and is only using a single
server node. In a production cluster, 3 or 5 server nodes are recommended.
### Prerequisite 1: Install the MySQL client

We will use the MySQL client to connect to our MySQL database and verify our data.
Ensure it is installed on a node with access to port 3306 on your Nomad clients:

Ubuntu:

```bash
sudo apt install mysql-client
```

CentOS:

```bash
sudo yum install mysql
```

macOS via Homebrew:

```bash
brew install mysql-client
```

### Step 1: Create a directory to use as a mount target

On a Nomad client node in your cluster, create a directory that will be used for
persisting the MySQL data. For this example, let's create the directory
`/opt/mysql/data`.

```bash
sudo mkdir -p /opt/mysql/data
```

You might need to change the owner on this folder if the Nomad client does not
run as the `root` user.

```bash
sudo chown «Nomad user» /opt/mysql/data
```

### Step 2: Configure the `mysql` host volume on the client

Edit the Nomad configuration on this Nomad client to create the Host Volume.

Add the following to the `client` stanza of your Nomad configuration:

```hcl
host_volume "mysql" {
  path      = "/opt/mysql/data"
  read_only = false
}
```
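For context, a sketch of how that stanza might sit inside the client's full configuration file; the surrounding `client` settings shown here are illustrative assumptions rather than part of this guide:

```hcl
client {
  enabled = true

  host_volume "mysql" {
    path      = "/opt/mysql/data"
    read_only = false
  }
}
```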
Save this change, and then restart the Nomad service on this client to make the
Host Volume active. While still on the client, you can easily verify that the
host volume is configured by using the `nomad node status` command as shown
below:

```shell
$ nomad node status -short -self
ID           = 12937fa7
Name         = ip-172-31-15-65
Class        = <none>
DC           = dc1
Drain        = false
Eligibility  = eligible
Status       = ready
Host Volumes = mysql
Drivers      = docker,exec,java,mock_driver,raw_exec,rkt
...
```

### Step 3: Create the `mysql.nomad` job file

We are now ready to deploy a MySQL database that can use Nomad Host Volumes for
storage. Create a file called `mysql.nomad` and provide it the following
contents:

```hcl
job "mysql-server" {
  datacenters = ["dc1"]
  type        = "service"

  group "mysql-server" {
    count = 1

    volume "mysql" {
      type      = "host"
      read_only = false
      source    = "mysql"
    }

    restart {
      attempts = 10
      interval = "5m"
      delay    = "25s"
      mode     = "delay"
    }

    task "mysql-server" {
      driver = "docker"

      volume_mount {
        volume      = "mysql"
        destination = "/var/lib/mysql"
        read_only   = false
      }

      env = {
        "MYSQL_ROOT_PASSWORD" = "password"
      }

      config {
        image = "hashicorp/mysql-portworx-demo:latest"

        port_map {
          db = 3306
        }
      }

      resources {
        cpu    = 500
        memory = 1024

        network {
          port "db" {
            static = 3306
          }
        }
      }

      service {
        name = "mysql-server"
        port = "db"

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```

#### Notes about the above job specification

- The service name is `mysql-server` which we will use later to connect to the
  database.
- The `read_only` argument is supplied on all of the volume-related stanzas in
  order to help highlight all of the places you would need to change to make a
  read-only volume mount (see the sketch after this list). Please see the
  [`host_volume`][host_volume spec], [`volume`][volume spec], and
  [`volume_mount`][volume_mount spec] specifications for more details.
- For lower-memory instances, you might need to reduce the requested memory in
the resources stanza to harmonize with available resources in your cluster.
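As a sketch of the read-only variant mentioned above, these are the three places that would change; values other than `read_only` stay exactly as in the job file:

```hcl
# Client configuration: expose the host volume as read-only.
host_volume "mysql" {
  path      = "/opt/mysql/data"
  read_only = true
}

# Job file, group level: request the volume read-only.
volume "mysql" {
  type      = "host"
  read_only = true
  source    = "mysql"
}

# Job file, task level: mount it read-only.
volume_mount {
  volume      = "mysql"
  destination = "/var/lib/mysql"
  read_only   = true
}
```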
### Step 4: Deploy the MySQL database

Register the job file you created in the previous step with the following
command:

```
$ nomad run mysql.nomad
==> Monitoring evaluation "aa478d82"
    Evaluation triggered by job "mysql-server"
    Allocation "6c3b3703" created: node "be8aad4e", group "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "aa478d82" finished with status "complete"
```

Check the status of the allocation and ensure the task is running:

```
$ nomad status mysql-server
ID            = mysql-server
...
Summary
Task Group    Queued  Starting  Running  Failed  Complete  Lost
mysql-server  0       0         1        0       0         0
```

### Step 5: Connect to MySQL

Using the mysql client (installed in [Prerequisite 1]), connect to the database
and access the information:

```
mysql -h mysql-server.service.consul -u web -p -D itemcollection
```

The password for this demo database is `password`.

~> **Please Note:** This guide is for demo purposes and does not follow best
practices for securing database passwords. See [Keeping Passwords
Secure][password-security] for more information.

Consul is installed alongside Nomad in this cluster so we were able to
connect using the `mysql-server` service name we registered with our task in
our job file.
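If you want to confirm that registration before connecting, one quick check is to query Consul's DNS interface directly; the command below assumes a local Consul agent serving DNS on its default port 8600, which is an assumption about the sandbox environment rather than something this guide configures:

```bash
dig @127.0.0.1 -p 8600 mysql-server.service.consul SRV
```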
### Step 6: Add data to MySQL

Once you are connected to the database, verify the table `items` exists:

```
mysql> show tables;
+--------------------------+
| Tables_in_itemcollection |
+--------------------------+
| items                    |
+--------------------------+
1 row in set (0.00 sec)
```

Display the contents of this table with the following command:

```
mysql> select * from items;
+----+----------+
| id | name     |
+----+----------+
|  1 | bike     |
|  2 | baseball |
|  3 | chair    |
+----+----------+
3 rows in set (0.00 sec)
```

Now add some data to this table (after we terminate our database in Nomad and
bring it back up, this data should still be intact):

```
mysql> INSERT INTO items (name) VALUES ('glove');
```

Run the `INSERT INTO` command as many times as you like with different values.

```
mysql> INSERT INTO items (name) VALUES ('hat');
mysql> INSERT INTO items (name) VALUES ('keyboard');
```

Once you are done, type `exit` and return to the Nomad client command line:

```
mysql> exit
Bye
```

### Step 7: Stop and purge the database job

Run the following command to stop and purge the MySQL job from the cluster:

```
$ nomad stop -purge mysql-server
==> Monitoring evaluation "6b784149"
    Evaluation triggered by job "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "6b784149" finished with status "complete"
```

Verify no jobs are running in the cluster:

```
$ nomad status
No running jobs
```

In more advanced cases, the directory backing the host volume could be a mounted
network filesystem, like NFS, or a cluster-aware filesystem, like GlusterFS. This
can enable more complex, automatic failure-recovery scenarios in the event of a
node failure.
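As a rough sketch of the NFS case (the server name and export path below are placeholders, and it assumes the NFS client tooling is already installed and that Nomad runs under systemd):

```bash
# Mount the NFS export at the path backing the host volume, and persist it.
sudo mount -t nfs nfs.example.internal:/exports/mysql /opt/mysql/data
echo 'nfs.example.internal:/exports/mysql /opt/mysql/data nfs defaults 0 0' | sudo tee -a /etc/fstab

# The host_volume "mysql" stanza keeps pointing at /opt/mysql/data; restart the
# Nomad client so the node is fingerprinted with the volume again.
sudo systemctl restart nomad
```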
### Step 8: Re-deploy the database

Using the `mysql.nomad` job file from [Step 3], re-deploy the database to the
Nomad cluster.

```
==> Monitoring evaluation "61b4f648"
    Evaluation triggered by job "mysql-server"
    Allocation "8e1324d2" created: node "be8aad4e", group "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "61b4f648" finished with status "complete"
```

### Step 9: Verify data

Once you re-connect to MySQL, you should be able to see that the information you
added prior to destroying the database is still present:

```
mysql> select * from items;
+----+----------+
| id | name     |
+----+----------+
|  1 | bike     |
|  2 | baseball |
|  3 | chair    |
|  4 | glove    |
|  5 | hat      |
|  6 | keyboard |
+----+----------+
6 rows in set (0.00 sec)
```
### Step 10: Tidying up

Once you have completed this guide, you should perform the following cleanup
steps (a command sketch follows the list):

- Stop and purge the `mysql-server` job.

- Remove the `host_volume "mysql"` stanza from your Nomad client configuration
  and restart the Nomad service on that client.

- Remove the /opt/mysql/data folder and as much of the directory tree as you
no longer require.
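Roughly, those cleanup steps map to the following commands; this is a sketch that assumes the paths used earlier in this guide and a systemd-managed Nomad client:

```bash
# Stop and purge the job.
nomad stop -purge mysql-server

# After deleting the host_volume "mysql" stanza from the client configuration,
# restart the Nomad client so the change takes effect.
sudo systemctl restart nomad

# Remove the directory created for this guide.
sudo rm -rf /opt/mysql
```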
[Prerequisite 1]: #prerequisite-1-install-the-mysql-client
[Step 3]: #step-3-create-the-mysql-nomad-job-file
[host_volume spec]: /docs/configuration/client.html#host_volume-stanza
[volume spec]: /docs/job-specification/volume.html
[volume_mount spec]: /docs/job-specification/volume_mount.html
[password-security]: https://dev.mysql.com/doc/refman/8.0/en/password-security.html
[repo]: https://github.com/hashicorp/nomad/tree/master/terraform#provision-a-nomad-cluster-in-the-cloud
@@ -1,454 +0,0 @@
---
layout: "guides"
page_title: "Stateful Workloads with Portworx"
sidebar_current: "guides-stateful-workloads-portworx"
description: |-
  There are multiple approaches to deploying stateful applications in Nomad.
  This guide uses Portworx to deploy a MySQL database.
---

# Stateful Workloads with Portworx

Portworx integrates with Nomad and can manage storage for stateful workloads
running inside your Nomad cluster. This guide walks you through deploying an HA
MySQL workload.

## Reference Material

- [Portworx on Nomad][portworx-nomad]

## Estimated Time to Complete

20 minutes

## Challenge

Deploy an HA MySQL database with a replication factor of 3, ensuring the data
will be replicated on 3 different client nodes.

## Solution

Configure Portworx on each Nomad client node in order to create a storage pool
that the MySQL task can use for storage and replication.

## Prerequisites

To perform the tasks described in this guide, you need to have a Nomad
environment with Consul installed. You can use this [repo][repo] to easily
provision a sandbox environment. This guide will assume a cluster with one
server node and three client nodes.

~> **Please Note:** This guide is for demo purposes and is only using a single
server node. In a production cluster, 3 or 5 server nodes are recommended.

## Steps

### Step 1: Ensure Block Device Requirements

* Portworx needs an unformatted and unmounted block device that it can fully
  manage. If you have provisioned a Nomad cluster in AWS using the environment
  provided in this guide, you already have an external block device ready to use
  (`/dev/xvdd`) with a capacity of 50 GB.

* Ensure your root volume's size is at least 20 GB. If you are using the
  environment provided in this guide, add the following line to your
  `terraform.tfvars` file:

```
root_block_device_size = 20
```

### Step 2: Install the MySQL client

We will use the MySQL client to connect to our MySQL database and verify our data.
Ensure it is installed on each client node:

```
$ sudo apt install mysql-client
```

### Step 3: Set up the PX-OCI Bundle

Run the following command on each client node to set up the [PX-OCI][px-oci]
bundle:

```
sudo docker run --entrypoint /runc-entry-point.sh \
    --rm -i --privileged=true \
    -v /opt/pwx:/opt/pwx -v /etc/pwx:/etc/pwx \
    portworx/px-enterprise:2.0.2.3
```
If the command is successful, you will see output similar to the output shown
below (the output has been abbreviated):
```
Unable to find image 'portworx/px-enterprise:2.0.2.3' locally
2.0.2.3: Pulling from portworx/px-enterprise
...
Status: Downloaded newer image for portworx/px-enterprise:2.0.2.3
Executing with arguments:
INFO: Copying binaries...
INFO: Copying rootfs...
Total bytes written: 2303375360 (2.2GiB, 48MiB/s)
INFO: Done copying OCI content.
You can now run the Portworx OCI bundle by executing one of the following:

  # sudo /opt/pwx/bin/px-runc run [options]
  # sudo /opt/pwx/bin/px-runc install [options]
...
```
### Step 4: Configure Portworx OCI Bundle

Configure the Portworx OCI bundle on each client node by running the following
command (the values provided to the options will be different for your
environment):

```
$ sudo /opt/pwx/bin/px-runc install -k consul://172.31.49.111:8500 \
    -c my_test_cluster -s /dev/xvdd
```
* You can use the client node you are on with the `-k` option since Consul is
  installed alongside Nomad.

* Be sure to provide the `-s` option with your external block device path.
If the configuration is successful, you will see the following output
(abbreviated):

```
INFO[0000] Rootfs found at /opt/pwx/oci/rootfs
INFO[0000] PX binaries found at /opt/pwx/bin/px-runc
INFO[0000] Initializing as version 2.0.2.3-c186a87 (OCI)
...
INFO[0000] Successfully written /etc/systemd/system/portworx.socket
INFO[0000] Successfully written /etc/systemd/system/portworx-output.service
INFO[0000] Successfully written /etc/systemd/system/portworx.service
```

Since we have created new unit files, please run the following command to reload
the systemd manager configuration:

```
sudo systemctl daemon-reload
```

### Step 5: Start Portworx and Check Status

Run the following command to start Portworx:

```
$ sudo systemctl start portworx
```
Verify the service:

```
$ sudo systemctl status portworx
● portworx.service - Portworx OCI Container
   Loaded: loaded (/etc/systemd/system/portworx.service; disabled; vendor preset
   Active: active (running) since Wed 2019-03-06 15:16:51 UTC; 1h 47min ago
     Docs: https://docs.portworx.com/runc
  Process: 28230 ExecStartPre=/bin/sh -c /opt/pwx/bin/runc delete -f portworx ||
 Main PID: 28238 (runc)
...
```
Wait a few moments (Portworx may still be initializing) and then check the
status of Portworx using the `pxctl` command.

```
$ pxctl status
```

If everything is working properly, you should see the following output:

```
Status: PX is operational
License: Trial (expires in 31 days)
Node ID: 07113eef-0533-4de8-b1cf-4471c18a7cda
        IP: 172.31.53.231
        Local Storage Pool: 1 pool
        POOL  IO_PRIORITY  RAID_LEVEL  USABLE  USED     STATUS  ZONE        REGION
        0     LOW          raid0       50 GiB  4.4 GiB  Online  us-east-1c  us-east-1
        Local Storage Devices: 1 device
```
Once all nodes are configured, you should see a cluster summary with the total
capacity of the storage pool (if you're using the environment provided in this
guide, the total capacity will be 150 GB since the external block device
attached to each client node has a capacity of 50 GB):
```
Cluster Summary
        Cluster ID: my_test_cluster
        Cluster UUID: 705a1cbd-4d58-4a0e-a970-1e6b28375590
        Scheduler: none
        Nodes: 3 node(s) with storage (3 online)
...
Global Storage Pool
        Total Used      : 13 GiB
        Total Capacity  : 150 GiB
```

### Step 6: Create a Portworx Volume

Run the following command to create a Portworx volume that our job will be able
to use:

```
$ pxctl volume create -s 10 -r 3 mysql
```
You should see output similar to what is shown below:

```
Volume successfully created: 693373920899724151
```

* Please note from the options provided that the name of the volume we created
  is `mysql` and the size is 10 GB.

* We have configured a replication factor of 3 which ensures our data is
  available on all 3 client nodes.

Run `pxctl volume inspect mysql` to verify the status of the volume:

```
$ pxctl volume inspect mysql
Volume  : 693373920899724151
        Name               : mysql
        Size               : 10 GiB
        Format             : ext4
        HA                 : 3
        ...
        Replica sets on nodes:
                Set 0
                        Node : 172.31.58.210 (Pool 0)
                        Node : 172.31.51.110 (Pool 0)
                        Node : 172.31.48.98 (Pool 0)
        Replication Status : Up
```

### Step 7: Create the `mysql.nomad` Job File

We are now ready to deploy a MySQL database that can use Portworx for storage.
Create a file called `mysql.nomad` and provide it the following contents:
```
job "mysql-server" {
  datacenters = ["dc1"]
  type        = "service"

  group "mysql-server" {
    count = 1

    restart {
      attempts = 10
      interval = "5m"
      delay    = "25s"
      mode     = "delay"
    }

    task "mysql-server" {
      driver = "docker"

      env = {
        "MYSQL_ROOT_PASSWORD" = "password"
      }

      config {
        image = "hashicorp/mysql-portworx-demo:latest"

        port_map {
          db = 3306
        }

        volumes = [
          "mysql:/var/lib/mysql"
        ]

        volume_driver = "pxd"
      }

      resources {
        cpu    = 500
        memory = 1024

        network {
          port "db" {
            static = 3306
          }
        }
      }

      service {
        name = "mysql-server"
        port = "db"

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```
* Please note from the job file that we are using the `pxd` volume driver that
  was configured in the previous steps.

* The service name is `mysql-server` which we will use later to connect to the
  database.
### Step 8: Deploy the MySQL Database

Register the job file you created in the previous step with the following
command:

```
$ nomad run mysql.nomad
==> Monitoring evaluation "aa478d82"
    Evaluation triggered by job "mysql-server"
    Allocation "6c3b3703" created: node "be8aad4e", group "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "aa478d82" finished with status "complete"
```

Check the status of the allocation and ensure the task is running:

```
$ nomad status mysql-server
ID            = mysql-server
...
Summary
Task Group    Queued  Starting  Running  Failed  Complete  Lost
mysql-server  0       0         1        0       0         0
```

### Step 9: Connect to MySQL

Using the mysql client (installed in [Step
2](#step-2-install-the-mysql-client)), connect to the database and access the
information:

```
mysql -h mysql-server.service.consul -u web -p -D itemcollection
```
The password for this demo database is `password`.

~> **Please Note:** This guide is for demo purposes and does not follow best
practices for securing database passwords. See [Keeping Passwords
Secure][password-security] for more information.

Consul is installed alongside Nomad in this cluster so we were able to
connect using the `mysql-server` service name we registered with our task in
our job file.
### Step 10: Add Data to MySQL

Once you are connected to the database, verify the table `items` exists:

```
mysql> show tables;
+--------------------------+
| Tables_in_itemcollection |
+--------------------------+
| items                    |
+--------------------------+
1 row in set (0.00 sec)
```

Display the contents of this table with the following command:

```
mysql> select * from items;
+----+----------+
| id | name     |
+----+----------+
|  1 | bike     |
|  2 | baseball |
|  3 | chair    |
+----+----------+
3 rows in set (0.00 sec)
```

Now add some data to this table (after we terminate our database in Nomad and
bring it back up, this data should still be intact):

```
mysql> INSERT INTO items (name) VALUES ('glove');
```

Run the `INSERT INTO` command as many times as you like with different values.

```
mysql> INSERT INTO items (name) VALUES ('hat');
mysql> INSERT INTO items (name) VALUES ('keyboard');
```
Once you are done, type `exit` and return to the Nomad client command line:

```
mysql> exit
Bye
```

### Step 11: Stop and Purge the Database Job

Run the following command to stop and purge the MySQL job from the cluster:

```
$ nomad stop -purge mysql-server
==> Monitoring evaluation "6b784149"
    Evaluation triggered by job "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "6b784149" finished with status "complete"
```

Verify no jobs are running in the cluster:

```
$ nomad status
No running jobs
```
You can optionally stop the nomad service on whichever node you are on and move
to another node to simulate a node failure.
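For example, assuming the clients run Nomad under systemd (an assumption about the sandbox environment, not something this guide states):

```bash
# On the current client: stop Nomad to simulate losing this node.
sudo systemctl stop nomad

# Then SSH to another client node and continue from there.
```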
### Step 12: Re-deploy the Database
Using the `mysql.nomad` job file from [Step
7](#step-7-create-the-mysql-nomad-job-file), re-deploy the database to the Nomad
cluster.
```
==> Monitoring evaluation "61b4f648"
    Evaluation triggered by job "mysql-server"
    Allocation "8e1324d2" created: node "be8aad4e", group "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "61b4f648" finished with status "complete"
```

### Step 13: Verify Data

Once you re-connect to MySQL, you should be able to see that the information you
added prior to destroying the database is still present:

```
mysql> select * from items;
+----+----------+
| id | name     |
+----+----------+
|  1 | bike     |
|  2 | baseball |
|  3 | chair    |
|  4 | glove    |
|  5 | hat      |
|  6 | keyboard |
+----+----------+
6 rows in set (0.00 sec)
```

[password-security]: https://dev.mysql.com/doc/refman/8.0/en/password-security.html
[portworx-nomad]: https://docs.portworx.com/install-with-other/nomad
[px-oci]: https://docs.portworx.com/install-with-other/docker/standalone/#why-oci
[repo]: https://github.com/hashicorp/nomad/tree/master/terraform#provision-a-nomad-cluster-in-the-cloud
@@ -9,27 +9,11 @@ description: |-

# Stateful Workloads

Nomad allows a user to mount persistent data from local or remote storage volumes
into task environments in a couple of ways — host volume mounts or Docker Volume
drivers.
These guides have been migrated to [HashiCorp's Learn website].

Nomad host volumes allow you to mount any directory on the Nomad client into an
allocation. These mounts can then be connected to individual tasks within a task
group.
You can follow these links to find the specific guides on Learn:

The Docker task driver's support for [volumes][docker-volumes] enables Nomad to
integrate with software-defined storage (SDS) solutions like
[Portworx][portworx] to support stateful workloads. Please keep in mind that
Nomad does not actually manage storage pools or replication as these tasks are
delegated to the SDS providers. Please assess all factors and risks when
utilizing such providers to run stateful workloads (such as your production
database).
- [Host Volumes](https://learn.hashicorp.com/nomad/stateful-workloads/host-volumes)
- [Portworx](https://learn.hashicorp.com/nomad/stateful-workloads/portworx)
Please refer to the specific documentation links below or in the sidebar for
more detailed information about using specific storage integrations.

- [Host Volumes](/guides/stateful-workloads/host-volumes.html)
- [Portworx](/guides/stateful-workloads/portworx.html)

[docker-volumes]: /docs/drivers/docker.html#volumes
[portworx]: https://docs.portworx.com/install-with-other/nomad
[HashiCorp's Learn website]: https://learn.hashicorp.com/nomad?track=stateful-workloads#stateful-workloads
@@ -210,14 +210,6 @@

<li<%= sidebar_current("guides-stateful-workloads") %>>
  <a href="/guides/stateful-workloads/stateful-workloads.html">Stateful Workloads</a>
  <ul class="nav">
    <li<%= sidebar_current("guides-stateful-workloads-host-volumes") %>>
      <a href="/guides/stateful-workloads/host-volumes.html">Host Volumes</a>
    </li>
    <li<%= sidebar_current("guides-stateful-workloads-portworx") %>>
      <a href="/guides/stateful-workloads/portworx.html">Portworx</a>
    </li>
  </ul>
</li>

<li<%= sidebar_current("guides-analytical-workloads") %>>
@@ -49,6 +49,9 @@
/guides/load-balancing/haproxy.html https://learn.hashicorp.com/nomad/load-balancing/haproxy
/guides/load-balancing/traefik.html https://learn.hashicorp.com/nomad/load-balancing/traefik

/guides/stateful-workloads/host-volumes.html https://learn.hashicorp.com/nomad/stateful-workloads/host-volumes
/guides/stateful-workloads/portworx.html https://learn.hashicorp.com/nomad/stateful-workloads/portworx

# Website
/community.html /resources.html