getting started cleanup

Clint Shryock
2015-09-27 19:04:34 -05:00
parent 8cf9782f2e
commit 7f718425cb
4 changed files with 74 additions and 63 deletions

View File

@@ -57,18 +57,19 @@ $ sudo nomad agent -config server.hcl
==> Nomad agent started! Log data will stream in below:
[INFO] serf: EventMemberJoin: nomad.global 127.0.0.1
[INFO] nomad: starting 4 scheduling worker(s) for [service batch _core]
[INFO] raft: Node at 127.0.0.1:4647 [Follower] entering Follower state
[WARN] serf: Failed to re-join any previously known node
[INFO] nomad: adding server nomad.global (Addr: 127.0.0.1:4647) (DC: dc1)
[WARN] raft: Heartbeat timeout reached, starting election
[INFO] raft: Node at 127.0.0.1:4647 [Candidate] entering Candidate state
[DEBUG] raft: Votes needed: 1
[DEBUG] raft: Vote granted. Tally: 1
[INFO] raft: Election won. Tally: 1
[INFO] raft: Node at 127.0.0.1:4647 [Leader] entering Leader state
[INFO] nomad: cluster leadership acquired
2015/09/28 00:13:27 [INFO] serf: EventMemberJoin: nomad.global 127.0.0.1
2015/09/28 00:13:27 [INFO] nomad: starting 4 scheduling worker(s) for [service batch _core]
2015/09/28 00:13:27 [INFO] raft: Node at 127.0.0.1:4647 [Follower] entering Follower state
2015/09/28 00:13:27 [INFO] nomad: adding server nomad.global (Addr: 127.0.0.1:4647) (DC: dc1)
2015/09/28 00:13:28 [WARN] raft: Heartbeat timeout reached, starting election
2015/09/28 00:13:28 [INFO] raft: Node at 127.0.0.1:4647 [Candidate] entering Candidate state
2015/09/28 00:13:28 [DEBUG] raft: Votes needed: 1
2015/09/28 00:13:28 [DEBUG] raft: Vote granted. Tally: 1
2015/09/28 00:13:28 [INFO] raft: Election won. Tally: 1
2015/09/28 00:13:28 [INFO] raft: Node at 127.0.0.1:4647 [Leader] entering Leader state
2015/09/28 00:13:28 [INFO] nomad: cluster leadership acquired
2015/09/28 00:13:28 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2015/09/28 00:13:28 [DEBUG] raft: Node 127.0.0.1:4647 updated peer set (2): [127.0.0.1:4647]
```
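For reference, the `server.hcl` loaded by this command is not part of the hunk; a minimal sketch consistent with the output above (file contents and data directory are assumptions) might look like:

```
# Increase verbosity to match the log output above (assumed)
log_level = "DEBUG"

# Scratch directory for server state (assumed path)
data_dir = "/tmp/server1"

server {
    enabled = true

    # Single-node bootstrap for the demo; use 3 or 5 servers in production
    bootstrap_expect = 1
}
```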
We can see above that client mode is disabled, and that we are
@@ -140,8 +141,8 @@ we should see both nodes in the `ready` state:
```
$ nomad node-status
ID DC Name Class Drain Status
e5239796-7285-3ed2-efe1-37cdc2d459d4 dc1 nomad <none> false ready
d12e4ab0-4206-bd33-ff75-e1367590eceb dc1 nomad <none> false ready
f7780117-2cae-8ee9-4b36-f34dd796ab02 dc1 nomad <none> false ready
ffb5b55a-6059-9ec7-6108-23a2bbba95da dc1 nomad <none> false ready
```
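The client nodes listed here would have been started with client configurations pointing at the server; a sketch of such a configuration (all values assumed, no client config appears in this diff):

```
# Client agent configuration sketch (assumed values)
log_level = "DEBUG"
data_dir  = "/tmp/client1"

client {
    enabled = true

    # Join the local server started earlier
    servers = ["127.0.0.1:4647"]
}

# Avoid port collisions when running several agents on one host
ports {
    http = 5656
}
```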
We now have a simple three node cluster running. The only difference
@@ -158,13 +159,13 @@ Then, use the [`run` command](/docs/commands/run.html) to submit the job:
```
$ nomad run example.nomad
==> Monitoring evaluation "2d742049-497f-c602-c56d-ae2a328a5671"
==> Monitoring evaluation "77e5075f-2a1b-9cce-d14e-fe98cca9e17f"
Evaluation triggered by job "example"
Allocation "44d46439-655d-701e-55ce-552ee74fbbd8" created: node "e5239796-7285-3ed2-efe1-37cdc2d459d4", group "cache"
Allocation "624be24f-5992-0c75-742d-7f8dbd3044a2" created: node "e5239796-7285-3ed2-efe1-37cdc2d459d4", group "cache"
Allocation "a133a2c7-cc3c-2f8c-8664-71d2389c7759" created: node "d12e4ab0-4206-bd33-ff75-e1367590eceb", group "cache"
Allocation "711edd85-f183-99ea-910a-6445b23d79e4" created: node "ffb5b55a-6059-9ec7-6108-23a2bbba95da", group "cache"
Allocation "98218a8a-627c-308f-8941-acdbffe1940c" created: node "f7780117-2cae-8ee9-4b36-f34dd796ab02", group "cache"
Allocation "e8957a7f-6fff-f61f-2878-57715c26725d" created: node "f7780117-2cae-8ee9-4b36-f34dd796ab02", group "cache"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "2d742049-497f-c602-c56d-ae2a328a5671" finished with status "complete"
==> Evaluation "77e5075f-2a1b-9cce-d14e-fe98cca9e17f" finished with status "complete"
```
We can see in the output that the scheduler assigned two of the
@@ -180,17 +181,17 @@ Name = example
Type = service
Priority = 50
Datacenters = dc1
Status =
Status = <none>
==> Evaluations
ID Priority TriggeredBy Status
2d742049-497f-c602-c56d-ae2a328a5671 50 job-register complete
ID Priority TriggeredBy Status
77e5075f-2a1b-9cce-d14e-fe98cca9e17f 50 job-register complete
==> Allocations
ID EvalID NodeID TaskGroup Desired Status
44d46439-655d-701e-55ce-552ee74fbbd8 2d742049-497f-c602-c56d-ae2a328a5671 e5239796-7285-3ed2-efe1-37cdc2d459d4 cache run running
a133a2c7-cc3c-2f8c-8664-71d2389c7759 2d742049-497f-c602-c56d-ae2a328a5671 d12e4ab0-4206-bd33-ff75-e1367590eceb cache run running
624be24f-5992-0c75-742d-7f8dbd3044a2 2d742049-497f-c602-c56d-ae2a328a5671 e5239796-7285-3ed2-efe1-37cdc2d459d4 cache run running
711edd85-f183-99ea-910a-6445b23d79e4 77e5075f-2a1b-9cce-d14e-fe98cca9e17f ffb5b55a-6059-9ec7-6108-23a2bbba95da cache run running
98218a8a-627c-308f-8941-acdbffe1940c 77e5075f-2a1b-9cce-d14e-fe98cca9e17f f7780117-2cae-8ee9-4b36-f34dd796ab02 cache run running
e8957a7f-6fff-f61f-2878-57715c26725d 77e5075f-2a1b-9cce-d14e-fe98cca9e17f f7780117-2cae-8ee9-4b36-f34dd796ab02 cache run running
```
We can see that all our tasks have been allocated and are running.
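Any of these allocations can be inspected in more detail with the `alloc-status` subcommand, passing an ID from the table above (output omitted here):

```
# Drill into a single allocation by ID (ID taken from the status output above)
$ nomad alloc-status 98218a8a-627c-308f-8941-acdbffe1940c
```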

View File

@@ -41,20 +41,25 @@ $ vagrant ssh
...
vagrant@nomad:~$ nomad
usage: nomad [--version] [--help] <command> [<args>]
Available commands are:
agent                  Runs a Nomad agent
agent-info             Display status information about the local agent
alloc-status           Display allocation status information and metadata
client-config          View or modify client configuration details
eval-monitor           Monitor an evaluation interactively
init                   Create an example job file
node-drain             Toggle drain mode on a given node
node-status            Display status information about nodes
run                    Run a new job
run                    Run a new job or update an existing job
server-force-leave     Force a server into the 'left' state
server-join            Join server nodes together
server-members         Display a list of known servers and their status
status                 Display status information about jobs
stop                   Stop a running job
validate               Checks if a given job specification is valid
version                Prints the Nomad version
```

View File

@@ -46,11 +46,11 @@ We can register our example job now:
```
$ nomad run example.nomad
==> Monitoring evaluation "f119efb5-e2fa-a94f-e4cc-0c9f6c2a07f6"
==> Monitoring evaluation "3d823c52-929a-fa8b-c50d-1ac4d00cf6b7"
Evaluation triggered by job "example"
Allocation "c1d2f085-7049-6c4a-4479-1b2310fdaba9" created: node "1f43787c-7ab4-8d10-d2d6-1593ed06463a", group "cache"
Allocation "85b839d7-f67a-72a4-5a13-104020ae4807" created: node "2512929f-5b7c-a959-dfd9-bf8a8eb022a6", group "cache"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "f119efb5-e2fa-a94f-e4cc-0c9f6c2a07f6" finished with status "complete"
==> Evaluation "3d823c52-929a-fa8b-c50d-1ac4d00cf6b7" finished with status "complete"
```
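For reference, the `example.nomad` registered here is the example job file created with `nomad init`; a trimmed sketch of plausible contents (assumed, the file itself is not part of this diff):

```
job "example" {
    # Restrict placement to the dc1 datacenter
    datacenters = ["dc1"]

    # Long-lived service workload
    type = "service"

    group "cache" {
        # A single instance to start with
        count = 1

        task "redis" {
            # Run redis under the Docker driver
            driver = "docker"

            config {
                image = "redis:latest"
            }

            resources {
                cpu    = 500 # MHz
                memory = 256 # MB
            }
        }
    }
}
```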
Anytime a job is updated, Nomad creates an evaluation to determine what
@@ -67,15 +67,15 @@ Name = example
Type = service
Priority = 50
Datacenters = dc1
Status =
Status = <none>
==> Evaluations
ID Priority TriggeredBy Status
f119efb5-e2fa-a94f-e4cc-0c9f6c2a07f6 50 job-register complete
3d823c52-929a-fa8b-c50d-1ac4d00cf6b7 50 job-register complete
==> Allocations
ID EvalID NodeID TaskGroup Desired Status
c1d2f085-7049-6c4a-4479-1b2310fdaba9 f119efb5-e2fa-a94f-e4cc-0c9f6c2a07f6 1f43787c-7ab4-8d10-d2d6-1593ed06463a cache run running
85b839d7-f67a-72a4-5a13-104020ae4807 3d823c52-929a-fa8b-c50d-1ac4d00cf6b7 2512929f-5b7c-a959-dfd9-bf8a8eb022a6 cache run running
```
Here we can see that the evaluation we created has completed, and that
@@ -100,12 +100,13 @@ push the updated version of the job:
```
$ nomad run example.nomad
==> Monitoring evaluation "f358a19c-e451-acf1-a023-91f5b146e1ee"
==> Monitoring evaluation "ec199c63-2022-f5c7-328d-1cf85e61bf66"
Evaluation triggered by job "example"
Allocation "412b58c4-6be3-8ffe-0538-eace7b8a4c08" created: node "1f43787c-7ab4-8d10-d2d6-1593ed06463a", group "cache"
Allocation "7147246f-5ddd-5061-0534-ed28ede2d099" created: node "1f43787c-7ab4-8d10-d2d6-1593ed06463a", group "cache"
Allocation "21551679-5224-cb6b-80a2-d0b091612d2e" created: node "2512929f-5b7c-a959-dfd9-bf8a8eb022a6", group "cache"
Allocation "b1be1410-a01c-20ad-80ff-96750ec0f1da" created: node "2512929f-5b7c-a959-dfd9-bf8a8eb022a6", group "cache"
Allocation "ed32a35d-8086-3f04-e299-4432e562cbf2" created: node "2512929f-5b7c-a959-dfd9-bf8a8eb022a6", group "cache"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "f358a19c-e451-acf1-a023-91f5b146e1ee" finished with status "complete"
==> Evaluation "ec199c63-2022-f5c7-328d-1cf85e61bf66" finished with status "complete"
```
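The additional allocations follow from raising the count on the cache task group before this run; a sketch of the relevant stanza (assumed, the job file is not shown in the hunk):

```
group "cache" {
    # Scale the task group from one instance to three
    count = 3

    # ... task "redis" unchanged ...
}
```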
Because we set the count of the task group to three, Nomad created two
@@ -131,12 +132,13 @@ specification now:
```
$ nomad run example.nomad
==> Monitoring evaluation "f358a19c-e451-acf1-a023-91f5b146e1ee"
==> Monitoring evaluation "d34d37f4-19b1-f4c0-b2da-c949e6ade82d"
Evaluation triggered by job "example"
Allocation "412b58c4-6be3-8ffe-0538-eace7b8a4c08" created: node "1f43787c-7ab4-8d10-d2d6-1593ed06463a", group "cache"
Allocation "7147246f-5ddd-5061-0534-ed28ede2d099" created: node "1f43787c-7ab4-8d10-d2d6-1593ed06463a", group "cache"
Allocation "5614feb0-212d-21e5-ccfb-56a394fc41d5" created: node "2512929f-5b7c-a959-dfd9-bf8a8eb022a6", group "cache"
Allocation "bf7e3ad5-b217-14fe-f3f8-2b83af9dbb42" created: node "2512929f-5b7c-a959-dfd9-bf8a8eb022a6", group "cache"
Allocation "e3978af2-f61e-c601-7aa1-90aea9b23cf6" created: node "2512929f-5b7c-a959-dfd9-bf8a8eb022a6", group "cache"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "f358a19c-e451-acf1-a023-91f5b146e1ee" finished with status "complete"
==> Evaluation "d34d37f4-19b1-f4c0-b2da-c949e6ade82d" finished with status "complete"
```
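The phased rollout described next is typically driven by an `update` stanza in the job (values assumed; the job file is not shown in this hunk):

```
update {
    # Wait 10 seconds between batches of allocation updates
    stagger = "10s"

    # Replace only one allocation at a time
    max_parallel = 1
}
```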
We can see that Nomad handled the update in three phases, each
@@ -151,10 +153,10 @@ is stopping the job. This is done with the [`stop` command](/docs/commands/stop.
```
$ nomad stop example
==> Monitoring evaluation "4b236340-d5ed-1838-be15-a896095d3ac9"
==> Monitoring evaluation "bb407de4-02cb-f009-d986-646d6c11366d"
Evaluation triggered by job "example"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "4b236340-d5ed-1838-be15-a896095d3ac9" finished with status "complete"
==> Evaluation "bb407de4-02cb-f009-d986-646d6c11366d" finished with status "complete"
```
When we stop a job, it creates an evaluation which is used to stop all

View File

@@ -27,37 +27,40 @@ job configurations or prototype interactions. It should _**not**_ be used in
production as it does not persist state.
```
$ sudo nomad agent -dev
vagrant@nomad:~$ sudo nomad agent -dev
==> Starting Nomad agent...
==> Nomad agent configuration:
Atlas: <disabled>
Client: true
Log Level: debug
Log Level: DEBUG
Region: global (DC: dc1)
Server: true
==> Nomad agent started! Log data will stream in below:
[INFO] serf: EventMemberJoin: nomad.global 127.0.0.1
[INFO] nomad: starting 4 scheduling worker(s) for [service batch _core]
[INFO] raft: Node at 127.0.0.1:4647 [Follower] entering Follower state
[INFO] nomad: adding server nomad.global (Addr: 127.0.0.1:4647) (DC: dc1)
[DEBUG] client: applied fingerprints [storage arch cpu host memory]
[DEBUG] client: available drivers [exec docker]
[WARN] raft: Heartbeat timeout reached, starting election
[INFO] raft: Node at 127.0.0.1:4647 [Candidate] entering Candidate state
[DEBUG] raft: Votes needed: 1
[DEBUG] raft: Vote granted. Tally: 1
[INFO] raft: Election won. Tally: 1
[INFO] raft: Node at 127.0.0.1:4647 [Leader] entering Leader state
[INFO] raft: Disabling EnableSingleNode (bootstrap)
[DEBUG] raft: Node 127.0.0.1:4647 updated peer set (2): [127.0.0.1:4647]
[INFO] nomad: cluster leadership acquired
[DEBUG] client: node registration complete
[DEBUG] client: updated allocations at index 1 (0 allocs)
[DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
[DEBUG] client: state updated to ready
2015/09/27 23:51:27 [INFO] serf: EventMemberJoin: nomad.global 127.0.0.1
2015/09/27 23:51:27 [INFO] nomad: starting 4 scheduling worker(s) for [service batch _core]
2015/09/27 23:51:27 [INFO] client: using alloc directory /tmp/NomadClient599911093
2015/09/27 23:51:27 [INFO] raft: Node at 127.0.0.1:4647 [Follower] entering Follower state
2015/09/27 23:51:27 [INFO] nomad: adding server nomad.global (Addr: 127.0.0.1:4647) (DC: dc1)
2015/09/27 23:51:27 [WARN] fingerprint.network: Ethtool not found, checking /sys/net speed file
2015/09/27 23:51:28 [WARN] raft: Heartbeat timeout reached, starting election
2015/09/27 23:51:28 [INFO] raft: Node at 127.0.0.1:4647 [Candidate] entering Candidate state
2015/09/27 23:51:28 [DEBUG] raft: Votes needed: 1
2015/09/27 23:51:28 [DEBUG] raft: Vote granted. Tally: 1
2015/09/27 23:51:28 [INFO] raft: Election won. Tally: 1
2015/09/27 23:51:28 [INFO] raft: Node at 127.0.0.1:4647 [Leader] entering Leader state
2015/09/27 23:51:28 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2015/09/27 23:51:28 [DEBUG] raft: Node 127.0.0.1:4647 updated peer set (2): [127.0.0.1:4647]
2015/09/27 23:51:28 [INFO] nomad: cluster leadership acquired
2015/09/27 23:51:29 [DEBUG] client: applied fingerprints [arch cpu host memory storage network]
2015/09/27 23:51:29 [DEBUG] client: available drivers [docker exec java]
2015/09/27 23:51:29 [DEBUG] client: node registration complete
2015/09/27 23:51:29 [DEBUG] client: updated allocations at index 1 (0 allocs)
2015/09/27 23:51:29 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2015/09/27 23:51:29 [DEBUG] client: state updated to ready
```
As you can see, the Nomad agent has started and has output some log