Commit Graph

14677 Commits

Lang Martin
ae7a8da18c agent/config, config/* mapstructure tags -> hcl tags 2019-04-30 10:29:14 -04:00
Lang Martin
bac0d5f0ed config_parse add new ParseConfigFileDirectHCL
- parse by using hcl.Decode directly
- handle time.Duration strings in a second pass
- report unexpected keys in a third pass
2019-04-30 10:29:14 -04:00
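A minimal sketch of the three-pass approach described above, assuming a hypothetical config struct with a single duration field; the details of Nomad's real parser differ.

```go
package config

import (
	"fmt"
	"time"

	"github.com/hashicorp/hcl"
)

// Config is a hypothetical stand-in for the agent config struct.
type Config struct {
	GCIntervalHCL string        `hcl:"gc_interval"`
	GCInterval    time.Duration `hcl:"-"`
}

func parseConfig(src string) (*Config, error) {
	var c Config
	// Pass 1: decode straight into the struct using the hcl tags.
	if err := hcl.Decode(&c, src); err != nil {
		return nil, err
	}
	// Pass 2: convert duration strings into time.Duration values.
	if c.GCIntervalHCL != "" {
		d, err := time.ParseDuration(c.GCIntervalHCL)
		if err != nil {
			return nil, fmt.Errorf("invalid gc_interval: %v", err)
		}
		c.GCInterval = d
	}
	// Pass 3 (omitted): walk the HCL AST and report any keys that
	// did not map to a known field.
	return &c, nil
}
```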
Lang Martin
35aa070189 update vendor/github.com/hashicorp/hcl 2019-04-23 11:56:07 -04:00
Michael Schurter
987ed01b11 Merge pull request #5599 from hashicorp/docs-091rc1
docs: add download link to 0.9.1-rc1
2019-04-23 08:54:41 -07:00
Michael Schurter
c2ffc69a41 docs: add download link to 0.9.1-rc1 2019-04-23 08:47:21 -07:00
Nick Ethier
cfad24c976 website: add plugin docs (#5501)
website: add plugin docs
2019-04-23 11:22:08 -04:00
Nick Ethier
ff9b3e370a website: fix a few errors in new plugin docs 2019-04-23 11:15:26 -04:00
Mahmood Ali
c76130c052 Merge pull request #5598 from hashicorp/b-dont-forward-logs
fix crash when executor parent nomad process dies
2019-04-23 10:15:30 -04:00
Mahmood Ali
c07c0c810f fix crash when executor parent nomad process dies
Fixes https://github.com/hashicorp/nomad/issues/5593

The executor seems to die unexpectedly after the nomad agent dies or is
restarted.  The crash seems to occur at the first log message after
the nomad agent dies.

To ease debugging we forward executor log messages to executor.log as
well as to Stderr.  `go-plugin` sets up plugins with Stderr pointing to
a pipe read by the plugin client, the nomad agent in our case[1].
When the nomad agent dies, the pipe is closed, and any subsequent
executor logs fail with ErrClosedPipe and a SIGPIPE signal.  SIGPIPE
causes the executor process to die.

I considered adding a handler to ignore SIGPIPE, but the hc-log library
currently panics when a logging write operation fails[2].

Thus we opt to revert to the v0.8 behavior of exclusively writing logs to
executor.log while we investigate alternative options.

[1] https://github.com/hashicorp/nomad/blob/v0.9.0/vendor/github.com/hashicorp/go-plugin/client.go#L528-L535
[2] https://github.com/hashicorp/nomad/blob/v0.9.0/vendor/github.com/hashicorp/go-hclog/int.go#L320-L323
2019-04-23 09:52:46 -04:00
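A hedged sketch of the v0.8-style behavior this commit reverts to: route executor logs to executor.log only, never to the Stderr pipe owned by go-plugin. The function name and file mode here are illustrative.

```go
package executor

import (
	"os"
	"path/filepath"

	hclog "github.com/hashicorp/go-hclog"
)

// newExecutorLogger writes logs exclusively to executor.log; avoiding the
// go-plugin Stderr pipe sidesteps ErrClosedPipe/SIGPIPE when the agent dies.
func newExecutorLogger(dir string) (hclog.Logger, error) {
	f, err := os.OpenFile(filepath.Join(dir, "executor.log"),
		os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)
	if err != nil {
		return nil, err
	}
	return hclog.New(&hclog.LoggerOptions{
		Name:   "executor",
		Output: f, // file only; no MultiWriter with os.Stderr
	}), nil
}
```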
Danielle Lancashire
04d4d86e16 changelog: Update for GH-5512 and GH-5577 2019-04-23 13:12:08 +02:00
Danielle
9a4fe5e98f Merge pull request #5512 from hashicorp/dani/f-alloc-stop
alloc-lifecycle: nomad alloc stop
2019-04-23 13:05:08 +02:00
Danielle Lancashire
bb142af5d6 allocs: Add nomad alloc stop
This adds a `nomad alloc stop` command that can be used to stop and
force migrate an allocation to a different node.

This is built on top of the AllocUpdateDesiredTransitionRequest and
explicitly limits the scope of access to that transition to expose it
under the alloc-lifecycle ACL.

The API returns the follow-up eval, which can be used for monitoring in
the CLI or parsed and used by an external tool.
2019-04-23 12:50:23 +02:00
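A simplified sketch of the desired-transition request behind `nomad alloc stop`; the struct shapes below are illustrative stand-ins for Nomad's internal types, not the actual API.

```go
package main

import "fmt"

// DesiredTransition marks an allocation for migration off its node.
type DesiredTransition struct {
	Migrate *bool
}

// AllocUpdateDesiredTransitionRequest maps alloc IDs to requested
// transitions; the server applies them and returns a follow-up eval
// that the CLI can monitor.
type AllocUpdateDesiredTransitionRequest struct {
	Allocs map[string]*DesiredTransition
}

func stopAllocRequest(allocID string) *AllocUpdateDesiredTransitionRequest {
	migrate := true
	return &AllocUpdateDesiredTransitionRequest{
		Allocs: map[string]*DesiredTransition{
			allocID: {Migrate: &migrate},
		},
	}
}

func main() {
	req := stopAllocRequest("8a3025c0")
	fmt.Printf("requesting migration for %d alloc(s)\n", len(req.Allocs))
}
```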
Chris Baker
09c998a4a1 Merge pull request #5591 from hashicorp/cgbaker/changelog
changelog: added entry for #5540 fix
2019-04-22 15:31:22 -04:00
Michael Schurter
95bc6fe301 Merge pull request #5586 from hashicorp/docs-deploy-ver
docs: bump deployment guide to 0.9.0
2019-04-22 12:29:22 -07:00
Chris Baker
184e171e11 changelog: added entry for #5540 fix 2019-04-22 19:27:40 +00:00
Chris Baker
7b4ac71d2f Merge pull request #5541 from hashicorp/b/5540-bad-client-alloc-metrics
client/metrics: fixed stale metrics
2019-04-22 15:07:30 -04:00
Mahmood Ali
151e0ae772 Merge pull request #5577 from hashicorp/dani/b-logmon-unrecoverable
logging: Attempt to recover logmon failures
2019-04-22 14:40:24 -04:00
Michael Schurter
0f91277d85 tweak logging level for failed log line
Co-Authored-By: notnoop <mahmood@notnoop.com>
2019-04-22 14:40:17 -04:00
Chris Baker
7d8fa4c045 client/metrics: modified metrics to use the (updated) client copy of the allocation instead of the (stale) server copy 2019-04-22 18:31:45 +00:00
Michael Schurter
a3e8f51643 docs: bump deployment guide to 0.9.0 2019-04-19 12:39:38 -07:00
Michael Schurter
8a0df4034d Merge pull request #5583 from ygersie/fingerprint_nilpointer
fix nil pointer in fingerprinting AWS env leading to crash
2019-04-19 08:08:59 -07:00
Mahmood Ali
54e1e0760b Merge pull request #5437 from hashicorp/r-upstream-libcontainer-plain
Use upstream libcontainer package
2019-04-19 10:15:13 -04:00
Mahmood Ali
6747195682 comment on using init() for libcontainer handling 2019-04-19 09:49:04 -04:00
Mahmood Ali
9bf54eae97 comment what refer to 2019-04-19 09:49:04 -04:00
Mahmood Ali
b6af5c9dca Move libcontainer helper to executor package 2019-04-19 09:49:04 -04:00
Mahmood Ali
0088f40fd4 vendor upstream opencontainers/runc 2019-04-19 09:49:04 -04:00
Mahmood Ali
9050f5f611 Merge pull request #5585 from hashicorp/b-drivers-node-registration
client: wait for batched driver updates before registering nodes
2019-04-19 09:47:21 -04:00
Mahmood Ali
8041b0cbe2 clarify cryptic log line 2019-04-19 09:31:43 -04:00
Mahmood Ali
9a2f46f332 client: log detected driver health state
Noticed that the `detected drivers` log line was misleading - when a driver
doesn't fingerprint before the timeout, its health status is the empty
string `""`, which we would mark as detected.

Now, we log all drivers along with their state to ease driver
fingerprint debugging.
2019-04-19 09:15:25 -04:00
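A small sketch of the corrected check described above, with illustrative names: an empty health string means the driver never fingerprinted, so it must not be counted as detected.

```go
package client

import "fmt"

// detectedDrivers logs every driver with its health state (to ease
// fingerprint debugging) and only counts non-empty states as detected.
func detectedDrivers(health map[string]string) []string {
	var detected []string
	for name, state := range health {
		fmt.Printf("driver=%s health=%q\n", name, state)
		if state != "" {
			detected = append(detected, name)
		}
	}
	return detected
}
```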
Mahmood Ali
9dcebcd8a3 client: avoid registering node twice right away
I noticed that `watchNodeUpdates()` calls `retryRegisterNode()` almost
immediately after `registerAndHeartbeat()` does, about 5 seconds later.

This call is unnecessary and makes debugging a bit harder.  So here, we
ensure that we only re-register the node for new node events, not for the
initial registration.
2019-04-19 09:12:50 -04:00
Preetha
92a4033a1a Update CHANGELOG.md 2019-04-19 08:02:48 -05:00
Mahmood Ali
7a68d76160 client: wait for batched driver updates
Here we retain the 0.8.7 behavior of waiting for driver fingerprints before
registering a node, with some timeout.  This is needed for system jobs,
as system job scheduling for a node occurs at node registration, and the
race might mean that a system job may not get placed on the node because
of missing drivers.

The timeout isn't strictly necessary, but we raise it to 1 minute, as that
is closer to blocking indefinitely than 1 second is.  The value needs to be
high enough to capture as many drivers/devices as possible, but low enough
that it doesn't risk blocking too long due to a misbehaving plugin.

Fixes https://github.com/hashicorp/nomad/issues/5579
2019-04-19 09:00:24 -04:00
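A sketch of the registration gate described here, assuming a hypothetical channel that closes once batched driver updates have been applied.

```go
package client

import "time"

// waitForFingerprints blocks node registration on driver fingerprinting,
// bounded by a timeout so a misbehaving plugin can't stall it forever.
func waitForFingerprints(fingerprinted <-chan struct{}, register func()) {
	select {
	case <-fingerprinted:
		// All batched driver/device updates arrived before the deadline.
	case <-time.After(time.Minute):
		// Closer to "indefinitely blocked" than 1s, but still bounded.
	}
	register()
}
```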
Yorick Gersie
77a8fda87c fix nil pointer in fingerprinting AWS env leading to crash
The HTTP client returns a nil response if an error has occurred. We first
need to check for an error before we can check the HTTP response code.
2019-04-19 11:07:13 +02:00
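The crash pattern this fixes is a common Go mistake: on error, `http.Client` returns a nil `*http.Response`, so the error must be inspected before the status code. A sketch, with illustrative function name and URL:

```go
package fingerprint

import "net/http"

// isAWSEnv checks the error before touching the response; a failed
// request yields a nil *http.Response, and reading resp.StatusCode
// from it would panic.
func isAWSEnv(client *http.Client) bool {
	resp, err := client.Get("http://169.254.169.254/latest/meta-data/")
	if err != nil {
		return false // resp is nil here
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}
```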
Preetha
83a2e693b7 Merge pull request #5580 from hashicorp/f-api-preemption-info
Add preemption related fields to AllocationListStub
2019-04-18 18:38:25 -07:00
Preetha Appan
ad77c18c87 Add preemption related fields to AllocationListStub 2019-04-18 10:36:44 -05:00
Danielle
11388ab992 Merge pull request #5572 from hashicorp/dani/b-docker-volumes
Switch to pre-0.9 behaviour for handling volumes
2019-04-18 15:48:23 +02:00
Danielle
4789948ba8 Merge pull request #5573 from hashicorp/dani/update-vol-docs
docs: Clarify docker volume behaviour
2019-04-18 14:30:16 +02:00
Danielle Lancashire
ccce364cbd Switch to pre-0.9 behaviour for handling volumes
In Nomad 0.9, we made volume driver handling the same for `""` and
`"local"` volumes. Prior to Nomad 0.9, however, these had slightly
different behaviour for relative paths and named volumes.

Prior to 0.9, the empty string would expand relative paths within the task
dir, and `"local"` volumes that are not absolute paths would be treated
as docker named volumes.

This commit reverts to the previous behaviour as follows:

| Nomad Version | Driver  | Volume Spec      | Behaviour                 |
|---------------|---------|------------------|---------------------------|
| all           | ""      | testing:/testing | allocdir/testing          |
| 0.8.7         | "local" | testing:/testing | "testing" as named volume |
| 0.9.0         | "local" | testing:/testing | allocdir/testing          |
| 0.9.1         | "local" | testing:/testing | "testing" as named volume |
2019-04-18 14:28:45 +02:00
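A sketch of the restored decision logic from the table above (simplified; function and parameter names are illustrative):

```go
package docker

import "path/filepath"

// hostVolumeSource reproduces the pre-0.9 behaviour: absolute paths are
// used as-is, an empty driver expands relative paths inside the task
// dir, and a "local" driver treats a relative source as a named volume.
func hostVolumeSource(taskDir, driver, source string) string {
	if filepath.IsAbs(source) {
		return source
	}
	if driver == "" {
		return filepath.Join(taskDir, source)
	}
	return source // e.g. "testing" becomes a docker named volume
}
```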
Danielle Lancashire
269e2c00fb logging: Attempt to recover logmon failures
Currently, when logmon fails to reattach, we will retry reattachment to
the same pid until the task restart specification is exhausted.

Because we cannot clear hook state during error conditions, it is not
possible for us to signal to a future restart that it _shouldn't_
attempt to reattach to the plugin.

Here we revert to explicitly detecting reattachment separately from the
launch of a new logmon, so we can recover from scenarios where a logmon
plugin has failed.

This is a net improvement over the current hard-failure situation, as it
means that in the most common case (the pid has gone away), we can recover.

Other reattachment failure modes, where the plugin may still be running,
could potentially cause a duplicate process or a subsequent failure to
launch a new plugin.

If there was a duplicate process, it could potentially cause duplicate
logging. This is better than a production workload outage.

If there was a subsequent failure to launch a new plugin, it would fail
in the same way (retry until restarts are exhausted) as the current
failure mode.
2019-04-18 13:41:56 +02:00
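A hedged sketch of the recovery flow: try to reattach to an existing logmon plugin, and fall back to launching a fresh one if reattachment fails. A real `plugin.ClientConfig` also needs handshake and plugin maps; `launchNewLogmon` is a hypothetical helper.

```go
package logmon

import plugin "github.com/hashicorp/go-plugin"

// launchNewLogmon stands in for starting a fresh logmon plugin process.
func launchNewLogmon() (*plugin.Client, error) { return nil, nil }

// logmonClient prefers reattachment, but recovers from the common case
// where the old pid has gone away by launching a new plugin instead of
// retrying reattachment until restarts are exhausted.
func logmonClient(reattach *plugin.ReattachConfig) (*plugin.Client, error) {
	if reattach != nil {
		c := plugin.NewClient(&plugin.ClientConfig{Reattach: reattach})
		if _, err := c.Client(); err == nil {
			return c, nil // reattached to the running plugin
		}
	}
	return launchNewLogmon()
}
```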
Chris Baker
15c64875d1 Merge pull request #5559 from ArangoGutierrez/website_docs_singularity
list singularity as a community driver
2019-04-17 12:42:29 -04:00
Charlie Voiselle
4a0da839a9 fixed header level 2019-04-17 10:12:43 -04:00
Danielle Lancashire
acf8ab8665 docs: Clarify docker volume behaviour 2019-04-17 11:31:55 +02:00
Mahmood Ali
c07b72959d Merge pull request #5568 from hashicorp/b-nomad-logger-restart
Fixes #5566 .

Fix a case where the docker logging process may lock up nomad agent restart.

It looks like we have a case where the docker logger is started even though logmon isn't. In such a case, the fifo writer blocks indefinitely, and because the open operation happens in the main goroutine, the nomad agent blocks indefinitely.

This fixes the issue by having the fifo open operation happen in a goroutine instead of the main goroutine.

We should follow up independently to ensure logmon <-> dockerlogger ordering, and consider having task recovery happen in a non-main goroutine with some sensible timeouts.
2019-04-16 19:34:37 -04:00
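A sketch of the shape of the fix: perform the potentially blocking fifo open in its own goroutine so the main goroutine keeps making progress (names are illustrative).

```go
package docklog

import (
	"io"
	"os"
)

// openFifoAsync opens a fifo for writing off the main goroutine; the
// open blocks until a reader appears, so doing it inline could wedge
// the agent during restart.
func openFifoAsync(path string) <-chan io.WriteCloser {
	ch := make(chan io.WriteCloser, 1)
	go func() {
		f, err := os.OpenFile(path, os.O_WRONLY, 0)
		if err != nil {
			close(ch)
			return
		}
		ch <- f
	}()
	return ch
}
```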
Eduardo Arango
9f97da0956 resolve merge conflicts
Signed-off-by: Eduardo Arango <eduardo@sylabs.io>
2019-04-16 17:01:22 -05:00
Eduardo Arango
bd0d641a5e address @cgbaker comments
Signed-off-by: Eduardo Arango <eduardo@sylabs.io>
2019-04-16 16:59:59 -05:00
Michael Schurter
009b750e21 Merge pull request #5479 from hashicorp/b-vault-renewal
vault: fix renewal time
2019-04-16 12:20:26 -07:00
Michael Schurter
888304b074 changelog: add #5479 2019-04-16 11:23:28 -07:00
Michael Schurter
b135d28450 vault: fix data races 2019-04-16 11:22:44 -07:00
Michael Schurter
0e6da17a8f vault: fix renewal time
Renewal time was being calculated as `10s + Intn(lease - 10s)`, so the
renewal time could be very rapid or within 1s of the deadline: [10s, lease).

This commit fixes the renewal time by calculating it as:

	(lease/2) +/- 10s

For a lease of 60s this means the renewal will occur in [20s, 40s).
2019-04-16 11:22:44 -07:00
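A sketch of the corrected calculation, assuming a simple uniform jitter:

```go
package vault

import (
	"math/rand"
	"time"
)

// renewalTime returns lease/2 +/- 10s: for a 60s lease the renewal
// falls in [20s, 40s), instead of the old 10s + rand(lease-10s),
// which could land within 1s of the deadline.
func renewalTime(lease time.Duration) time.Duration {
	const jitter = 10 * time.Second
	return lease/2 - jitter + time.Duration(rand.Int63n(int64(2*jitter)))
}
```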
Mahmood Ali
96a54cbbd3 comment on locking and opening streams in a goroutine 2019-04-16 11:02:19 -04:00