From 5b0df334c0e7fc871e1abfc27f0228e8e2af556c Mon Sep 17 00:00:00 2001
From: Rob Genova
Date: Fri, 22 Jun 2018 20:55:12 +0000
Subject: [PATCH] Update guides/spark

---
 website/source/guides/spark/configuration.html.md | 10 +++++-----
 website/source/guides/spark/hdfs.html.md          |  2 +-
 website/source/guides/spark/monitoring.html.md    |  4 ++--
 website/source/guides/spark/spark.html.md         |  2 +-
 4 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/website/source/guides/spark/configuration.html.md b/website/source/guides/spark/configuration.html.md
index 929861be9..cd0438253 100644
--- a/website/source/guides/spark/configuration.html.md
+++ b/website/source/guides/spark/configuration.html.md
@@ -40,30 +40,30 @@ the first Nomad server contacted.
 
 - `spark.nomad.docker.email` `(string: nil)` - Specifies the email address to
 use when downloading the Docker image specified by
 [spark.nomad.dockerImage](#spark.nomad.dockerImage). See the
-[Docker driver authentication](https://www.nomadproject.io/docs/drivers/docker.html#authentication)
+[Docker driver authentication](/docs/drivers/docker.html#authentication)
 docs for more information.
 
 - `spark.nomad.docker.password` `(string: nil)` - Specifies the password to use
 when downloading the Docker image specified by
 [spark.nomad.dockerImage](#spark.nomad.dockerImage). See the
-[Docker driver authentication](https://www.nomadproject.io/docs/drivers/docker.html#authentication)
+[Docker driver authentication](/docs/drivers/docker.html#authentication)
 docs for more information.
 
 - `spark.nomad.docker.serverAddress` `(string: nil)` - Specifies the server
 address (domain/IP without the protocol) to use when downloading the Docker
 image specified by [spark.nomad.dockerImage](#spark.nomad.dockerImage).
 Docker Hub is used by default. See the
-[Docker driver authentication](https://www.nomadproject.io/docs/drivers/docker.html#authentication)
+[Docker driver authentication](/docs/drivers/docker.html#authentication)
 docs for more information.
 
 - `spark.nomad.docker.username` `(string: nil)` - Specifies the username to use
 when downloading the Docker image specified by
 [spark.nomad.dockerImage](#spark-nomad-dockerImage). See the
-[Docker driver authentication](https://www.nomadproject.io/docs/drivers/docker.html#authentication)
+[Docker driver authentication](/docs/drivers/docker.html#authentication)
 docs for more information.
 
 - `spark.nomad.dockerImage` `(string: nil)` - Specifies the `URL` for the
-[Docker image](https://www.nomadproject.io/docs/drivers/docker.html#image) to
+[Docker image](/docs/drivers/docker.html#image) to
 use to run Spark with Nomad's `docker` driver. When not specified, Nomad's
 `exec` driver will be used instead.

diff --git a/website/source/guides/spark/hdfs.html.md b/website/source/guides/spark/hdfs.html.md
index d7d95013b..a901412c2 100644
--- a/website/source/guides/spark/hdfs.html.md
+++ b/website/source/guides/spark/hdfs.html.md
@@ -117,7 +117,7 @@ DataNodes to generically reference the NameNode:
 ```
 
 Another viable option for DataNode task group is to use a dedicated
-[system](https://www.nomadproject.io/docs/runtime/schedulers.html#system) job.
+[system](/docs/schedulers.html#system) job.
 This will deploy a DataNode to every client node in the system, which may or
 may not be desirable depending on your use case.
 
diff --git a/website/source/guides/spark/monitoring.html.md b/website/source/guides/spark/monitoring.html.md
index 2299e9650..69430664d 100644
--- a/website/source/guides/spark/monitoring.html.md
+++ b/website/source/guides/spark/monitoring.html.md
@@ -127,9 +127,9 @@ $ spark-submit \
 Nomad clients collect the `stderr` and `stdout` of running tasks. The CLI or
 the HTTP API can be used to inspect logs, as documented in
-[Accessing Logs](https://www.nomadproject.io/guides/operating-a-job/accessing-logs.html).
+[Accessing Logs](/guides/operating-a-job/accessing-logs.html).
 
 In cluster mode, the `stderr` and `stdout` of the `driver` application can be
-accessed in the same way. The [Log Shipper Pattern](https://www.nomadproject.io/guides/operating-a-job/accessing-logs.html#log-shipper-pattern) uses sidecar tasks to forward logs to a central location. This
+accessed in the same way. The [Log Shipper Pattern](/guides/operating-a-job/accessing-logs.html#log-shipper-pattern) uses sidecar tasks to forward logs to a central location. This
 can be done using a job template as follows:
 
 ```hcl
diff --git a/website/source/guides/spark/spark.html.md b/website/source/guides/spark/spark.html.md
index c2ef5b0d2..856958ce4 100644
--- a/website/source/guides/spark/spark.html.md
+++ b/website/source/guides/spark/spark.html.md
@@ -10,7 +10,7 @@ description: |-
 
 Nomad is well-suited for analytical workloads, given its [performance
 characteristics](https://www.hashicorp.com/c1m/) and first-class support for
-[batch scheduling](https://www.nomadproject.io/docs/runtime/schedulers.html).
+[batch scheduling](/docs/schedulers.html).
 Apache Spark is a popular data processing engine/framework that has been
 architected to use third-party schedulers. The Nomad ecosystem includes a
 [fork of Apache Spark](https://github.com/hashicorp/nomad-spark) that natively
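
Every hunk in this patch applies the same mechanical rewrite: dropping the `https://www.nomadproject.io` scheme-and-host prefix so the docs links become site-root-relative. A minimal sketch of that transformation as a `sed` one-liner (the sample input line is illustrative, not taken from the patched files):

```shell
# Rewrite absolute nomadproject.io links to root-relative ones, as this
# patch does by hand. The sample line below is illustrative only.
printf '%s\n' \
  'See the [Docker driver authentication](https://www.nomadproject.io/docs/drivers/docker.html#authentication) docs.' \
  | sed 's#https://www\.nomadproject\.io/#/#g'
# prints: See the [Docker driver authentication](/docs/drivers/docker.html#authentication) docs.
```

Run across `website/source/guides/spark/`, the same expression would reproduce most of these edits in bulk; the `#` delimiter is used so the slashes in the URL need no escaping.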