Merge pull request #3557 from hashicorp/f-terraform-azure

Add Packer/Terraform support for Azure
Rob Genova
2017-11-21 10:41:50 -08:00
committed by GitHub
32 changed files with 834 additions and 263 deletions

.gitignore vendored

@@ -82,3 +82,6 @@ rkt-*
# generated routes file
command/agent/bindata_assetfs.go
# auto-generated cert file for Terraform/Azure
azure-hashistack.pem


@@ -1,6 +1,6 @@
# Provision a Nomad cluster on AWS with Packer & Terraform
# Provision a Nomad cluster in the Cloud
Use this to easily provision a Nomad sandbox environment on AWS with
Use this repo to easily provision a Nomad sandbox environment on AWS or Azure with
[Packer](https://packer.io) and [Terraform](https://terraform.io).
[Consul](https://www.consul.io/intro/index.html) and
[Vault](https://www.vaultproject.io/intro/index.html) are also installed
@@ -11,79 +11,28 @@ integration](examples/spark/README.md) is included.
## Setup
Clone this repo and (optionally) use [Vagrant](https://www.vagrantup.com/intro/index.html)
Clone the repo and optionally use [Vagrant](https://www.vagrantup.com/intro/index.html)
to bootstrap a local staging environment:
```bash
$ git clone git@github.com:hashicorp/nomad.git
$ cd terraform/aws
$ cd terraform
$ vagrant up && vagrant ssh
```
The Vagrant staging environment pre-installs Packer, Terraform, and Docker.
### Pre-requisites
You will need the following:
- AWS account
- [API access keys](http://aws.amazon.com/developers/access-keys/)
- [SSH key pair](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
Set environment variables for your AWS credentials:
```bash
$ export AWS_ACCESS_KEY_ID=[ACCESS_KEY_ID]
$ export AWS_SECRET_ACCESS_KEY=[SECRET_ACCESS_KEY]
```
The Vagrant staging environment pre-installs Packer, Terraform, Docker and the
Azure CLI.
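To confirm the tooling is available inside the Vagrant box before provisioning, here is a quick sanity check (assuming you ran `vagrant up && vagrant ssh` above):

```bash
# Each command prints a version; "command not found" means
# the Vagrant provisioning step did not complete.
$ packer version
$ terraform version
$ docker --version
$ az --version
```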
## Provision a cluster
`cd` to an environment subdirectory:
- Follow the steps [here](aws/README.md) to provision a cluster on AWS.
- Follow the steps [here](azure/README.md) to provision a cluster on Azure.
```bash
$ cd env/us-east
```
Continue with the steps below after a cluster has been provisioned.
Update `terraform.tfvars` with your SSH key name:
## Test
```bash
region = "us-east-1"
ami = "ami-a780afdc"
instance_type = "t2.medium"
key_name = "KEY_NAME"
server_count = "3"
client_count = "4"
```
Note that a pre-provisioned, publicly available AMI is used by default
(for the `us-east-1` region). To provision your own customized AMI with
[Packer](https://www.packer.io/intro/index.html), follow the instructions
[here](aws/packer/README.md). You will need to replace the AMI ID in
`terraform.tfvars` with your own. You can also modify the `region`,
`instance_type`, `server_count`, and `client_count`. At least one client and
one server are required.
Provision the cluster:
```bash
$ terraform get
$ terraform plan
$ terraform apply
```
## Access the cluster
SSH to one of the servers using its public IP:
```bash
$ ssh -i /path/to/key ubuntu@PUBLIC_IP
```
Note that the AWS security group is configured by default to allow all traffic
over port 22. This is *not* recommended for production deployments.
Run a few basic commands to verify that Consul and Nomad are up and running
Run a few basic status commands to verify that Consul and Nomad are up and running
properly:
```bash
@@ -92,7 +41,9 @@ $ nomad server-members
$ nomad node-status
```
Optionally, initialize and unseal Vault:
## Unseal the Vault cluster (optional)
To initialize and unseal Vault, run:
```bash
$ vault init -key-shares=1 -key-threshold=1
@@ -106,23 +57,26 @@ convenience. For a production environment, it is recommended that you create at
least five unseal key shares and securely distribute them to independent
operators. The `vault init` command defaults to five key shares and a key
threshold of three. If you provisioned more than one server, the others will
become standby nodes (but should still be unsealed). You can query the active
become standby nodes but should still be unsealed. You can query the active
and standby nodes independently:
```bash
$ dig active.vault.service.consul
$ dig active.vault.service.consul SRV
$ dig standby.vault.service.consul
```
See the [Getting Started guide](https://www.vaultproject.io/intro/getting-started/first-secret.html)
for an introduction to Vault.
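For a quick end-to-end check after unsealing, you can write a test secret and read it back. This is a minimal sketch assuming the default `secret/` backend and that you have authenticated (for example, with the initial root token):

```bash
# Write a throwaway secret, read it back, then clean up.
$ vault write secret/hello value=world
$ vault read secret/hello
$ vault delete secret/hello
```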
## Getting started with Nomad & the HashiCorp stack
See:
Use the following links to get started with Nomad and its HashiCorp integrations:
* [Getting Started with Nomad](https://www.nomadproject.io/intro/getting-started/jobs.html)
* [Consul integration](https://www.nomadproject.io/docs/service-discovery/index.html)
* [Vault integration](https://www.nomadproject.io/docs/vault-integration/index.html)
* [consul-template integration](https://www.nomadproject.io/docs/job-specification/template.html)
## Apache Spark integration


@@ -8,14 +8,20 @@ Vagrant.configure(2) do |config|
cd /tmp
PACKERVERSION=1.0.0
PACKERVERSION=1.1.2
PACKERDOWNLOAD=https://releases.hashicorp.com/packer/${PACKERVERSION}/packer_${PACKERVERSION}_linux_amd64.zip
TERRAFORMVERSION=0.9.8
TERRAFORMVERSION=0.11.0
TERRAFORMDOWNLOAD=https://releases.hashicorp.com/terraform/${TERRAFORMVERSION}/terraform_${TERRAFORMVERSION}_linux_amd64.zip
echo "Dependencies..."
sudo apt-get install -y unzip tree
# Azure CLI
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ wheezy main" | sudo tee /etc/apt/sources.list.d/azure-cli.list
sudo apt-key adv --keyserver packages.microsoft.com --recv-keys 417A0893
sudo apt-get install -y apt-transport-https
sudo apt-get update && sudo apt-get install -y azure-cli
# Disable the firewall
sudo ufw disable
@@ -43,9 +49,10 @@ Vagrant.configure(2) do |config|
SHELL
config.vm.synced_folder "../aws/", "/home/vagrant/aws", owner: "vagrant", group: "vagrant"
config.vm.synced_folder "../shared/", "/home/vagrant/shared", owner: "vagrant", group: "vagrant"
config.vm.synced_folder "../examples/", "/home/vagrant/examples", owner: "vagrant", group: "vagrant"
config.vm.synced_folder "aws/", "/home/vagrant/aws", owner: "vagrant", group: "vagrant"
config.vm.synced_folder "azure/", "/home/vagrant/azure", owner: "vagrant", group: "vagrant"
config.vm.synced_folder "shared/", "/home/vagrant/shared", owner: "vagrant", group: "vagrant"
config.vm.synced_folder "examples/", "/home/vagrant/examples", owner: "vagrant", group: "vagrant"
config.vm.provider "virtualbox" do |vb|
vb.memory = "2048"

terraform/aws/README.md Normal file

@@ -0,0 +1,79 @@
# Provision a Nomad cluster on AWS
## Pre-requisites
To get started, create the following:
- AWS account
- [API access keys](http://aws.amazon.com/developers/access-keys/)
- [SSH key pair](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
## Set the AWS environment variables
```bash
$ export AWS_ACCESS_KEY_ID=[AWS_ACCESS_KEY_ID]
$ export AWS_SECRET_ACCESS_KEY=[AWS_SECRET_ACCESS_KEY]
```
## Build an AWS machine image with Packer
[Packer](https://www.packer.io/intro/index.html) is HashiCorp's open source tool
for creating identical machine images for multiple platforms from a single
source configuration. The Terraform templates included in this repo reference a
publicly available Amazon machine image (AMI) by default. The AMI can be customized
through modifications to the [build configuration script](../shared/scripts/setup.sh)
and [packer.json](packer.json).
Use the following command to build the AMI:
```bash
$ packer build packer.json
```
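To confirm the build registered an AMI under your account, one option is the AWS CLI (an assumption; it is not required by this repo):

```bash
# List AMIs owned by this account whose names match the
# "hashistack {{timestamp}}" scheme used by packer.json.
$ aws ec2 describe-images --owners self \
    --filters "Name=name,Values=hashistack *" \
    --query "Images[].{ID:ImageId,Name:Name}" --output table
```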
## Provision a cluster with Terraform
`cd` to an environment subdirectory:
```bash
$ cd env/us-east
```
Update `terraform.tfvars` with your SSH key name and your AMI ID if you created
a custom AMI:
```bash
region = "us-east-1"
ami = "ami-6ce26316"
instance_type = "t2.medium"
key_name = "KEY_NAME"
server_count = "3"
client_count = "4"
```
You can also modify the `region`, `instance_type`, `server_count`, and `client_count`.
At least one client and one server are required.
Provision the cluster:
```bash
$ terraform init
$ terraform get
$ terraform plan
$ terraform apply
```
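`terraform apply` prints the `IP_Addresses` output when it finishes. If you need the addresses again later, re-render them from state:

```bash
# Re-print the module's connection info without re-applying.
$ terraform output IP_Addresses
```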
## Access the cluster
SSH to one of the servers using its public IP:
```bash
$ ssh -i /path/to/private/key ubuntu@PUBLIC_IP
```
The infrastructure that is provisioned for this test environment is configured to
allow all traffic over port 22. This is obviously not recommended for production
deployments.
## Next Steps
Click [here](../README.md#test) for next steps.


@@ -22,9 +22,9 @@ variable "client_count" {
default = "4"
}
variable "cluster_tag_value" {
variable "retry_join" {
description = "Used by Consul to automatically form a cluster."
default = "auto-join"
default = "provider=aws tag_key=ConsulAutoJoin tag_value=auto-join"
}
provider "aws" {
@@ -34,22 +34,20 @@ provider "aws" {
module "hashistack" {
source = "../../modules/hashistack"
region = "${var.region}"
ami = "${var.ami}"
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
server_count = "${var.server_count}"
client_count = "${var.client_count}"
cluster_tag_value = "${var.cluster_tag_value}"
region = "${var.region}"
ami = "${var.ami}"
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
server_count = "${var.server_count}"
client_count = "${var.client_count}"
retry_join = "${var.retry_join}"
}
output "IP_Addresses" {
value = <<CONFIGURATION
Client public IPs: ${join(", ", module.hashistack.client_public_ips)}
Client private IPs: ${join(", ", module.hashistack.client_private_ips)}
Server public IPs: ${join(", ", module.hashistack.primary_server_public_ips)}
Server private IPs: ${join(", ", module.hashistack.primary_server_private_ips)}
Server public IPs: ${join(", ", module.hashistack.server_public_ips)}
To connect, add your private key and SSH into any client or server with
`ssh ubuntu@PUBLIC_IP`. You can test the integrity of the cluster by running:
@@ -67,6 +65,7 @@ executing:
Simply wait a few seconds and rerun the command if this occurs.
The Nomad UI can be accessed at http://PUBLIC_IP:4646/ui.
The Consul UI can be accessed at http://PUBLIC_IP:8500/ui.
CONFIGURATION


@@ -1,7 +1,7 @@
region = "us-east-1"
ami = "ami-a780afdc"
ami = "ami-6ce26316"
instance_type = "t2.medium"
key_name = "KEY_NAME"
server_count = "1"
server_count = "3"
client_count = "4"
cluster_tag_value = "auto-join"


@@ -3,4 +3,4 @@
set -e
exec > >(sudo tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
sudo bash /ops/shared/scripts/client.sh "${region}" "${cluster_tag_value}"
sudo bash /ops/shared/scripts/client.sh "aws" "${retry_join}"


@@ -3,4 +3,4 @@
set -e
exec > >(sudo tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
sudo bash /ops/shared/scripts/server.sh "${server_count}" "${region}" "${cluster_tag_value}"
sudo bash /ops/shared/scripts/server.sh "aws" "${server_count}" "${retry_join}"


@@ -4,13 +4,13 @@ variable "instance_type" {}
variable "key_name" {}
variable "server_count" {}
variable "client_count" {}
variable "cluster_tag_value" {}
variable "retry_join" {}
data "aws_vpc" "default" {
default = true
}
resource "aws_security_group" "primary" {
resource "aws_security_group" "hashistack" {
name = "hashistack"
vpc_id = "${data.aws_vpc.default.id}"
@@ -21,7 +21,15 @@ resource "aws_security_group" "primary" {
cidr_blocks = ["0.0.0.0/0"]
}
# Consul UI
# Nomad
ingress {
from_port = 4646
to_port = 4646
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
# Consul
ingress {
from_port = 8500
to_port = 8500
@@ -68,13 +76,13 @@ resource "aws_security_group" "primary" {
}
}
data "template_file" "user_data_server_primary" {
data "template_file" "user_data_server" {
template = "${file("${path.root}/user-data-server.sh")}"
vars {
server_count = "${var.server_count}"
region = "${var.region}"
cluster_tag_value = "${var.cluster_tag_value}"
server_count = "${var.server_count}"
region = "${var.region}"
retry_join = "${var.retry_join}"
}
}
@@ -82,25 +90,25 @@ data "template_file" "user_data_client" {
template = "${file("${path.root}/user-data-client.sh")}"
vars {
region = "${var.region}"
cluster_tag_value = "${var.cluster_tag_value}"
region = "${var.region}"
retry_join = "${var.retry_join}"
}
}
resource "aws_instance" "primary" {
resource "aws_instance" "server" {
ami = "${var.ami}"
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
vpc_security_group_ids = ["${aws_security_group.primary.id}"]
vpc_security_group_ids = ["${aws_security_group.hashistack.id}"]
count = "${var.server_count}"
#Instance tags
tags {
Name = "hashistack-server-${count.index}"
ConsulAutoJoin = "${var.cluster_tag_value}"
ConsulAutoJoin = "auto-join"
}
user_data = "${data.template_file.user_data_server_primary.rendered}"
user_data = "${data.template_file.user_data_server.rendered}"
iam_instance_profile = "${aws_iam_instance_profile.instance_profile.name}"
}
@@ -108,14 +116,14 @@ resource "aws_instance" "client" {
ami = "${var.ami}"
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
vpc_security_group_ids = ["${aws_security_group.primary.id}"]
vpc_security_group_ids = ["${aws_security_group.hashistack.id}"]
count = "${var.client_count}"
depends_on = ["aws_instance.primary"]
depends_on = ["aws_instance.server"]
#Instance tags
tags {
Name = "hashistack-client-${count.index}"
ConsulAutoJoin = "${var.cluster_tag_value}"
ConsulAutoJoin = "auto-join"
}
user_data = "${data.template_file.user_data_client.rendered}"
@@ -164,16 +172,8 @@ data "aws_iam_policy_document" "auto_discover_cluster" {
}
}
output "primary_server_private_ips" {
value = ["${aws_instance.primary.*.private_ip}"]
}
output "primary_server_public_ips" {
value = ["${aws_instance.primary.*.public_ip}"]
}
output "client_private_ips" {
value = ["${aws_instance.client.*.private_ip}"]
output "server_public_ips" {
value = ["${aws_instance.server.*.public_ip}"]
}
output "client_public_ips" {


@@ -5,7 +5,7 @@
"source_ami": "ami-80861296",
"instance_type": "t2.medium",
"ssh_username": "ubuntu",
"ami_name": "nomad-packer {{timestamp}}",
"ami_name": "hashistack {{timestamp}}",
"ami_groups": ["all"]
}],
"provisioners": [
@@ -18,16 +18,16 @@
},
{
"type": "file",
"source": "../../shared",
"source": "../shared",
"destination": "/ops"
},
{
"type": "file",
"source": "../../examples",
"source": "../examples",
"destination": "/ops"
},
{
"type": "shell",
"script": "../../shared/scripts/setup.sh"
"script": "../shared/scripts/setup.sh"
}]
}


@@ -1,31 +0,0 @@
# Build an Amazon machine image with Packer
[Packer](https://www.packer.io/intro/index.html) is HashiCorp's open source tool
for creating identical machine images for multiple platforms from a single
source configuration. The Terraform templates included in this repo reference a
publicly available Amazon machine image (AMI) by default. The Packer build
configuration used to create the public AMI is included [here](./packer.json).
If you wish to customize it and build your own private AMI, follow the
instructions below.
## Pre-requisites
See the pre-requisites listed [here](../../README.md). If you did not use the
included `Vagrantfile` to bootstrap a staging environment, you will need to
[install Packer](https://www.packer.io/intro/getting-started/install.html).
Set environment variables for your AWS credentials if you haven't already:
```bash
$ export AWS_ACCESS_KEY_ID=[ACCESS_KEY_ID]
$ export AWS_SECRET_ACCESS_KEY=[SECRET_ACCESS_KEY]
```
After you make your modifications to `packer.json`, execute the following
command to build the AMI:
```bash
$ packer build packer.json
```
Don't forget to copy the AMI ID to your [terraform.tfvars file](../env/us-east/terraform.tfvars).

terraform/azure/README.md Normal file

@@ -0,0 +1,174 @@
# Provision a Nomad cluster on Azure
## Pre-requisites
To get started, you will need to [create an Azure account](https://azure.microsoft.com/en-us/free/).
## Install the Azure CLI
Run the following commands to install the Azure CLI. Note that you can use the
[Vagrantfile](../Vagrantfile) included in this repository to bootstrap a staging
environment that pre-installs the Azure CLI.
```bash
$ echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ wheezy main" | /
sudo tee /etc/apt/sources.list.d/azure-cli.list
$ sudo apt-key adv --keyserver packages.microsoft.com --recv-keys 417A0893
$ sudo apt-get install apt-transport-https
$ sudo apt-get update && sudo apt-get install azure-cli
```
## Log in to Azure
Use the `az login` CLI command to log in to Azure:
```bash
$ az login
```
After completing the login process, take note of the `SUBSCRIPTION_ID` and the `TENANT_ID`
that are included in the output:
```
[
  {
    "cloudName": "AzureCloud",
    "id": "SUBSCRIPTION_ID",
    "isDefault": true,
    "name": "Free Trial",
    "state": "Enabled",
    "tenantId": "TENANT_ID",
    "user": {
      "name": "rob@hashicorp.com",
      "type": "user"
    }
  }
]
```
These will be used to set the `ARM_SUBSCRIPTION_ID` and `ARM_TENANT_ID` environment
variables for Packer and Terraform.
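If you prefer not to copy these values by hand, the CLI can emit them directly. A small sketch, assuming `az login` has already succeeded:

```bash
# Capture the subscription and tenant IDs for the active account.
$ export ARM_SUBSCRIPTION_ID=$(az account show --query id --output tsv)
$ export ARM_TENANT_ID=$(az account show --query tenantId --output tsv)
```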
## Create an Application ID and Password
Run the following CLI command to create an application ID and password:
```bash
$ az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/${SUBSCRIPTION_ID}"
```
```
{
  "appId": "CLIENT_ID",
  "displayName": "azure-cli-...",
  "name": "http://azure-cli-...",
  "password": "CLIENT_SECRET",
  "tenant": "TENANT_ID"
}
```
`appId` and `password` above will be used for the `ARM_CLIENT_ID` and `ARM_CLIENT_SECRET`
environment variables.
## Create an Azure Resource Group
Use the following command to create an Azure [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/xplat-cli-azure-resource-manager#create-a-resource-group) for Packer:
```bash
$ az group create --name packer --location "East US"
```
## Set the Azure Environment Variables
```bash
$ export ARM_SUBSCRIPTION_ID=[ARM_SUBSCRIPTION_ID]
$ export ARM_CLIENT_ID=[ARM_CLIENT_ID]
$ export ARM_CLIENT_SECRET=[ARM_CLIENT_SECRET]
$ export ARM_TENANT_ID=[ARM_TENANT_ID]
$ export AZURE_RESOURCE_GROUP=packer
```
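A quick way to verify that all of the variables made it into the environment:

```bash
# Every ARM_* variable plus AZURE_RESOURCE_GROUP should be listed.
$ env | grep -E '^(ARM_|AZURE_)'
```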
## Build an Azure machine image with Packer
[Packer](https://www.packer.io/intro/index.html) is HashiCorp's open source tool
for creating identical machine images for multiple platforms from a single
source configuration. The machine image can be customized through modifications to the
[build configuration script](../shared/scripts/setup.sh) and [packer.json](packer.json).
Use the following command to build the image:
```bash
$ packer build packer.json
```
After the Packer build process completes, you can output the image ID using the
following CLI command:
```bash
$ az image list --query "[?tags.Product=='Hashistack'].id"
```
```
[
  "/subscriptions/SUBSCRIPTION_ID/resourceGroups/PACKER/providers/Microsoft.Compute/images/hashistack"
]
```
If you need to delete and recreate the image, use the following CLI command:
```bash
$ az image delete --name hashistack --resource-group packer
```
## Provision a cluster with Terraform
`cd` to an environment subdirectory:
```bash
$ cd env/EastUS
```
Consul supports a cloud-based auto join feature which includes support for Azure.
The feature requires that we create a service principal with the `Reader` role.
Run the following command to create an Azure service principal for Consul auto join:
```bash
$ az ad sp create-for-rbac --role="Reader" --scopes="/subscriptions/[SUBSCRIPTION_ID]"
```
```
{
  "appId": "CLIENT_ID",
  "displayName": "azure-cli-...",
  "name": "http://azure-cli-...",
  "password": "CLIENT_SECRET",
  "tenant": "TENANT_ID"
}
```
Update `terraform.tfvars` with your SUBSCRIPTION_ID, TENANT_ID, CLIENT_ID and CLIENT_SECRET. Use the CLIENT_ID and CLIENT_SECRET created above for the service principal:
```bash
location = "East US"
image_id = "/subscriptions/SUBSCRIPTION_ID/resourceGroups/PACKER/providers/Microsoft.Compute/images/hashistack"
vm_size = "Standard_DS1_v2"
server_count = 1
client_count = 4
retry_join = "provider=azure tag_name=ConsulAutoJoin tag_value=auto-join subscription_id=SUBSCRIPTION_ID tenant_id=TENANT_ID client_id=CLIENT_ID secret_access_key=CLIENT_SECRET"
```
Provision the cluster:
```bash
$ terraform init
$ terraform get
$ terraform plan
$ terraform apply
```
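When you are finished with the sandbox, the same templates can tear everything down. This is the standard Terraform workflow, not specific to this repo:

```bash
# Destroy all resources created by terraform apply.
$ terraform destroy
```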
## Access the cluster
SSH to one of the servers using its public IP:
```bash
$ ssh -i azure-hashistack.pem ubuntu@PUBLIC_IP
```
`azure-hashistack.pem` above is auto-created during the provisioning process. The
infrastructure that is provisioned for this test environment is configured to
allow all traffic over port 22. This is obviously not recommended for production
deployments.
## Next steps
Click [here](../README.md#test) for next steps.

terraform/azure/env/EastUS/main.tf vendored Normal file

@@ -0,0 +1,70 @@
variable "location" {
description = "The Azure location to deploy to."
default = "East US"
}
variable "image_id" {}
variable "vm_size" {
description = "The Azure VM size to use for both clients and servers."
default = "Standard_DS1_v2"
}
variable "server_count" {
description = "The number of servers to provision."
default = "3"
}
variable "client_count" {
description = "The number of clients to provision."
default = "4"
}
variable "retry_join" {
description = "Used by Consul to automatically form a cluster."
}
terraform {
required_version = ">= 0.10.1"
}
provider "azurerm" {}
module "hashistack" {
source = "../../modules/hashistack"
location = "${var.location}"
image_id = "${var.image_id}"
vm_size = "${var.vm_size}"
server_count = "${var.server_count}"
client_count = "${var.client_count}"
retry_join = "${var.retry_join}"
}
output "IP_Addresses" {
value = <<CONFIGURATION
Client public IPs: ${join(", ", module.hashistack.client_public_ips)}
Server public IPs: ${join(", ", module.hashistack.server_public_ips)}
To connect, add your private key and SSH into any client or server with
`ssh ubuntu@PUBLIC_IP`. You can test the integrity of the cluster by running:
$ consul members
$ nomad server-members
$ nomad node-status
If you see an error message like the following when running any of the above
commands, it usually indicates that the configuration script has not finished
executing:
"Error querying servers: Get http://127.0.0.1:4646/v1/agent/members: dial tcp
127.0.0.1:4646: getsockopt: connection refused"
Simply wait a few seconds and rerun the command if this occurs.
The Nomad UI can be accessed at http://PUBLIC_IP:4646/ui.
The Consul UI can be accessed at http://PUBLIC_IP:8500/ui.
CONFIGURATION
}


@@ -0,0 +1,6 @@
location = "East US"
image_id = "/subscriptions/SUBSCRIPTION_ID/resourceGroups/PACKER/providers/Microsoft.Compute/images/hashistack"
vm_size = "Standard_DS1_v2"
server_count = 1
client_count = 4
retry_join = "provider=azure tag_name=ConsulAutoJoin tag_value=auto-join subscription_id=SUBSCRIPTION_ID tenant_id=TENANT_ID client_id=CLIENT_ID secret_access_key=CLIENT_SECRET"


@@ -0,0 +1,6 @@
#!/bin/bash
set -e
exec > >(sudo tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
sudo bash /ops/shared/scripts/client.sh "azure" "${retry_join}"


@@ -0,0 +1,6 @@
#!/bin/bash
set -e
exec > >(sudo tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
sudo bash /ops/shared/scripts/server.sh "azure" "${server_count}" "${retry_join}"


@@ -0,0 +1,257 @@
variable "location" {}
variable "image_id" {}
variable "vm_size" {}
variable "server_count" {}
variable "client_count" {}
variable "retry_join" {}
resource "tls_private_key" "main" {
algorithm = "RSA"
}
resource "null_resource" "main" {
provisioner "local-exec" {
command = "echo \"${tls_private_key.main.private_key_pem}\" > azure-hashistack.pem"
}
provisioner "local-exec" {
command = "chmod 600 azure-hashistack.pem"
}
}
resource "azurerm_resource_group" "hashistack" {
name = "hashistack"
location = "${var.location}"
}
resource "azurerm_virtual_network" "hashistack-vn" {
name = "hashistack-vn"
address_space = ["10.0.0.0/16"]
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.hashistack.name}"
}
resource "azurerm_subnet" "hashistack-sn" {
name = "hashistack-sn"
resource_group_name = "${azurerm_resource_group.hashistack.name}"
virtual_network_name = "${azurerm_virtual_network.hashistack-vn.name}"
address_prefix = "10.0.2.0/24"
}
resource "azurerm_network_security_group" "hashistack-sg" {
name = "hashistack-sg"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.hashistack.name}"
}
resource "azurerm_network_security_rule" "hashistack-sgr-22" {
name = "hashistack-sgr-22"
resource_group_name = "${azurerm_resource_group.hashistack.name}"
network_security_group_name = "${azurerm_network_security_group.hashistack-sg.name}"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_address_prefix = "*"
source_port_range = "*"
destination_port_range = "22"
destination_address_prefix = "*"
}
resource "azurerm_network_security_rule" "hashistack-sgr-4646" {
name = "hashistack-sgr-4646"
resource_group_name = "${azurerm_resource_group.hashistack.name}"
network_security_group_name = "${azurerm_network_security_group.hashistack-sg.name}"
priority = 101
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_address_prefix = "*"
source_port_range = "*"
destination_port_range = "4646"
destination_address_prefix = "*"
}
resource "azurerm_network_security_rule" "hashistack-sgr-8500" {
name = "hashistack-sgr-8500"
resource_group_name = "${azurerm_resource_group.hashistack.name}"
network_security_group_name = "${azurerm_network_security_group.hashistack-sg.name}"
priority = 102
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_address_prefix = "*"
source_port_range = "*"
destination_port_range = "8500"
destination_address_prefix = "*"
}
resource "azurerm_public_ip" "hashistack-server-public-ip" {
count = "${var.server_count}"
name = "hashistack-server-ip-${count.index}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.hashistack.name}"
public_ip_address_allocation = "static"
}
resource "azurerm_network_interface" "hashistack-server-ni" {
count = "${var.server_count}"
name = "hashistack-server-ni-${count.index}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.hashistack.name}"
network_security_group_id = "${azurerm_network_security_group.hashistack-sg.id}"
ip_configuration {
name = "hashistack-ipc"
subnet_id = "${azurerm_subnet.hashistack-sn.id}"
private_ip_address_allocation = "dynamic"
public_ip_address_id = "${element(azurerm_public_ip.hashistack-server-public-ip.*.id,count.index)}"
}
tags {
ConsulAutoJoin = "auto-join"
}
}
resource "azurerm_virtual_machine" "server" {
name = "hashistack-server-${count.index}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.hashistack.name}"
network_interface_ids = ["${element(azurerm_network_interface.hashistack-server-ni.*.id,count.index)}"]
vm_size = "${var.vm_size}"
count = "${var.server_count}"
# Delete the OS disk automatically when deleting the VM
delete_os_disk_on_termination = true
# Delete the data disks automatically when deleting the VM
delete_data_disks_on_termination = true
storage_image_reference {
id = "${var.image_id}"
}
storage_os_disk {
name = "hashistack-server-osdisk-${count.index}"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hashistack-server-${count.index}"
admin_username = "ubuntu"
admin_password = "none"
custom_data = "${base64encode(data.template_file.user_data_server.rendered)}"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/ubuntu/.ssh/authorized_keys"
key_data = "${tls_private_key.main.public_key_openssh}"
}
}
}
data "template_file" "user_data_server" {
template = "${file("${path.root}/user-data-server.sh")}"
vars {
server_count = "${var.server_count}"
retry_join = "${var.retry_join}"
}
}
resource "azurerm_public_ip" "hashistack-client-public-ip" {
count = "${var.client_count}"
name = "hashistack-client-ip-${count.index}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.hashistack.name}"
public_ip_address_allocation = "static"
}
resource "azurerm_network_interface" "hashistack-client-ni" {
count = "${var.client_count}"
name = "hashistack-client-ni-${count.index}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.hashistack.name}"
network_security_group_id = "${azurerm_network_security_group.hashistack-sg.id}"
ip_configuration {
name = "hashistack-ipc"
subnet_id = "${azurerm_subnet.hashistack-sn.id}"
private_ip_address_allocation = "dynamic"
public_ip_address_id = "${element(azurerm_public_ip.hashistack-client-public-ip.*.id,count.index)}"
}
tags {
ConsulAutoJoin = "auto-join"
}
}
resource "azurerm_virtual_machine" "client" {
name = "hashistack-client-${count.index}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.hashistack.name}"
network_interface_ids = ["${element(azurerm_network_interface.hashistack-client-ni.*.id,count.index)}"]
vm_size = "${var.vm_size}"
count = "${var.client_count}"
depends_on = ["azurerm_virtual_machine.server"]
# Delete the OS disk automatically when deleting the VM
delete_os_disk_on_termination = true
# Delete the data disks automatically when deleting the VM
delete_data_disks_on_termination = true
storage_image_reference {
id = "${var.image_id}"
}
storage_os_disk {
name = "hashistack-client-osdisk-${count.index}"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hashistack-client-${count.index}"
admin_username = "ubuntu"
admin_password = "none"
custom_data = "${base64encode(data.template_file.user_data_client.rendered)}"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/ubuntu/.ssh/authorized_keys"
key_data = "${tls_private_key.main.public_key_openssh}"
}
}
}
data "template_file" "user_data_client" {
template = "${file("${path.root}/user-data-client.sh")}"
vars {
retry_join = "${var.retry_join}"
}
}
output "server_public_ips" {
value = ["${azurerm_public_ip.hashistack-server-public-ip.*.ip_address}"]
}
output "client_public_ips" {
value = ["${azurerm_public_ip.hashistack-client-public-ip.*.ip_address}"]
}


@@ -0,0 +1,58 @@
{
"variables": {
"azure_client_id": "{{ env `ARM_CLIENT_ID` }}",
"azure_client_secret": "{{ env `ARM_CLIENT_SECRET` }}",
"azure_subscription_id": "{{ env `ARM_SUBSCRIPTION_ID` }}",
"azure_resource_group": "{{ env `AZURE_RESOURCE_GROUP` }}"
},
"builders": [
{
"type": "azure-arm",
"client_id": "{{ user `azure_client_id` }}",
"client_secret": "{{ user `azure_client_secret` }}",
"subscription_id": "{{ user `azure_subscription_id` }}",
"managed_image_resource_group_name": "{{ user `azure_resource_group` }}",
"location": "East US",
"image_publisher": "Canonical",
"image_offer": "UbuntuServer",
"image_sku": "16.04-LTS",
"os_type": "Linux",
"ssh_username": "packer",
"managed_image_name": "hashistack",
"azure_tags": {
"Product": "Hashistack"
}
}],
"provisioners": [
{
"type": "shell",
"inline": [
"sudo mkdir /ops",
"sudo chmod 777 /ops"
]
},
{
"type": "file",
"source": "../shared",
"destination": "/ops"
},
{
"type": "file",
"source": "../examples",
"destination": "/ops"
},
{
"type": "shell",
"script": "../shared/scripts/setup.sh"
},
{
"execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
"inline": [
"apt-get update -qq -y",
"apt-get upgrade -qq -y",
"/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync"
],
"inline_shebang": "/bin/sh -x",
"type": "shell"
}]
}


@@ -10,9 +10,5 @@
"service": {
"name": "consul"
},
"retry_join_ec2": {
"tag_key": "ConsulAutoJoin",
"tag_value": "CLUSTER_TAG_VALUE",
"region": "REGION"
}
"retry_join": ["RETRY_JOIN"]
}


@@ -0,0 +1,16 @@
[Unit]
Description=Consul Agent
Requires=network-online.target
After=network-online.target
[Service]
Restart=on-failure
Environment=CONSUL_ALLOW_PRIVILEGED_PORTS=true
ExecStart=/usr/local/bin/consul agent -config-dir="/etc/consul.d" -dns-port="53" -recursor="172.31.0.2"
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGTERM
User=root
Group=root
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,16 @@
[Unit]
Description=Consul Agent
Requires=network-online.target
After=network-online.target
[Service]
Restart=on-failure
Environment=CONSUL_ALLOW_PRIVILEGED_PORTS=true
ExecStart=/usr/local/bin/consul agent -config-dir="/etc/consul.d" -dns-port="53" -recursor="168.63.129.16"
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGTERM
User=root
Group=root
[Install]
WantedBy=multi-user.target


@@ -5,9 +5,5 @@
"bind_addr": "0.0.0.0",
"client_addr": "0.0.0.0",
"advertise_addr": "IP_ADDRESS",
"retry_join_ec2": {
"tag_key": "ConsulAutoJoin",
"tag_value": "CLUSTER_TAG_VALUE",
"region": "REGION"
}
"retry_join": ["RETRY_JOIN"]
}


@@ -1,24 +0,0 @@
description "Consul"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
console log
script
if [ -f "/etc/service/consul" ]; then
. /etc/service/consul
fi
# Allow Consul to use privileged ports
export CONSUL_ALLOW_PRIVILEGED_PORTS=true
exec /usr/local/bin/consul agent \
-config-dir="/etc/consul.d" \
-dns-port="53" \
-recursor="172.31.0.2" \
\$${CONSUL_FLAGS} \
>>/var/log/consul.log 2>&1
end script


@@ -7,8 +7,6 @@ server {
bootstrap_expect = SERVER_COUNT
}
name = "nomad@IP_ADDRESS"
consul {
address = "127.0.0.1:8500"
}


@@ -0,0 +1,15 @@
[Unit]
Description=Nomad Agent
Requires=network-online.target
After=network-online.target
[Service]
Restart=on-failure
ExecStart=/usr/local/bin/nomad agent -config="/etc/nomad.d/nomad.hcl"
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGTERM
User=root
Group=root
[Install]
WantedBy=multi-user.target


@@ -1,6 +1,5 @@
data_dir = "/opt/nomad/data"
bind_addr = "0.0.0.0"
name = "nomad@IP_ADDRESS"
# Enable the client
client {


@@ -1,19 +0,0 @@
description "Nomad"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
console log
script
if [ -f "/etc/service/nomad" ]; then
. /etc/service/nomad
fi
exec /usr/local/bin/nomad agent \
-config="/etc/nomad.d/nomad.hcl" \
\$${NOMAD_FLAGS} \
>>/var/log/nomad.log 2>&1
end script


@@ -0,0 +1,16 @@
[Unit]
Description=Vault Agent
Requires=network-online.target
After=network-online.target
[Service]
Restart=on-failure
Environment=GOMAXPROCS=nproc
ExecStart=/usr/local/bin/vault server -config="/etc/vault.d/vault.hcl"
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGTERM
User=root
Group=root
[Install]
WantedBy=multi-user.target


@@ -1,22 +0,0 @@
description "Vault"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
console log
script
if [ -f "/etc/service/vault" ]; then
. /etc/service/vault
fi
# Make sure to use all our CPUs, because Vault can block a scheduler thread
export GOMAXPROCS=`nproc`
exec /usr/local/bin/vault server \
-config="/etc/vault.d/vault.hcl" \
\$${VAULT_FLAGS} \
>>/var/log/vault.log 2>&1
end script


@@ -6,34 +6,33 @@ CONFIGDIR=/ops/shared/config
CONSULCONFIGDIR=/etc/consul.d
NOMADCONFIGDIR=/etc/nomad.d
HADOOP_VERSION=hadoop-2.7.3
HADOOP_VERSION=hadoop-2.7.4
HADOOPCONFIGDIR=/usr/local/$HADOOP_VERSION/etc/hadoop
HOME_DIR=ubuntu
# Wait for network
sleep 15
IP_ADDRESS=$(curl http://instance-data/latest/meta-data/local-ipv4)
# IP_ADDRESS=$(curl http://instance-data/latest/meta-data/local-ipv4)
IP_ADDRESS="$(/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')"
DOCKER_BRIDGE_IP_ADDRESS=(`ifconfig docker0 2>/dev/null|awk '/inet addr:/ {print $2}'|sed 's/addr://'`)
REGION=$1
CLUSTER_TAG_VALUE=$2
CLOUD=$1
RETRY_JOIN=$2
# Consul
sed -i "s/IP_ADDRESS/$IP_ADDRESS/g" $CONFIGDIR/consul_client.json
sed -i "s/REGION/$REGION/g" $CONFIGDIR/consul_client.json
sed -i "s/CLUSTER_TAG_VALUE/$CLUSTER_TAG_VALUE/g" $CONFIGDIR/consul_client.json
sed -i "s/RETRY_JOIN/$RETRY_JOIN/g" $CONFIGDIR/consul_client.json
sudo cp $CONFIGDIR/consul_client.json $CONSULCONFIGDIR/consul.json
sudo cp $CONFIGDIR/consul_upstart.conf /etc/init/consul.conf
sudo cp $CONFIGDIR/consul_$CLOUD.service /etc/systemd/system/consul.service
sudo service consul start
sudo systemctl start consul.service
sleep 10
# Nomad
sed -i "s/IP_ADDRESS/$IP_ADDRESS/g" $CONFIGDIR/nomad_client.hcl
sudo cp $CONFIGDIR/nomad_client.hcl $NOMADCONFIGDIR/nomad.hcl
sudo cp $CONFIGDIR/nomad_upstart.conf /etc/init/nomad.conf
sudo cp $CONFIGDIR/nomad.service /etc/systemd/system/nomad.service
sudo service nomad start
sudo systemctl start nomad.service
sleep 10
export NOMAD_ADDR=http://$IP_ADDRESS:4646


@@ -7,28 +7,28 @@ CONFIGDIR=/ops/shared/config
CONSULCONFIGDIR=/etc/consul.d
VAULTCONFIGDIR=/etc/vault.d
NOMADCONFIGDIR=/etc/nomad.d
HADOOP_VERSION=hadoop-2.7.3
HADOOP_VERSION=hadoop-2.7.4
HADOOPCONFIGDIR=/usr/local/$HADOOP_VERSION/etc/hadoop
HOME_DIR=ubuntu
# Wait for network
sleep 15
IP_ADDRESS=$(curl http://instance-data/latest/meta-data/local-ipv4)
# IP_ADDRESS=$(curl http://instance-data/latest/meta-data/local-ipv4)
IP_ADDRESS="$(/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')"
DOCKER_BRIDGE_IP_ADDRESS=(`ifconfig docker0 2>/dev/null|awk '/inet addr:/ {print $2}'|sed 's/addr://'`)
SERVER_COUNT=$1
REGION=$2
CLUSTER_TAG_VALUE=$3
CLOUD=$1
SERVER_COUNT=$2
RETRY_JOIN=$3
# Consul
sed -i "s/IP_ADDRESS/$IP_ADDRESS/g" $CONFIGDIR/consul.json
sed -i "s/SERVER_COUNT/$SERVER_COUNT/g" $CONFIGDIR/consul.json
sed -i "s/REGION/$REGION/g" $CONFIGDIR/consul.json
sed -i "s/CLUSTER_TAG_VALUE/$CLUSTER_TAG_VALUE/g" $CONFIGDIR/consul.json
sed -i "s/RETRY_JOIN/$RETRY_JOIN/g" $CONFIGDIR/consul.json
sudo cp $CONFIGDIR/consul.json $CONSULCONFIGDIR
sudo cp $CONFIGDIR/consul_upstart.conf /etc/init/consul.conf
sudo cp $CONFIGDIR/consul_$CLOUD.service /etc/systemd/system/consul.service
sudo service consul start
sudo systemctl start consul.service
sleep 10
export CONSUL_HTTP_ADDR=$IP_ADDRESS:8500
export CONSUL_RPC_ADDR=$IP_ADDRESS:8400
@@ -36,17 +36,16 @@ export CONSUL_RPC_ADDR=$IP_ADDRESS:8400
# Vault
sed -i "s/IP_ADDRESS/$IP_ADDRESS/g" $CONFIGDIR/vault.hcl
sudo cp $CONFIGDIR/vault.hcl $VAULTCONFIGDIR
sudo cp $CONFIGDIR/vault_upstart.conf /etc/init/vault.conf
sudo cp $CONFIGDIR/vault.service /etc/systemd/system/vault.service
sudo service vault start
sudo systemctl start vault.service
# Nomad
sed -i "s/IP_ADDRESS/$IP_ADDRESS/g" $CONFIGDIR/nomad.hcl
sed -i "s/SERVER_COUNT/$SERVER_COUNT/g" $CONFIGDIR/nomad.hcl
sudo cp $CONFIGDIR/nomad.hcl $NOMADCONFIGDIR
sudo cp $CONFIGDIR/nomad_upstart.conf /etc/init/nomad.conf
sudo cp $CONFIGDIR/nomad.service /etc/systemd/system/nomad.service
sudo service nomad start
sudo systemctl start nomad.service
sleep 10
export NOMAD_ADDR=http://$IP_ADDRESS:4646


@@ -6,29 +6,27 @@ cd /ops
CONFIGDIR=/ops/shared/config
CONSULVERSION=0.9.0
CONSULVERSION=1.0.0
CONSULDOWNLOAD=https://releases.hashicorp.com/consul/${CONSULVERSION}/consul_${CONSULVERSION}_linux_amd64.zip
CONSULCONFIGDIR=/etc/consul.d
CONSULDIR=/opt/consul
VAULTVERSION=0.7.3
VAULTVERSION=0.8.3
VAULTDOWNLOAD=https://releases.hashicorp.com/vault/${VAULTVERSION}/vault_${VAULTVERSION}_linux_amd64.zip
VAULTCONFIGDIR=/etc/vault.d
VAULTDIR=/opt/vault
NOMADVERSION=0.6.0
NOMADVERSION=0.7.0
NOMADDOWNLOAD=https://releases.hashicorp.com/nomad/${NOMADVERSION}/nomad_${NOMADVERSION}_linux_amd64.zip
NOMADCONFIGDIR=/etc/nomad.d
NOMADDIR=/opt/nomad
HADOOP_VERSION=2.7.3
HADOOP_VERSION=2.7.4
# Dependencies
sudo apt-get install -y software-properties-common
sudo apt-get update
sudo apt-get install -y unzip tree redis-tools jq
sudo apt-get install -y upstart-sysv
sudo update-initramfs -u
# Numpy (for Spark)
sudo apt-get install -y python-setuptools
@@ -97,10 +95,10 @@ sudo apt-get install -y openjdk-8-jdk
JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
# Spark
sudo wget -P /ops/examples/spark https://s3.amazonaws.com/nomad-spark/spark-2.1.0-bin-nomad.tgz
sudo tar -xvf /ops/examples/spark/spark-2.1.0-bin-nomad.tgz --directory /ops/examples/spark
sudo mv /ops/examples/spark/spark-2.1.0-bin-nomad /usr/local/bin/spark
sudo wget -P /ops/examples/spark https://s3.amazonaws.com/nomad-spark/spark-2.2.0-bin-nomad-0.7.0.tgz
sudo tar -xvf /ops/examples/spark/spark-2.2.0-bin-nomad-0.7.0.tgz --directory /ops/examples/spark
sudo mv /ops/examples/spark/spark-2.2.0-bin-nomad-0.7.0 /usr/local/bin/spark
sudo chown -R root:root /usr/local/bin/spark
# Hadoop (to enable the HDFS CLI)
wget -O - http://apache.mirror.iphh.net/hadoop/common/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz | sudo tar xz -C /usr/local/
wget -O - http://apache.mirror.iphh.net/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz | sudo tar xz -C /usr/local/