Merge branch 'master' into f-cpu_hard_limit

This commit is contained in:
Michael Schurter
2018-02-08 20:14:29 -08:00
160 changed files with 4004 additions and 1796 deletions

View File

@@ -6,24 +6,31 @@ __BACKWARDS INCOMPATIBILITIES:__
in HTTP check paths will now fail to validate. [[GH-3685](https://github.com/hashicorp/nomad/issues/3685)]
IMPROVEMENTS:
* core: Allow upgrading/downgrading TLS via SIGHUP on both servers and clients [[GH-3492](https://github.com/hashicorp/nomad/issues/3492)]
* core: A set of features (Autopilot) has been added to allow for automatic operator-friendly management of Nomad servers. For more information about Autopilot, see the [Autopilot Guide](https://www.nomadproject.io/guides/cluster/autopilot.html). [[GH-3670](https://github.com/hashicorp/nomad/pull/3670)]
* cli: Use ISO 8601 time format for CLI output
[[GH-3814](https://github.com/hashicorp/nomad/pull/3814)]
* client: Allow '.' in environment variable names [[GH-3760](https://github.com/hashicorp/nomad/issues/3760)]
* client: Refactor client fingerprint methods to a request/response format
[[GH-3781](https://github.com/hashicorp/nomad/issues/3781)]
* discovery: Allow `check_restart` to be specified in the `service` stanza.
[[GH-3718](https://github.com/hashicorp/nomad/issues/3718)]
* driver/docker: Support advertising IPv6 addresses [[GH-3790](https://github.com/hashicorp/nomad/issues/3790)]
* driver/docker: Support overriding image entrypoint [[GH-3788](https://github.com/hashicorp/nomad/issues/3788)]
* driver/docker: Support adding or dropping capabilities [[GH-3754](https://github.com/hashicorp/nomad/issues/3754)]
* driver/docker: Support mounting root filesystem as read-only [[GH-3802](https://github.com/hashicorp/nomad/issues/3802)]
* driver/lxc: Add volumes config to LXC driver [[GH-3687](https://github.com/hashicorp/nomad/issues/3687)]
* telemetry: Support DataDog tags [[GH-3839](https://github.com/hashicorp/nomad/issues/3839)]
BUG FIXES:
* core: Fix search endpoint forwarding for multi-region clusters [[GH-3680](https://github.com/hashicorp/nomad/issues/3680)]
* core: Allow upgrading/downgrading TLS via SIGHUP on both servers and clients [[GH-3492](https://github.com/hashicorp/nomad/issues/3492)]
* core: Fix an issue in which batch jobs with queued placements and lost
allocations could result in improper placement counts [[GH-3717](https://github.com/hashicorp/nomad/issues/3717)]
* client: Migrated ephemeral_disk's maintain directory permissions [[GH-3723](https://github.com/hashicorp/nomad/issues/3723)]
* client: Always advertise driver IP when in driver address mode [[GH-3682](https://github.com/hashicorp/nomad/issues/3682)]
* client/vault: Recognize renewing non-renewable Vault lease as fatal [[GH-3727](https://github.com/hashicorp/nomad/issues/3727)]
* config: Revert minimum CPU limit back to 20 from 100.
* driver/lxc: Cleanup LXC containers after errors on container startup. [[GH-3773](https://github.com/hashicorp/nomad/issues/3773)]
* ui: Fix ui on non-leaders when ACLs are enabled [[GH-3722](https://github.com/hashicorp/nomad/issues/3722)]
* ui: Fix requests using client-side certificates in Firefox. [[GH-3728](https://github.com/hashicorp/nomad/pull/3728)]
@@ -163,8 +170,7 @@ BUG FIXES:
change [[GH-3214](https://github.com/hashicorp/nomad/issues/3214)]
* api: Fix search handling of jobs with more than four hyphens and a case where
length could cause lookup error [[GH-3203](https://github.com/hashicorp/nomad/issues/3203)]
* client: Improve the speed at which clients detect garbage collection events
[GH_-3452]
* client: Improve the speed at which clients detect garbage collection events [[GH-3452](https://github.com/hashicorp/nomad/issues/3452)]
* client: Fix lock contention that could cause a node to miss a heartbeat and
be marked as down [[GH-3195](https://github.com/hashicorp/nomad/issues/3195)]
* client: Fix data race that could lead to concurrent map read/writes during

View File

@@ -148,7 +148,7 @@ deps: ## Install build and development dependencies
@echo "==> Updating build dependencies..."
go get -u github.com/kardianos/govendor
go get -u github.com/ugorji/go/codec/codecgen
go get -u github.com/jteeuwen/go-bindata/...
go get -u github.com/hashicorp/go-bindata/...
go get -u github.com/elazarl/go-bindata-assetfs/...
go get -u github.com/a8m/tree/cmd/tree
go get -u github.com/magiconair/vendorfmt/cmd/vendorfmt

View File

@@ -151,6 +151,7 @@ type HostDiskStats struct {
// NodeListStub is a subset of information returned during
// node list operations.
type NodeListStub struct {
Address string
ID string
Datacenter string
Name string

View File

@@ -2,6 +2,7 @@ package api
import (
"bytes"
"encoding/json"
"fmt"
"io"
"strconv"
@@ -19,7 +20,7 @@ type AutopilotConfiguration struct {
// LastContactThreshold is the limit on the amount of time a server can go
// without leader contact before being considered unhealthy.
LastContactThreshold *ReadableDuration
LastContactThreshold time.Duration
// MaxTrailingLogs is the amount of entries in the Raft Log that a server can
// be behind before being considered unhealthy.
@@ -28,20 +29,19 @@ type AutopilotConfiguration struct {
// ServerStabilizationTime is the minimum amount of time a server must be
// in a stable, healthy state before it can be added to the cluster. Only
// applicable with Raft protocol version 3 or higher.
ServerStabilizationTime *ReadableDuration
ServerStabilizationTime time.Duration
// (Enterprise-only) RedundancyZoneTag is the node tag to use for separating
// servers into zones for redundancy. If left blank, this feature will be disabled.
RedundancyZoneTag string
// (Enterprise-only) EnableRedundancyZones specifies whether to enable redundancy zones.
EnableRedundancyZones bool
// (Enterprise-only) DisableUpgradeMigration will disable Autopilot's upgrade migration
// strategy of waiting until enough newer-versioned servers have been added to the
// cluster before promoting them to voters.
DisableUpgradeMigration bool
// (Enterprise-only) UpgradeVersionTag is the node tag to use for version info when
// performing upgrade migrations. If left blank, the Nomad version will be used.
UpgradeVersionTag string
// (Enterprise-only) EnableCustomUpgrades specifies whether to enable using custom
// upgrade versions when performing migrations.
EnableCustomUpgrades bool
// CreateIndex holds the index corresponding the creation of this configuration.
// This is a read-only field.
@@ -54,6 +54,45 @@ type AutopilotConfiguration struct {
ModifyIndex uint64
}
func (u *AutopilotConfiguration) MarshalJSON() ([]byte, error) {
type Alias AutopilotConfiguration
return json.Marshal(&struct {
LastContactThreshold string
ServerStabilizationTime string
*Alias
}{
LastContactThreshold: u.LastContactThreshold.String(),
ServerStabilizationTime: u.ServerStabilizationTime.String(),
Alias: (*Alias)(u),
})
}
func (u *AutopilotConfiguration) UnmarshalJSON(data []byte) error {
type Alias AutopilotConfiguration
aux := &struct {
LastContactThreshold string
ServerStabilizationTime string
*Alias
}{
Alias: (*Alias)(u),
}
if err := json.Unmarshal(data, &aux); err != nil {
return err
}
var err error
if aux.LastContactThreshold != "" {
if u.LastContactThreshold, err = time.ParseDuration(aux.LastContactThreshold); err != nil {
return err
}
}
if aux.ServerStabilizationTime != "" {
if u.ServerStabilizationTime, err = time.ParseDuration(aux.ServerStabilizationTime); err != nil {
return err
}
}
return nil
}
// ServerHealth is the health (from the leader's point of view) of a server.
type ServerHealth struct {
// ID is the raft ID of the server.
@@ -75,7 +114,7 @@ type ServerHealth struct {
Leader bool
// LastContact is the time since this node's last contact with the leader.
LastContact *ReadableDuration
LastContact time.Duration
// LastTerm is the highest leader term this server has a record of in its Raft log.
LastTerm uint64
@@ -94,6 +133,37 @@ type ServerHealth struct {
StableSince time.Time
}
func (u *ServerHealth) MarshalJSON() ([]byte, error) {
type Alias ServerHealth
return json.Marshal(&struct {
LastContact string
*Alias
}{
LastContact: u.LastContact.String(),
Alias: (*Alias)(u),
})
}
func (u *ServerHealth) UnmarshalJSON(data []byte) error {
type Alias ServerHealth
aux := &struct {
LastContact string
*Alias
}{
Alias: (*Alias)(u),
}
if err := json.Unmarshal(data, &aux); err != nil {
return err
}
var err error
if aux.LastContact != "" {
if u.LastContact, err = time.ParseDuration(aux.LastContact); err != nil {
return err
}
}
return nil
}
// OperatorHealthReply is a representation of the overall health of the cluster
type OperatorHealthReply struct {
// Healthy is true if all the servers in the cluster are healthy.
@@ -107,46 +177,6 @@ type OperatorHealthReply struct {
Servers []ServerHealth
}
// ReadableDuration is a duration type that is serialized to JSON in human readable format.
type ReadableDuration time.Duration
func NewReadableDuration(dur time.Duration) *ReadableDuration {
d := ReadableDuration(dur)
return &d
}
func (d *ReadableDuration) String() string {
return d.Duration().String()
}
func (d *ReadableDuration) Duration() time.Duration {
if d == nil {
return time.Duration(0)
}
return time.Duration(*d)
}
func (d *ReadableDuration) MarshalJSON() ([]byte, error) {
return []byte(fmt.Sprintf(`"%s"`, d.Duration().String())), nil
}
func (d *ReadableDuration) UnmarshalJSON(raw []byte) error {
if d == nil {
return fmt.Errorf("cannot unmarshal to nil pointer")
}
str := string(raw)
if len(str) < 2 || str[0] != '"' || str[len(str)-1] != '"' {
return fmt.Errorf("must be enclosed with quotes: %s", str)
}
dur, err := time.ParseDuration(str[1 : len(str)-1])
if err != nil {
return err
}
*d = ReadableDuration(dur)
return nil
}
// AutopilotGetConfiguration is used to query the current Autopilot configuration.
func (op *Operator) AutopilotGetConfiguration(q *QueryOptions) (*AutopilotConfiguration, error) {
r, err := op.c.newRequest("GET", "/v1/operator/autopilot/configuration")

View File

@@ -17,13 +17,17 @@ func TestAPI_OperatorAutopilotGetSetConfiguration(t *testing.T) {
defer s.Stop()
operator := c.Operator()
config, err := operator.AutopilotGetConfiguration(nil)
assert.Nil(err)
var config *AutopilotConfiguration
retry.Run(t, func(r *retry.R) {
var err error
config, err = operator.AutopilotGetConfiguration(nil)
r.Check(err)
})
assert.True(config.CleanupDeadServers)
// Change a config setting
newConf := &AutopilotConfiguration{CleanupDeadServers: false}
err = operator.AutopilotSetConfiguration(newConf, nil)
err := operator.AutopilotSetConfiguration(newConf, nil)
assert.Nil(err)
config, err = operator.AutopilotGetConfiguration(nil)

View File

@@ -23,6 +23,7 @@ import (
"github.com/hashicorp/nomad/client/driver"
"github.com/hashicorp/nomad/client/fingerprint"
"github.com/hashicorp/nomad/client/stats"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/client/vaultclient"
"github.com/hashicorp/nomad/command/agent/consul"
"github.com/hashicorp/nomad/helper"
@@ -931,33 +932,42 @@ func (c *Client) fingerprint() error {
c.logger.Printf("[DEBUG] client: built-in fingerprints: %v", fingerprint.BuiltinFingerprints())
var applied []string
var skipped []string
var detectedFingerprints []string
var skippedFingerprints []string
for _, name := range fingerprint.BuiltinFingerprints() {
// Skip modules that are not in the whitelist if it is enabled.
if _, ok := whitelist[name]; whitelistEnabled && !ok {
skipped = append(skipped, name)
skippedFingerprints = append(skippedFingerprints, name)
continue
}
// Skip modules that are in the blacklist
if _, ok := blacklist[name]; ok {
skipped = append(skipped, name)
skippedFingerprints = append(skippedFingerprints, name)
continue
}
f, err := fingerprint.NewFingerprint(name, c.logger)
if err != nil {
return err
}
c.configLock.Lock()
applies, err := f.Fingerprint(c.config, c.config.Node)
request := &cstructs.FingerprintRequest{Config: c.config, Node: c.config.Node}
var response cstructs.FingerprintResponse
err = f.Fingerprint(request, &response)
c.configLock.Unlock()
if err != nil {
return err
}
if applies {
applied = append(applied, name)
// log the fingerprinters which have been applied
if response.Detected {
detectedFingerprints = append(detectedFingerprints, name)
}
// add the diff found from each fingerprinter
c.updateNodeFromFingerprint(&response)
p, period := f.Periodic()
if p {
// TODO: If more periodic fingerprinters are added, then
@@ -966,9 +976,10 @@ func (c *Client) fingerprint() error {
go c.fingerprintPeriodic(name, f, period)
}
}
c.logger.Printf("[DEBUG] client: applied fingerprints %v", applied)
if len(skipped) != 0 {
c.logger.Printf("[DEBUG] client: fingerprint modules skipped due to white/blacklist: %v", skipped)
c.logger.Printf("[DEBUG] client: detected fingerprints %v", detectedFingerprints)
if len(skippedFingerprints) != 0 {
c.logger.Printf("[DEBUG] client: fingerprint modules skipped due to white/blacklist: %v", skippedFingerprints)
}
return nil
}
@@ -980,10 +991,17 @@ func (c *Client) fingerprintPeriodic(name string, f fingerprint.Fingerprint, d t
select {
case <-time.After(d):
c.configLock.Lock()
if _, err := f.Fingerprint(c.config, c.config.Node); err != nil {
c.logger.Printf("[DEBUG] client: periodic fingerprinting for %v failed: %v", name, err)
}
request := &cstructs.FingerprintRequest{Config: c.config, Node: c.config.Node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
c.configLock.Unlock()
if err != nil {
c.logger.Printf("[DEBUG] client: periodic fingerprinting for %v failed: %v", name, err)
} else {
c.updateNodeFromFingerprint(&response)
}
case <-c.shutdownCh:
return
}
@@ -997,19 +1015,19 @@ func (c *Client) setupDrivers() error {
whitelistEnabled := len(whitelist) > 0
blacklist := c.config.ReadStringListToMap("driver.blacklist")
var avail []string
var skipped []string
var detectedDrivers []string
var skippedDrivers []string
driverCtx := driver.NewDriverContext("", "", c.config, c.config.Node, c.logger, nil)
for name := range driver.BuiltinDrivers {
// Skip fingerprinting drivers that are not in the whitelist if it is
// enabled.
if _, ok := whitelist[name]; whitelistEnabled && !ok {
skipped = append(skipped, name)
skippedDrivers = append(skippedDrivers, name)
continue
}
// Skip fingerprinting drivers that are in the blacklist
if _, ok := blacklist[name]; ok {
skipped = append(skipped, name)
skippedDrivers = append(skippedDrivers, name)
continue
}
@@ -1017,16 +1035,23 @@ func (c *Client) setupDrivers() error {
if err != nil {
return err
}
c.configLock.Lock()
applies, err := d.Fingerprint(c.config, c.config.Node)
request := &cstructs.FingerprintRequest{Config: c.config, Node: c.config.Node}
var response cstructs.FingerprintResponse
err = d.Fingerprint(request, &response)
c.configLock.Unlock()
if err != nil {
return err
}
if applies {
avail = append(avail, name)
// log the fingerprinters which have been applied
if response.Detected {
detectedDrivers = append(detectedDrivers, name)
}
c.updateNodeFromFingerprint(&response)
p, period := d.Periodic()
if p {
go c.fingerprintPeriodic(name, d, period)
@@ -1034,15 +1059,42 @@ func (c *Client) setupDrivers() error {
}
c.logger.Printf("[DEBUG] client: available drivers %v", avail)
if len(skipped) != 0 {
c.logger.Printf("[DEBUG] client: drivers skipped due to white/blacklist: %v", skipped)
c.logger.Printf("[DEBUG] client: detected drivers %v", detectedDrivers)
if len(skippedDrivers) > 0 {
c.logger.Printf("[DEBUG] client: drivers skipped due to white/blacklist: %v", skippedDrivers)
}
return nil
}
// updateNodeFromFingerprint updates the node with the result of
// fingerprinting the node from the diff that was created
func (c *Client) updateNodeFromFingerprint(response *cstructs.FingerprintResponse) {
c.configLock.Lock()
defer c.configLock.Unlock()
for name, val := range response.Attributes {
if val == "" {
delete(c.config.Node.Attributes, name)
} else {
c.config.Node.Attributes[name] = val
}
}
// update node links and resources from the diff created from
// fingerprinting
for name, val := range response.Links {
if val == "" {
delete(c.config.Node.Links, name)
} else {
c.config.Node.Links[name] = val
}
}
if response.Resources != nil {
c.config.Node.Resources.Merge(response.Resources)
}
}
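A cut-down sketch of the request/response fingerprinting flow introduced in this file: a fingerprinter fills a response, and the client merges it into the node, treating an empty attribute value as a deletion. The types below are simplified stand-ins for the real `cstructs` and client types, assuming `RemoveAttribute` records deletions as empty values (as `updateNodeFromFingerprint` above implies):

```go
// Minimal model of the fingerprint response merge: "" means delete.
package main

import "fmt"

type FingerprintResponse struct {
	Attributes map[string]string
	Detected   bool
}

func (r *FingerprintResponse) AddAttribute(name, value string) {
	if r.Attributes == nil {
		r.Attributes = make(map[string]string)
	}
	r.Attributes[name] = value
}

// RemoveAttribute records a deletion as an empty value; the merge below
// interprets "" as "drop this key from the node".
func (r *FingerprintResponse) RemoveAttribute(name string) {
	if r.Attributes == nil {
		r.Attributes = make(map[string]string)
	}
	r.Attributes[name] = ""
}

// updateNode mirrors the attribute loop in updateNodeFromFingerprint.
func updateNode(node map[string]string, resp *FingerprintResponse) {
	for name, val := range resp.Attributes {
		if val == "" {
			delete(node, name)
		} else {
			node[name] = val
		}
	}
}

func main() {
	node := map[string]string{"driver.docker": "1"}

	// Docker went away: the fingerprinter removes the attribute and
	// reports something else it found.
	var resp FingerprintResponse
	resp.RemoveAttribute("driver.docker")
	resp.AddAttribute("cpu.arch", "amd64")
	updateNode(node, &resp)

	_, dockerSet := node["driver.docker"]
	fmt.Println(dockerSet, node["cpu.arch"]) // false amd64
}
```

The payoff of this shape is that fingerprinters no longer mutate the node directly: they return a diff, and the client applies it under one lock in a single place.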
// retryIntv calculates a retry interval value given the base
func (c *Client) retryIntv(base time.Duration) time.Duration {
if c.config.DevMode {

View File

@@ -14,6 +14,7 @@ import (
"github.com/hashicorp/consul/lib/freeport"
memdb "github.com/hashicorp/go-memdb"
"github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/client/driver"
"github.com/hashicorp/nomad/client/fingerprint"
"github.com/hashicorp/nomad/command/agent/consul"
"github.com/hashicorp/nomad/helper"
@@ -252,6 +253,48 @@ func TestClient_HasNodeChanged(t *testing.T) {
}
}
func TestClient_Fingerprint_Periodic(t *testing.T) {
if _, ok := driver.BuiltinDrivers["mock_driver"]; !ok {
t.Skip(`test requires mock_driver; run with "-tags nomad_test"`)
}
t.Parallel()
// these constants are only defined when nomad_test is enabled, so these fail
// our linter without explicit disabling.
c1 := testClient(t, func(c *config.Config) {
c.Options = map[string]string{
driver.ShutdownPeriodicAfter: "true", // nolint: varcheck
driver.ShutdownPeriodicDuration: "3", // nolint: varcheck
}
})
defer c1.Shutdown()
node := c1.config.Node
mockDriverName := "driver.mock_driver"
// Ensure the mock driver is registered on the client
testutil.WaitForResult(func() (bool, error) {
mockDriverStatus := node.Attributes[mockDriverName]
if mockDriverStatus == "" {
return false, fmt.Errorf("mock driver attribute should be set on the client")
}
return true, nil
}, func(err error) {
t.Fatalf("err: %v", err)
})
// Ensure that the client fingerprinter eventually removes this attribute
testutil.WaitForResult(func() (bool, error) {
mockDriverStatus := node.Attributes[mockDriverName]
if mockDriverStatus != "" {
return false, fmt.Errorf("mock driver attribute should not be set on the client")
}
return true, nil
}, func(err error) {
t.Fatalf("err: %v", err)
})
}
func TestClient_Fingerprint_InWhitelist(t *testing.T) {
t.Parallel()
c := testClient(t, func(c *config.Config) {

View File

@@ -26,7 +26,6 @@ import (
"github.com/hashicorp/go-multierror"
"github.com/hashicorp/go-plugin"
"github.com/hashicorp/nomad/client/allocdir"
"github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/client/driver/env"
"github.com/hashicorp/nomad/client/driver/executor"
dstructs "github.com/hashicorp/nomad/client/driver/structs"
@@ -180,51 +179,52 @@ type DockerVolumeDriverConfig struct {
// DockerDriverConfig defines the user specified config block in a jobspec
type DockerDriverConfig struct {
ImageName string `mapstructure:"image"` // Container's Image Name
LoadImage string `mapstructure:"load"` // LoadImage is a path to an image archive file
Command string `mapstructure:"command"` // The Command to run when the container starts up
Args []string `mapstructure:"args"` // The arguments to the Command
Entrypoint []string `mapstructure:"entrypoint"` // Override the containers entrypoint
IpcMode string `mapstructure:"ipc_mode"` // The IPC mode of the container - host and none
NetworkMode string `mapstructure:"network_mode"` // The network mode of the container - host, nat and none
NetworkAliases []string `mapstructure:"network_aliases"` // The network-scoped alias for the container
IPv4Address string `mapstructure:"ipv4_address"` // The container ipv4 address
IPv6Address string `mapstructure:"ipv6_address"` // the container ipv6 address
PidMode string `mapstructure:"pid_mode"` // The PID mode of the container - host and none
UTSMode string `mapstructure:"uts_mode"` // The UTS mode of the container - host and none
UsernsMode string `mapstructure:"userns_mode"` // The User namespace mode of the container - host and none
PortMapRaw []map[string]string `mapstructure:"port_map"` //
PortMap map[string]int `mapstructure:"-"` // A map of host port labels and the ports exposed on the container
Privileged bool `mapstructure:"privileged"` // Flag to run the container in privileged mode
SysctlRaw []map[string]string `mapstructure:"sysctl"` //
Sysctl map[string]string `mapstructure:"-"` // The sysctl custom configurations
UlimitRaw []map[string]string `mapstructure:"ulimit"` //
Ulimit []docker.ULimit `mapstructure:"-"` // The ulimit custom configurations
DNSServers []string `mapstructure:"dns_servers"` // DNS Server for containers
DNSSearchDomains []string `mapstructure:"dns_search_domains"` // DNS Search domains for containers
DNSOptions []string `mapstructure:"dns_options"` // DNS Options
ExtraHosts []string `mapstructure:"extra_hosts"` // Add host to /etc/hosts (host:IP)
Hostname string `mapstructure:"hostname"` // Hostname for containers
LabelsRaw []map[string]string `mapstructure:"labels"` //
Labels map[string]string `mapstructure:"-"` // Labels to set when the container starts up
Auth []DockerDriverAuth `mapstructure:"auth"` // Authentication credentials for a private Docker registry
AuthSoftFail bool `mapstructure:"auth_soft_fail"` // Soft-fail if auth creds are provided but fail
TTY bool `mapstructure:"tty"` // Allocate a Pseudo-TTY
Interactive bool `mapstructure:"interactive"` // Keep STDIN open even if not attached
ShmSize int64 `mapstructure:"shm_size"` // Size of /dev/shm of the container in bytes
WorkDir string `mapstructure:"work_dir"` // Working directory inside the container
Logging []DockerLoggingOpts `mapstructure:"logging"` // Logging options for syslog server
Volumes []string `mapstructure:"volumes"` // Host-Volumes to mount in, syntax: /path/to/host/directory:/destination/path/in/container
Mounts []DockerMount `mapstructure:"mounts"` // Docker volumes to mount
VolumeDriver string `mapstructure:"volume_driver"` // Docker volume driver used for the container's volumes
ForcePull bool `mapstructure:"force_pull"` // Always force pull before running image, useful if your tags are mutable
MacAddress string `mapstructure:"mac_address"` // Pin mac address to container
SecurityOpt []string `mapstructure:"security_opt"` // Flags to pass directly to security-opt
Devices []DockerDevice `mapstructure:"devices"` // To allow mounting USB or other serial control devices
CapAdd []string `mapstructure:"cap_add"` // Flags to pass directly to cap-add
CapDrop []string `mapstructure:"cap_drop"` // Flags to pass directly to cap-drop
ReadonlyRootfs bool `mapstructure:"readonly_rootfs"` // Mount the containers root filesystem as read only
CPUHardLimit bool `mapstructure:"cpu_hard_limit"` // Enforce CPU hard limit.
ImageName string `mapstructure:"image"` // Container's Image Name
LoadImage string `mapstructure:"load"` // LoadImage is a path to an image archive file
Command string `mapstructure:"command"` // The Command to run when the container starts up
Args []string `mapstructure:"args"` // The arguments to the Command
Entrypoint []string `mapstructure:"entrypoint"` // Override the containers entrypoint
IpcMode string `mapstructure:"ipc_mode"` // The IPC mode of the container - host and none
NetworkMode string `mapstructure:"network_mode"` // The network mode of the container - host, nat and none
NetworkAliases []string `mapstructure:"network_aliases"` // The network-scoped alias for the container
IPv4Address string `mapstructure:"ipv4_address"` // The container ipv4 address
IPv6Address string `mapstructure:"ipv6_address"` // the container ipv6 address
PidMode string `mapstructure:"pid_mode"` // The PID mode of the container - host and none
UTSMode string `mapstructure:"uts_mode"` // The UTS mode of the container - host and none
UsernsMode string `mapstructure:"userns_mode"` // The User namespace mode of the container - host and none
PortMapRaw []map[string]string `mapstructure:"port_map"` //
PortMap map[string]int `mapstructure:"-"` // A map of host port labels and the ports exposed on the container
Privileged bool `mapstructure:"privileged"` // Flag to run the container in privileged mode
SysctlRaw []map[string]string `mapstructure:"sysctl"` //
Sysctl map[string]string `mapstructure:"-"` // The sysctl custom configurations
UlimitRaw []map[string]string `mapstructure:"ulimit"` //
Ulimit []docker.ULimit `mapstructure:"-"` // The ulimit custom configurations
DNSServers []string `mapstructure:"dns_servers"` // DNS Server for containers
DNSSearchDomains []string `mapstructure:"dns_search_domains"` // DNS Search domains for containers
DNSOptions []string `mapstructure:"dns_options"` // DNS Options
ExtraHosts []string `mapstructure:"extra_hosts"` // Add host to /etc/hosts (host:IP)
Hostname string `mapstructure:"hostname"` // Hostname for containers
LabelsRaw []map[string]string `mapstructure:"labels"` //
Labels map[string]string `mapstructure:"-"` // Labels to set when the container starts up
Auth []DockerDriverAuth `mapstructure:"auth"` // Authentication credentials for a private Docker registry
AuthSoftFail bool `mapstructure:"auth_soft_fail"` // Soft-fail if auth creds are provided but fail
TTY bool `mapstructure:"tty"` // Allocate a Pseudo-TTY
Interactive bool `mapstructure:"interactive"` // Keep STDIN open even if not attached
ShmSize int64 `mapstructure:"shm_size"` // Size of /dev/shm of the container in bytes
WorkDir string `mapstructure:"work_dir"` // Working directory inside the container
Logging []DockerLoggingOpts `mapstructure:"logging"` // Logging options for syslog server
Volumes []string `mapstructure:"volumes"` // Host-Volumes to mount in, syntax: /path/to/host/directory:/destination/path/in/container
Mounts []DockerMount `mapstructure:"mounts"` // Docker volumes to mount
VolumeDriver string `mapstructure:"volume_driver"` // Docker volume driver used for the container's volumes
ForcePull bool `mapstructure:"force_pull"` // Always force pull before running image, useful if your tags are mutable
MacAddress string `mapstructure:"mac_address"` // Pin mac address to container
SecurityOpt []string `mapstructure:"security_opt"` // Flags to pass directly to security-opt
Devices []DockerDevice `mapstructure:"devices"` // To allow mounting USB or other serial control devices
CapAdd []string `mapstructure:"cap_add"` // Flags to pass directly to cap-add
CapDrop []string `mapstructure:"cap_drop"` // Flags to pass directly to cap-drop
ReadonlyRootfs bool `mapstructure:"readonly_rootfs"` // Mount the containers root filesystem as read only
AdvertiseIPv6Address bool `mapstructure:"advertise_ipv6_address"` // Flag to use the GlobalIPv6Address from the container as the detected IP
CPUHardLimit bool `mapstructure:"cpu_hard_limit"` // Enforce CPU hard limit.
}
func sliceMergeUlimit(ulimitsRaw map[string]string) ([]docker.ULimit, error) {
@@ -485,16 +485,15 @@ func NewDockerDriver(ctx *DriverContext) Driver {
return &DockerDriver{DriverContext: *ctx}
}
func (d *DockerDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
// Initialize docker API clients
func (d *DockerDriver) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
client, _, err := d.dockerClients()
if err != nil {
if d.fingerprintSuccess == nil || *d.fingerprintSuccess {
d.logger.Printf("[INFO] driver.docker: failed to initialize client: %s", err)
}
delete(node.Attributes, dockerDriverAttr)
d.fingerprintSuccess = helper.BoolToPtr(false)
return false, nil
resp.RemoveAttribute(dockerDriverAttr)
return nil
}
// This is the first operation taken on the client so we'll try to
@@ -502,25 +501,26 @@ func (d *DockerDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool
// Docker isn't available so we'll simply disable the docker driver.
env, err := client.Version()
if err != nil {
delete(node.Attributes, dockerDriverAttr)
if d.fingerprintSuccess == nil || *d.fingerprintSuccess {
d.logger.Printf("[DEBUG] driver.docker: could not connect to docker daemon at %s: %s", client.Endpoint(), err)
}
d.fingerprintSuccess = helper.BoolToPtr(false)
return false, nil
resp.RemoveAttribute(dockerDriverAttr)
return nil
}
node.Attributes[dockerDriverAttr] = "1"
node.Attributes["driver.docker.version"] = env.Get("Version")
resp.AddAttribute(dockerDriverAttr, "1")
resp.AddAttribute("driver.docker.version", env.Get("Version"))
resp.Detected = true
privileged := d.config.ReadBoolDefault(dockerPrivilegedConfigOption, false)
if privileged {
node.Attributes[dockerPrivilegedConfigOption] = "1"
resp.AddAttribute(dockerPrivilegedConfigOption, "1")
}
// Advertise if this node supports Docker volumes
if d.config.ReadBoolDefault(dockerVolumesConfigOption, dockerVolumesConfigDefault) {
node.Attributes["driver."+dockerVolumesConfigOption] = "1"
resp.AddAttribute("driver."+dockerVolumesConfigOption, "1")
}
// Detect bridge IP address - #2785
@@ -538,7 +538,7 @@ func (d *DockerDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool
}
if n.IPAM.Config[0].Gateway != "" {
node.Attributes["driver.docker.bridge_ip"] = n.IPAM.Config[0].Gateway
resp.AddAttribute("driver.docker.bridge_ip", n.IPAM.Config[0].Gateway)
} else if d.fingerprintSuccess == nil {
// Docker 17.09.0-ce dropped the Gateway IP from the bridge network
// See https://github.com/moby/moby/issues/32648
@@ -549,7 +549,7 @@ func (d *DockerDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool
}
d.fingerprintSuccess = helper.BoolToPtr(true)
return true, nil
return nil
}
// Validate is used to validate the driver configuration
@@ -682,6 +682,9 @@ func (d *DockerDriver) Validate(config map[string]interface{}) error {
"readonly_rootfs": {
Type: fields.TypeBool,
},
"advertise_ipv6_address": {
Type: fields.TypeBool,
},
"cpu_hard_limit": {
Type: fields.TypeBool,
},
@@ -895,6 +898,10 @@ func (d *DockerDriver) detectIP(c *docker.Container) (string, bool) {
}
ip = net.IPAddress
if d.driverConfig.AdvertiseIPv6Address {
ip = net.GlobalIPv6Address
auto = true
}
ipName = name
// Don't auto-advertise IPs for default networks (bridge on
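A hedged sketch of the address-selection change in `detectIP` above: when the task sets `advertise_ipv6_address`, the container's `GlobalIPv6Address` is preferred over the IPv4 address and auto-advertise is forced on. The network struct here is a cut-down stand-in for go-dockerclient's type, and `detectIP` below is a simplified extract, not the full driver method:

```go
// Simplified model of the IPv6 branch added to the docker driver's detectIP.
package main

import "fmt"

type containerNetwork struct {
	IPAddress         string
	GlobalIPv6Address string
}

// detectIP picks the container address to advertise. advertiseIPv6 models
// the new advertise_ipv6_address jobspec flag.
func detectIP(net containerNetwork, advertiseIPv6 bool) (ip string, autoAdvertise bool) {
	ip = net.IPAddress
	if advertiseIPv6 {
		ip = net.GlobalIPv6Address
		autoAdvertise = true
	}
	return ip, autoAdvertise
}

func main() {
	n := containerNetwork{
		IPAddress:         "172.17.0.2",
		GlobalIPv6Address: "2001:db8:1::242:ac11:2",
	}
	fmt.Println(detectIP(n, false)) // 172.17.0.2 false
	fmt.Println(detectIP(n, true))  // 2001:db8:1::242:ac11:2 true
}
```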

View File

@@ -171,17 +171,31 @@ func TestDockerDriver_Fingerprint(t *testing.T) {
node := &structs.Node{
Attributes: make(map[string]string),
}
apply, err := d.Fingerprint(&config.Config{}, node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := d.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if apply != testutil.DockerIsConnected(t) {
attributes := response.Attributes
if testutil.DockerIsConnected(t) && attributes["driver.docker"] == "" {
t.Fatalf("Fingerprinter should detect when docker is available")
}
if node.Attributes["driver.docker"] != "1" {
if attributes["driver.docker"] != "1" {
t.Log("Docker daemon not available. The remainder of the docker tests will be skipped.")
} else {
// if docker is available, make sure that the response is tagged as
// applicable
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
}
t.Logf("Found docker version %s", node.Attributes["driver.docker.version"])
t.Logf("Found docker version %s", attributes["driver.docker.version"])
}
// TestDockerDriver_Fingerprint_Bridge asserts that if Docker is running we set
@@ -210,18 +224,31 @@ func TestDockerDriver_Fingerprint_Bridge(t *testing.T) {
conf := testConfig(t)
conf.Node = mock.Node()
dd := NewDockerDriver(NewDriverContext("", "", conf, conf.Node, testLogger(), nil))
ok, err := dd.Fingerprint(conf, conf.Node)
request := &cstructs.FingerprintRequest{Config: conf, Node: conf.Node}
var response cstructs.FingerprintResponse
err = dd.Fingerprint(request, &response)
if err != nil {
t.Fatalf("error fingerprinting docker: %v", err)
}
if !ok {
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
attributes := response.Attributes
if attributes == nil {
t.Fatalf("expected attributes to be set")
}
if attributes["driver.docker"] == "" {
t.Fatalf("expected Docker to be enabled but false was returned")
}
if found := conf.Node.Attributes["driver.docker.bridge_ip"]; found != expectedAddr {
if found := attributes["driver.docker.bridge_ip"]; found != expectedAddr {
t.Fatalf("expected bridge ip %q but found: %q", expectedAddr, found)
}
t.Logf("docker bridge ip: %q", conf.Node.Attributes["driver.docker.bridge_ip"])
t.Logf("docker bridge ip: %q", attributes["driver.docker.bridge_ip"])
}
func TestDockerDriver_StartOpen_Wait(t *testing.T) {
@@ -2269,3 +2296,85 @@ func TestDockerDriver_ReadonlyRootfs(t *testing.T) {
assert.True(t, container.HostConfig.ReadonlyRootfs, "ReadonlyRootfs option not set")
}
func TestDockerDriver_AdvertiseIPv6Address(t *testing.T) {
if !tu.IsTravis() {
t.Parallel()
}
if !testutil.DockerIsConnected(t) {
t.Skip("Docker not connected")
}
expectedPrefix := "2001:db8:1::242:ac11"
expectedAdvertise := true
task := &structs.Task{
Name: "nc-demo",
Driver: "docker",
Config: map[string]interface{}{
"image": "busybox",
"load": "busybox.tar",
"command": "/bin/nc",
"args": []string{"-l", "127.0.0.1", "-p", "0"},
"advertise_ipv6_address": expectedAdvertise,
},
Resources: &structs.Resources{
MemoryMB: 256,
CPU: 512,
},
LogConfig: &structs.LogConfig{
MaxFiles: 10,
MaxFileSizeMB: 10,
},
}
client := newTestDockerClient(t)
// Make sure IPv6 is enabled
net, err := client.NetworkInfo("bridge")
if err != nil {
t.Skip("error retrieving bridge network information, skipping")
}
if net == nil || !net.EnableIPv6 {
t.Skip("IPv6 not enabled on bridge network, skipping")
}
tctx := testDockerDriverContexts(t, task)
driver := NewDockerDriver(tctx.DriverCtx)
copyImage(t, tctx.ExecCtx.TaskDir, "busybox.tar")
defer tctx.AllocDir.Destroy()
presp, err := driver.Prestart(tctx.ExecCtx, task)
defer driver.Cleanup(tctx.ExecCtx, presp.CreatedResources)
if err != nil {
t.Fatalf("Error in prestart: %v", err)
}
sresp, err := driver.Start(tctx.ExecCtx, task)
if err != nil {
t.Fatalf("Error in start: %v", err)
}
if sresp.Handle == nil {
t.Fatalf("handle is nil\nStack\n%s", debug.Stack())
}
assert.Equal(t, expectedAdvertise, sresp.Network.AutoAdvertise, "Wrong autoadvertise. Expect: %v, got: %v", expectedAdvertise, sresp.Network.AutoAdvertise)
if !strings.HasPrefix(sresp.Network.IP, expectedPrefix) {
t.Fatalf("Got IP address %q want ip address with prefix %q", sresp.Network.IP, expectedPrefix)
}
defer sresp.Handle.Kill()
handle := sresp.Handle.(*DockerHandle)
waitForExist(t, client, handle)
container, err := client.InspectContainer(handle.ContainerID())
if err != nil {
t.Fatalf("Error inspecting container: %v", err)
}
if !strings.HasPrefix(container.NetworkSettings.GlobalIPv6Address, expectedPrefix) {
t.Fatalf("Got GlobalIPv6address %s want GlobalIPv6address with prefix %s", container.NetworkSettings.GlobalIPv6Address, expectedPrefix)
}
}


@@ -3,12 +3,12 @@
package driver
import (
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/helper"
"github.com/hashicorp/nomad/nomad/structs"
)
func (d *ExecDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
func (d *ExecDriver) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
d.fingerprintSuccess = helper.BoolToPtr(false)
return false, nil
resp.Detected = true
return nil
}


@@ -1,9 +1,8 @@
package driver
import (
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/helper"
"github.com/hashicorp/nomad/nomad/structs"
"golang.org/x/sys/unix"
)
@@ -13,28 +12,31 @@ const (
execDriverAttr = "driver.exec"
)
func (d *ExecDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
func (d *ExecDriver) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
// The exec driver will be detected in every case
resp.Detected = true
// Only enable if cgroups are available and we are root
if !cgroupsMounted(node) {
if !cgroupsMounted(req.Node) {
if d.fingerprintSuccess == nil || *d.fingerprintSuccess {
d.logger.Printf("[DEBUG] driver.exec: cgroups unavailable, disabling")
d.logger.Printf("[INFO] driver.exec: cgroups unavailable, disabling")
}
d.fingerprintSuccess = helper.BoolToPtr(false)
delete(node.Attributes, execDriverAttr)
return false, nil
resp.RemoveAttribute(execDriverAttr)
return nil
} else if unix.Geteuid() != 0 {
if d.fingerprintSuccess == nil || *d.fingerprintSuccess {
d.logger.Printf("[DEBUG] driver.exec: must run as root user, disabling")
}
delete(node.Attributes, execDriverAttr)
d.fingerprintSuccess = helper.BoolToPtr(false)
return false, nil
resp.RemoveAttribute(execDriverAttr)
return nil
}
if d.fingerprintSuccess == nil || !*d.fingerprintSuccess {
d.logger.Printf("[DEBUG] driver.exec: exec driver is enabled")
}
node.Attributes[execDriverAttr] = "1"
resp.AddAttribute(execDriverAttr, "1")
d.fingerprintSuccess = helper.BoolToPtr(true)
return true, nil
return nil
}


@@ -13,6 +13,7 @@ import (
"github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/client/driver/env"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
"github.com/hashicorp/nomad/testutil"
@@ -37,14 +38,19 @@ func TestExecDriver_Fingerprint(t *testing.T) {
"unique.cgroup.mountpoint": "/sys/fs/cgroup",
},
}
apply, err := d.Fingerprint(&config.Config{}, node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := d.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !apply {
t.Fatalf("should apply")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
if node.Attributes["driver.exec"] == "" {
if response.Attributes == nil || response.Attributes["driver.exec"] == "" {
t.Fatalf("missing driver")
}
}


@@ -18,7 +18,6 @@ import (
"github.com/hashicorp/go-plugin"
"github.com/mitchellh/mapstructure"
"github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/client/driver/env"
"github.com/hashicorp/nomad/client/driver/executor"
dstructs "github.com/hashicorp/nomad/client/driver/structs"
@@ -112,15 +111,16 @@ func (d *JavaDriver) Abilities() DriverAbilities {
}
}
func (d *JavaDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
func (d *JavaDriver) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
// Only enable if we are root and cgroups are mounted when running on linux systems.
if runtime.GOOS == "linux" && (syscall.Geteuid() != 0 || !cgroupsMounted(node)) {
if runtime.GOOS == "linux" && (syscall.Geteuid() != 0 || !cgroupsMounted(req.Node)) {
if d.fingerprintSuccess == nil || *d.fingerprintSuccess {
d.logger.Printf("[DEBUG] driver.java: root privileges and mounted cgroups required on linux, disabling")
d.logger.Printf("[INFO] driver.java: root privileges and mounted cgroups required on linux, disabling")
}
delete(node.Attributes, "driver.java")
d.fingerprintSuccess = helper.BoolToPtr(false)
return false, nil
resp.RemoveAttribute(javaDriverAttr)
resp.Detected = true
return nil
}
// Find java version
@@ -132,9 +132,9 @@ func (d *JavaDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool,
err := cmd.Run()
if err != nil {
// assume Java wasn't found
delete(node.Attributes, javaDriverAttr)
d.fingerprintSuccess = helper.BoolToPtr(false)
return false, nil
resp.RemoveAttribute(javaDriverAttr)
return nil
}
// 'java -version' returns output on Stderr typically.
@@ -152,9 +152,9 @@ func (d *JavaDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool,
if d.fingerprintSuccess == nil || *d.fingerprintSuccess {
d.logger.Println("[WARN] driver.java: error parsing Java version information, aborting")
}
delete(node.Attributes, javaDriverAttr)
d.fingerprintSuccess = helper.BoolToPtr(false)
return false, nil
resp.RemoveAttribute(javaDriverAttr)
return nil
}
// Assume 'java -version' returns 3 lines:
@@ -166,13 +166,14 @@ func (d *JavaDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool,
versionString := info[0]
versionString = strings.TrimPrefix(versionString, "java version ")
versionString = strings.Trim(versionString, "\"")
node.Attributes[javaDriverAttr] = "1"
node.Attributes["driver.java.version"] = versionString
node.Attributes["driver.java.runtime"] = info[1]
node.Attributes["driver.java.vm"] = info[2]
resp.AddAttribute(javaDriverAttr, "1")
resp.AddAttribute("driver.java.version", versionString)
resp.AddAttribute("driver.java.runtime", info[1])
resp.AddAttribute("driver.java.vm", info[2])
d.fingerprintSuccess = helper.BoolToPtr(true)
resp.Detected = true
return true, nil
return nil
}
func (d *JavaDriver) Prestart(*ExecContext, *structs.Task) (*PrestartResponse, error) {


@@ -11,6 +11,7 @@ import (
"time"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
"github.com/hashicorp/nomad/testutil"
"github.com/stretchr/testify/assert"
@@ -49,14 +50,19 @@ func TestJavaDriver_Fingerprint(t *testing.T) {
"unique.cgroup.mountpoint": "/sys/fs/cgroups",
},
}
apply, err := d.Fingerprint(&config.Config{}, node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := d.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if apply != javaLocated() {
t.Fatalf("Fingerprinter should detect Java when it is installed")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
if node.Attributes["driver.java"] != "1" {
if response.Attributes["driver.java"] != "1" && javaLocated() {
if v, ok := osJavaDriverSupport[runtime.GOOS]; v && ok {
t.Fatalf("missing java driver")
} else {
@@ -64,7 +70,7 @@ func TestJavaDriver_Fingerprint(t *testing.T) {
}
}
for _, key := range []string{"driver.java.version", "driver.java.runtime", "driver.java.vm"} {
if node.Attributes[key] == "" {
if response.Attributes[key] == "" {
t.Fatalf("missing driver key (%s)", key)
}
}


@@ -14,7 +14,6 @@ import (
"syscall"
"time"
"github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/client/fingerprint"
"github.com/hashicorp/nomad/client/stats"
"github.com/hashicorp/nomad/helper/fields"
@@ -184,24 +183,27 @@ func (d *LxcDriver) FSIsolation() cstructs.FSIsolation {
}
// Fingerprint fingerprints the lxc driver configuration
func (d *LxcDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
func (d *LxcDriver) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
cfg := req.Config
enabled := cfg.ReadBoolDefault(lxcConfigOption, true)
if !enabled && !cfg.DevMode {
return false, nil
return nil
}
version := lxc.Version()
if version == "" {
return false, nil
return nil
}
node.Attributes["driver.lxc.version"] = version
node.Attributes["driver.lxc"] = "1"
resp.AddAttribute("driver.lxc.version", version)
resp.AddAttribute("driver.lxc", "1")
resp.Detected = true
// Advertise if this node supports lxc volumes
if d.config.ReadBoolDefault(lxcVolumesConfigOption, lxcVolumesConfigDefault) {
node.Attributes["driver."+lxcVolumesConfigOption] = "1"
resp.AddAttribute("driver."+lxcVolumesConfigOption, "1")
}
return true, nil
return nil
}
func (d *LxcDriver) Prestart(*ExecContext, *structs.Task) (*PrestartResponse, error) {
@@ -210,9 +212,20 @@ func (d *LxcDriver) Prestart(*ExecContext, *structs.Task) (*PrestartResponse, er
// Start starts the LXC Driver
func (d *LxcDriver) Start(ctx *ExecContext, task *structs.Task) (*StartResponse, error) {
sresp, err, errCleanup := d.startWithCleanup(ctx, task)
if err != nil {
if cleanupErr := errCleanup(); cleanupErr != nil {
d.logger.Printf("[ERR] error occurred while cleaning up from error in Start: %v", cleanupErr)
}
}
return sresp, err
}
func (d *LxcDriver) startWithCleanup(ctx *ExecContext, task *structs.Task) (*StartResponse, error, func() error) {
noCleanup := func() error { return nil }
var driverConfig LxcDriverConfig
if err := mapstructure.WeakDecode(task.Config, &driverConfig); err != nil {
return nil, err
return nil, err, noCleanup
}
lxcPath := lxc.DefaultConfigPath()
if path := d.config.Read("driver.lxc.path"); path != "" {
@@ -222,7 +235,7 @@ func (d *LxcDriver) Start(ctx *ExecContext, task *structs.Task) (*StartResponse,
containerName := fmt.Sprintf("%s-%s", task.Name, d.DriverContext.allocID)
c, err := lxc.NewContainer(containerName, lxcPath)
if err != nil {
return nil, fmt.Errorf("unable to initialize container: %v", err)
return nil, fmt.Errorf("unable to initialize container: %v", err), noCleanup
}
var verbosity lxc.Verbosity
@@ -232,7 +245,7 @@ func (d *LxcDriver) Start(ctx *ExecContext, task *structs.Task) (*StartResponse,
case "", "quiet":
verbosity = lxc.Quiet
default:
return nil, fmt.Errorf("lxc driver config 'verbosity' can only be either quiet or verbose")
return nil, fmt.Errorf("lxc driver config 'verbosity' can only be either quiet or verbose"), noCleanup
}
c.SetVerbosity(verbosity)
@@ -249,7 +262,7 @@ func (d *LxcDriver) Start(ctx *ExecContext, task *structs.Task) (*StartResponse,
case "", "error":
logLevel = lxc.ERROR
default:
return nil, fmt.Errorf("lxc driver config 'log_level' can only be trace, debug, info, warn or error")
return nil, fmt.Errorf("lxc driver config 'log_level' can only be trace, debug, info, warn or error"), noCleanup
}
c.SetLogLevel(logLevel)
@@ -267,12 +280,12 @@ func (d *LxcDriver) Start(ctx *ExecContext, task *structs.Task) (*StartResponse,
}
if err := c.Create(options); err != nil {
return nil, fmt.Errorf("unable to create container: %v", err)
return nil, fmt.Errorf("unable to create container: %v", err), noCleanup
}
// Set the network type to none
if err := c.SetConfigItem("lxc.network.type", "none"); err != nil {
return nil, fmt.Errorf("error setting network type configuration: %v", err)
return nil, fmt.Errorf("error setting network type configuration: %v", err), c.Destroy
}
// Bind mount the shared alloc dir and task local dir in the container
@@ -290,7 +303,7 @@ func (d *LxcDriver) Start(ctx *ExecContext, task *structs.Task) (*StartResponse,
if filepath.IsAbs(paths[0]) {
if !volumesEnabled {
return nil, fmt.Errorf("absolute bind-mount volume in config but '%v' is false", lxcVolumesConfigOption)
return nil, fmt.Errorf("absolute bind-mount volume in config but '%v' is false", lxcVolumesConfigOption), c.Destroy
}
} else {
// Relative source paths are treated as relative to alloc dir
@@ -302,21 +315,28 @@ func (d *LxcDriver) Start(ctx *ExecContext, task *structs.Task) (*StartResponse,
for _, mnt := range mounts {
if err := c.SetConfigItem("lxc.mount.entry", mnt); err != nil {
return nil, fmt.Errorf("error setting bind mount %q error: %v", mnt, err)
return nil, fmt.Errorf("error setting bind mount %q error: %v", mnt, err), c.Destroy
}
}
// Start the container
if err := c.Start(); err != nil {
return nil, fmt.Errorf("unable to start container: %v", err)
return nil, fmt.Errorf("unable to start container: %v", err), c.Destroy
}
stopAndDestroyCleanup := func() error {
if err := c.Stop(); err != nil {
return err
}
return c.Destroy()
}
// Set the resource limits
if err := c.SetMemoryLimit(lxc.ByteSize(task.Resources.MemoryMB) * lxc.MB); err != nil {
return nil, fmt.Errorf("unable to set memory limits: %v", err)
return nil, fmt.Errorf("unable to set memory limits: %v", err), stopAndDestroyCleanup
}
if err := c.SetCgroupItem("cpu.shares", strconv.Itoa(task.Resources.CPU)); err != nil {
return nil, fmt.Errorf("unable to set cpu shares: %v", err)
return nil, fmt.Errorf("unable to set cpu shares: %v", err), stopAndDestroyCleanup
}
h := lxcDriverHandle{
@@ -335,7 +355,7 @@ func (d *LxcDriver) Start(ctx *ExecContext, task *structs.Task) (*StartResponse,
go h.run()
return &StartResponse{Handle: &h}, nil
return &StartResponse{Handle: &h}, nil, noCleanup
}
func (d *LxcDriver) Cleanup(*ExecContext, *CreatedResources) error { return nil }
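The `startWithCleanup` refactor above follows a pattern worth naming: each failure site returns the cleanup function matching however much state exists at that point, so the single caller runs exactly the right undo. A self-contained sketch of the idea, with illustrative names rather than the LXC driver's real API:

```go
package main

import (
	"errors"
	"fmt"
)

// startWithCleanup returns, alongside any error, the cleanup
// appropriate to how far setup progressed before the failure.
func startWithCleanup(failAt string) (string, error, func() error) {
	noCleanup := func() error { return nil }

	if failAt == "decode" {
		// Nothing was created yet; nothing to undo.
		return "", errors.New("bad config"), noCleanup
	}

	destroy := func() error { fmt.Println("destroying container"); return nil }
	if failAt == "mount" {
		// The container exists but was never started: destroy only.
		return "", errors.New("mount failed"), destroy
	}

	stopAndDestroy := func() error { fmt.Println("stopping container"); return destroy() }
	if failAt == "limits" {
		// The container is running: stop it, then destroy it.
		return "", errors.New("setting limits failed"), stopAndDestroy
	}

	return "handle", nil, noCleanup
}

// start mirrors the wrapper in the diff: run cleanup only on error,
// and log (rather than return) any secondary cleanup failure.
func start(failAt string) (string, error) {
	h, err, cleanup := startWithCleanup(failAt)
	if err != nil {
		if cerr := cleanup(); cerr != nil {
			fmt.Printf("cleanup failed: %v\n", cerr)
		}
	}
	return h, err
}

func main() {
	if _, err := start("limits"); err != nil {
		fmt.Println("start failed:", err)
	}
}
```

This keeps the happy path free of deferred teardown while guaranteeing a container created partway through a failed `Start` does not leak, which is exactly what `TestLxcDriver_Start_NoVolumes` below verifies with `lxc-ls`.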


@@ -13,6 +13,7 @@ import (
"time"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
ctestutil "github.com/hashicorp/nomad/client/testutil"
"github.com/hashicorp/nomad/nomad/structs"
"github.com/hashicorp/nomad/testutil"
@@ -38,23 +39,34 @@ func TestLxcDriver_Fingerprint(t *testing.T) {
node := &structs.Node{
Attributes: map[string]string{},
}
apply, err := d.Fingerprint(&config.Config{}, node)
if err != nil {
t.Fatalf("err: %v", err)
}
if !apply {
t.Fatalf("should apply by default")
// test with an empty config
{
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := d.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
}
apply, err = d.Fingerprint(&config.Config{Options: map[string]string{lxcConfigOption: "0"}}, node)
if err != nil {
t.Fatalf("err: %v", err)
}
if apply {
t.Fatalf("should not apply with config")
}
if node.Attributes["driver.lxc"] == "" {
t.Fatalf("missing driver")
// test when lxc is enabled in the config
{
conf := &config.Config{Options: map[string]string{lxcConfigOption: "1"}}
request := &cstructs.FingerprintRequest{Config: conf, Node: node}
var response cstructs.FingerprintResponse
err := d.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
if response.Attributes["driver.lxc"] == "" {
t.Fatalf("missing driver")
}
}
}
@@ -310,6 +322,7 @@ func TestLxcDriver_Start_NoVolumes(t *testing.T) {
ctx := testDriverContexts(t, task)
defer ctx.AllocDir.Destroy()
// set lxcVolumesConfigOption to false to disallow absolute paths as the source for the bind mount
ctx.DriverCtx.config.Options = map[string]string{lxcVolumesConfigOption: "false"}
d := NewLxcDriver(ctx.DriverCtx)
@@ -317,8 +330,19 @@ func TestLxcDriver_Start_NoVolumes(t *testing.T) {
if _, err := d.Prestart(ctx.ExecCtx, task); err != nil {
t.Fatalf("prestart err: %v", err)
}
// expect the "absolute bind-mount volume in config ..." error
_, err := d.Start(ctx.ExecCtx, task)
if err == nil {
t.Fatalf("expected error in start, got nil.")
}
// Because the container was created but not started before
// the expected error, we can test that the destroy-only
// cleanup is done here.
containerName := fmt.Sprintf("%s-%s", task.Name, ctx.DriverCtx.allocID)
if err := exec.Command("bash", "-c", fmt.Sprintf("lxc-ls -1 | grep -q %s", containerName)).Run(); err == nil {
t.Fatalf("error, container '%s' is still around", containerName)
}
}


@@ -15,13 +15,23 @@ import (
"github.com/mitchellh/mapstructure"
"github.com/hashicorp/nomad/client/config"
dstructs "github.com/hashicorp/nomad/client/driver/structs"
"github.com/hashicorp/nomad/client/fingerprint"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
)
const (
// ShutdownPeriodicAfter is a config key that can be used during tests to
// "stop" a previously-functioning driver, allowing for testing of periodic
// drivers and fingerprinters
ShutdownPeriodicAfter = "test.shutdown_periodic_after"
// ShutdownPeriodicDuration is a config option that can be used during tests
// to "stop" a previously functioning driver after the specified duration
// (specified in seconds) for testing of periodic drivers and fingerprinters.
ShutdownPeriodicDuration = "test.shutdown_periodic_duration"
)
// Add the mock driver to the list of builtin drivers
func init() {
BuiltinDrivers["mock_driver"] = NewMockDriver
@@ -78,14 +88,29 @@ type MockDriverConfig struct {
// MockDriver is a driver which is used for testing purposes
type MockDriver struct {
DriverContext
fingerprint.StaticFingerprinter
cleanupFailNum int
// shutdownFingerprintTime is the time up to which the driver will be up
shutdownFingerprintTime time.Time
}
// NewMockDriver is a factory method which returns a new Mock Driver
func NewMockDriver(ctx *DriverContext) Driver {
return &MockDriver{DriverContext: *ctx}
md := &MockDriver{DriverContext: *ctx}
// if the shutdown configuration options are set, start the timer here.
// This config option defaults to false
if ctx.config != nil && ctx.config.ReadBoolDefault(ShutdownPeriodicAfter, false) {
duration, err := ctx.config.ReadInt(ShutdownPeriodicDuration)
if err != nil {
errMsg := fmt.Sprintf("unable to read config option for shutdown_periodic_duration %v, got err %s", duration, err.Error())
panic(errMsg)
}
md.shutdownFingerprintTime = time.Now().Add(time.Second * time.Duration(duration))
}
return md
}
func (d *MockDriver) Abilities() DriverAbilities {
@@ -194,9 +219,18 @@ func (m *MockDriver) Validate(map[string]interface{}) error {
}
// Fingerprint fingerprints a node and returns if MockDriver is enabled
func (m *MockDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
node.Attributes["driver.mock_driver"] = "1"
return true, nil
func (m *MockDriver) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
switch {
// If the driver is configured to shut down after a period of time, and the
// current time is after the time which the node should shut down, simulate
// driver failure
case !m.shutdownFingerprintTime.IsZero() && time.Now().After(m.shutdownFingerprintTime):
resp.RemoveAttribute("driver.mock_driver")
default:
resp.AddAttribute("driver.mock_driver", "1")
resp.Detected = true
}
return nil
}
// MockDriverHandle is a driver handler which supervises a mock task
@@ -339,3 +373,8 @@ func (h *mockDriverHandle) run() {
}
}
}
// When testing, poll for updates
func (m *MockDriver) Periodic() (bool, time.Duration) {
return true, 500 * time.Millisecond
}


@@ -17,7 +17,6 @@ import (
"github.com/coreos/go-semver/semver"
plugin "github.com/hashicorp/go-plugin"
"github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/client/driver/executor"
dstructs "github.com/hashicorp/nomad/client/driver/structs"
"github.com/hashicorp/nomad/client/fingerprint"
@@ -155,7 +154,7 @@ func (d *QemuDriver) FSIsolation() cstructs.FSIsolation {
return cstructs.FSIsolationImage
}
func (d *QemuDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
func (d *QemuDriver) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
bin := "qemu-system-x86_64"
if runtime.GOOS == "windows" {
// On windows, the "qemu-system-x86_64" command does not respond to the
@@ -164,22 +163,24 @@ func (d *QemuDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool,
}
outBytes, err := exec.Command(bin, "--version").Output()
if err != nil {
delete(node.Attributes, qemuDriverAttr)
return false, nil
// return no error, as it isn't an error to not find qemu, it just means we
// can't use it.
return nil
}
out := strings.TrimSpace(string(outBytes))
matches := reQemuVersion.FindStringSubmatch(out)
if len(matches) != 2 {
delete(node.Attributes, qemuDriverAttr)
return false, fmt.Errorf("Unable to parse Qemu version string: %#v", matches)
resp.RemoveAttribute(qemuDriverAttr)
return fmt.Errorf("Unable to parse Qemu version string: %#v", matches)
}
currentQemuVersion := matches[1]
node.Attributes[qemuDriverAttr] = "1"
node.Attributes[qemuDriverVersionAttr] = currentQemuVersion
resp.AddAttribute(qemuDriverAttr, "1")
resp.AddAttribute(qemuDriverVersionAttr, currentQemuVersion)
resp.Detected = true
return true, nil
return nil
}
func (d *QemuDriver) Prestart(_ *ExecContext, task *structs.Task) (*PrestartResponse, error) {
@@ -246,7 +247,7 @@ func (d *QemuDriver) Start(ctx *ExecContext, task *structs.Task) (*StartResponse
}
// This socket will be used to manage the virtual machine (for example,
// to perform graceful shutdowns)
monitorPath, err := d.getMonitorPath(ctx.TaskDir.Dir)
monitorPath, err = d.getMonitorPath(ctx.TaskDir.Dir)
if err != nil {
d.logger.Printf("[ERR] driver.qemu: could not get qemu monitor path: %s", err)
return nil, err
@@ -464,6 +465,7 @@ func (h *qemuHandle) Kill() error {
// If Nomad did not send a graceful shutdown signal, issue an interrupt to
// the qemu process as a last resort
if gracefulShutdownSent == false {
h.logger.Printf("[DEBUG] driver.qemu: graceful shutdown is not enabled, sending an interrupt signal to pid: %d", h.userPid)
if err := h.executor.ShutDown(); err != nil {
if h.pluginClient.Exited() {
return nil


@@ -10,6 +10,7 @@ import (
"time"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
"github.com/hashicorp/nomad/testutil"
@@ -34,17 +35,28 @@ func TestQemuDriver_Fingerprint(t *testing.T) {
node := &structs.Node{
Attributes: make(map[string]string),
}
apply, err := d.Fingerprint(&config.Config{}, node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := d.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !apply {
t.Fatalf("should apply")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
if node.Attributes[qemuDriverAttr] == "" {
attributes := response.Attributes
if attributes == nil {
t.Fatalf("attributes should not be nil")
}
if attributes[qemuDriverAttr] == "" {
t.Fatalf("Missing Qemu driver")
}
if node.Attributes[qemuDriverVersionAttr] == "" {
if attributes[qemuDriverVersionAttr] == "" {
t.Fatalf("Missing Qemu driver version")
}
}
@@ -164,12 +176,15 @@ func TestQemuDriver_GracefulShutdown(t *testing.T) {
defer ctx.AllocDir.Destroy()
d := NewQemuDriver(ctx.DriverCtx)
apply, err := d.Fingerprint(&config.Config{}, ctx.DriverCtx.node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: ctx.DriverCtx.node}
var response cstructs.FingerprintResponse
err := d.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !apply {
t.Fatalf("should apply")
for name, value := range response.Attributes {
ctx.DriverCtx.node.Attributes[name] = value
}
dst := ctx.ExecCtx.TaskDir.Dir


@@ -11,7 +11,6 @@ import (
"github.com/hashicorp/go-plugin"
"github.com/hashicorp/nomad/client/allocdir"
"github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/client/driver/env"
"github.com/hashicorp/nomad/client/driver/executor"
dstructs "github.com/hashicorp/nomad/client/driver/structs"
@@ -92,18 +91,19 @@ func (d *RawExecDriver) FSIsolation() cstructs.FSIsolation {
return cstructs.FSIsolationNone
}
func (d *RawExecDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
func (d *RawExecDriver) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
// Check that the user has explicitly enabled this executor.
enabled := cfg.ReadBoolDefault(rawExecConfigOption, false)
enabled := req.Config.ReadBoolDefault(rawExecConfigOption, false)
if enabled || cfg.DevMode {
if enabled || req.Config.DevMode {
d.logger.Printf("[WARN] driver.raw_exec: raw exec is enabled. Only enable if needed")
node.Attributes[rawExecDriverAttr] = "1"
return true, nil
resp.AddAttribute(rawExecDriverAttr, "1")
resp.Detected = true
return nil
}
delete(node.Attributes, rawExecDriverAttr)
return false, nil
resp.RemoveAttribute(rawExecDriverAttr)
return nil
}
func (d *RawExecDriver) Prestart(*ExecContext, *structs.Task) (*PrestartResponse, error) {


@@ -12,6 +12,7 @@ import (
"github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/client/driver/env"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/helper/testtask"
"github.com/hashicorp/nomad/nomad/structs"
"github.com/hashicorp/nomad/testutil"
@@ -34,27 +35,29 @@ func TestRawExecDriver_Fingerprint(t *testing.T) {
// Disable raw exec.
cfg := &config.Config{Options: map[string]string{rawExecConfigOption: "false"}}
apply, err := d.Fingerprint(cfg, node)
request := &cstructs.FingerprintRequest{Config: cfg, Node: node}
var response cstructs.FingerprintResponse
err := d.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if apply {
t.Fatalf("should not apply")
}
if node.Attributes["driver.raw_exec"] != "" {
if response.Attributes["driver.raw_exec"] != "" {
t.Fatalf("driver incorrectly enabled")
}
// Enable raw exec.
cfg.Options[rawExecConfigOption] = "true"
apply, err = d.Fingerprint(cfg, node)
request.Config.Options[rawExecConfigOption] = "true"
err = d.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !apply {
t.Fatalf("should apply")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
if node.Attributes["driver.raw_exec"] != "1" {
if response.Attributes["driver.raw_exec"] != "1" {
t.Fatalf("driver not enabled")
}
}


@@ -311,31 +311,30 @@ func (d *RktDriver) Abilities() DriverAbilities {
}
}
func (d *RktDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
func (d *RktDriver) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
// Only enable if we are root when running on non-windows systems.
if runtime.GOOS != "windows" && syscall.Geteuid() != 0 {
if d.fingerprintSuccess == nil || *d.fingerprintSuccess {
d.logger.Printf("[DEBUG] driver.rkt: must run as root user, disabling")
}
delete(node.Attributes, rktDriverAttr)
d.fingerprintSuccess = helper.BoolToPtr(false)
return false, nil
resp.RemoveAttribute(rktDriverAttr)
return nil
}
outBytes, err := exec.Command(rktCmd, "version").Output()
if err != nil {
delete(node.Attributes, rktDriverAttr)
d.fingerprintSuccess = helper.BoolToPtr(false)
return false, nil
return nil
}
out := strings.TrimSpace(string(outBytes))
rktMatches := reRktVersion.FindStringSubmatch(out)
appcMatches := reAppcVersion.FindStringSubmatch(out)
if len(rktMatches) != 2 || len(appcMatches) != 2 {
delete(node.Attributes, rktDriverAttr)
d.fingerprintSuccess = helper.BoolToPtr(false)
return false, fmt.Errorf("Unable to parse Rkt version string: %#v", rktMatches)
resp.RemoveAttribute(rktDriverAttr)
return fmt.Errorf("Unable to parse Rkt version string: %#v", rktMatches)
}
minVersion, _ := version.NewVersion(minRktVersion)
@@ -347,21 +346,22 @@ func (d *RktDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool, e
d.logger.Printf("[WARN] driver.rkt: unsupported rkt version %s; please upgrade to >= %s",
currentVersion, minVersion)
}
delete(node.Attributes, rktDriverAttr)
d.fingerprintSuccess = helper.BoolToPtr(false)
return false, nil
resp.RemoveAttribute(rktDriverAttr)
return nil
}
node.Attributes[rktDriverAttr] = "1"
node.Attributes["driver.rkt.version"] = rktMatches[1]
node.Attributes["driver.rkt.appc.version"] = appcMatches[1]
resp.AddAttribute(rktDriverAttr, "1")
resp.AddAttribute("driver.rkt.version", rktMatches[1])
resp.AddAttribute("driver.rkt.appc.version", appcMatches[1])
resp.Detected = true
// Advertise if this node supports rkt volumes
if d.config.ReadBoolDefault(rktVolumesConfigOption, rktVolumesConfigDefault) {
node.Attributes["driver."+rktVolumesConfigOption] = "1"
resp.AddAttribute("driver."+rktVolumesConfigOption, "1")
}
d.fingerprintSuccess = helper.BoolToPtr(true)
return true, nil
return nil
}
func (d *RktDriver) Periodic() (bool, time.Duration) {


@@ -5,7 +5,6 @@ package driver
import (
"time"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
)
@@ -46,8 +45,8 @@ func (RktDriver) FSIsolation() cstructs.FSIsolation {
panic("not implemented")
}
func (RktDriver) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
return false, nil
func (RktDriver) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
return nil
}
func (RktDriver) Periodic() (bool, time.Duration) {


@@ -17,6 +17,7 @@ import (
"time"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
"github.com/hashicorp/nomad/testutil"
"github.com/stretchr/testify/assert"
@@ -57,20 +58,29 @@ func TestRktDriver_Fingerprint(t *testing.T) {
node := &structs.Node{
Attributes: make(map[string]string),
}
apply, err := d.Fingerprint(&config.Config{}, node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := d.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !apply {
t.Fatalf("should apply")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
if node.Attributes["driver.rkt"] != "1" {
attributes := response.Attributes
if attributes == nil {
t.Fatalf("expected attributes to be initialized")
}
if attributes["driver.rkt"] != "1" {
t.Fatalf("Missing Rkt driver")
}
if node.Attributes["driver.rkt.version"] == "" {
if attributes["driver.rkt.version"] == "" {
t.Fatalf("Missing Rkt driver version")
}
if node.Attributes["driver.rkt.appc.version"] == "" {
if attributes["driver.rkt.appc.version"] == "" {
t.Fatalf("Missing appc version for the Rkt driver")
}
}

View File

@@ -4,8 +4,7 @@ import (
"log"
"runtime"
client "github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/nomad/structs"
cstructs "github.com/hashicorp/nomad/client/structs"
)
// ArchFingerprint is used to fingerprint the architecture
@@ -20,7 +19,8 @@ func NewArchFingerprint(logger *log.Logger) Fingerprint {
return f
}
func (f *ArchFingerprint) Fingerprint(config *client.Config, node *structs.Node) (bool, error) {
node.Attributes["cpu.arch"] = runtime.GOARCH
return true, nil
func (f *ArchFingerprint) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
resp.AddAttribute("cpu.arch", runtime.GOARCH)
resp.Detected = true
return nil
}

View File

@@ -4,6 +4,7 @@ import (
"testing"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
)
@@ -12,14 +13,17 @@ func TestArchFingerprint(t *testing.T) {
node := &structs.Node{
Attributes: make(map[string]string),
}
ok, err := f.Fingerprint(&config.Config{}, node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
t.Fatalf("should apply")
}
if node.Attributes["cpu.arch"] == "" {
t.Fatalf("missing arch")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
assertNodeAttributeContains(t, response.Attributes, "cpu.arch")
}

View File

@@ -6,7 +6,7 @@ import (
"log"
"time"
"github.com/hashicorp/nomad/nomad/structs"
cstructs "github.com/hashicorp/nomad/client/structs"
)
const (
@@ -49,8 +49,8 @@ func NewCGroupFingerprint(logger *log.Logger) Fingerprint {
// clearCGroupAttributes clears any node attributes related to cgroups that might
// have been set in a previous fingerprint run.
func (f *CGroupFingerprint) clearCGroupAttributes(n *structs.Node) {
delete(n.Attributes, "unique.cgroup.mountpoint")
func (f *CGroupFingerprint) clearCGroupAttributes(r *cstructs.FingerprintResponse) {
r.RemoveAttribute("unique.cgroup.mountpoint")
}
// Periodic determines the interval at which the periodic fingerprinter will run.

View File

@@ -5,8 +5,7 @@ package fingerprint
import (
"fmt"
client "github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/nomad/structs"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/opencontainers/runc/libcontainer/cgroups"
)
@@ -28,30 +27,31 @@ func FindCgroupMountpointDir() (string, error) {
}
// Fingerprint tries to find a valid cgroup mount point
func (f *CGroupFingerprint) Fingerprint(cfg *client.Config, node *structs.Node) (bool, error) {
func (f *CGroupFingerprint) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
mount, err := f.mountPointDetector.MountPoint()
if err != nil {
f.clearCGroupAttributes(node)
return false, fmt.Errorf("Failed to discover cgroup mount point: %s", err)
f.clearCGroupAttributes(resp)
return fmt.Errorf("Failed to discover cgroup mount point: %s", err)
}
// Check if a cgroup mount point was found
if mount == "" {
// Clear any attributes from the previous fingerprint.
f.clearCGroupAttributes(node)
f.clearCGroupAttributes(resp)
if f.lastState == cgroupAvailable {
f.logger.Printf("[INFO] fingerprint.cgroups: cgroups are unavailable")
}
f.lastState = cgroupUnavailable
return true, nil
return nil
}
node.Attributes["unique.cgroup.mountpoint"] = mount
resp.AddAttribute("unique.cgroup.mountpoint", mount)
resp.Detected = true
if f.lastState == cgroupUnavailable {
f.logger.Printf("[INFO] fingerprint.cgroups: cgroups are available")
}
f.lastState = cgroupAvailable
return true, nil
return nil
}

View File

@@ -7,6 +7,7 @@ import (
"testing"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
)
@@ -39,64 +40,91 @@ func (m *MountPointDetectorEmptyMountPoint) MountPoint() (string, error) {
}
func TestCGroupFingerprint(t *testing.T) {
f := &CGroupFingerprint{
logger: testLogger(),
lastState: cgroupUnavailable,
mountPointDetector: &MountPointDetectorMountPointFail{},
{
f := &CGroupFingerprint{
logger: testLogger(),
lastState: cgroupUnavailable,
mountPointDetector: &MountPointDetectorMountPointFail{},
}
node := &structs.Node{
Attributes: make(map[string]string),
}
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err == nil {
t.Fatalf("expected an error")
}
if a, _ := response.Attributes["unique.cgroup.mountpoint"]; a != "" {
t.Fatalf("unexpected attribute found, %s", a)
}
}
node := &structs.Node{
Attributes: make(map[string]string),
{
f := &CGroupFingerprint{
logger: testLogger(),
lastState: cgroupUnavailable,
mountPointDetector: &MountPointDetectorValidMountPoint{},
}
node := &structs.Node{
Attributes: make(map[string]string),
}
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("unexpected error, %s", err)
}
if _, ok := response.Attributes["unique.cgroup.mountpoint"]; !ok {
t.Fatalf("expected unique.cgroup.mountpoint attribute to be set")
}
}
ok, err := f.Fingerprint(&config.Config{}, node)
if err == nil {
t.Fatalf("expected an error")
}
if ok {
t.Fatalf("should not apply")
}
if a, ok := node.Attributes["unique.cgroup.mountpoint"]; ok {
t.Fatalf("unexpected attribute found, %s", a)
}
{
f := &CGroupFingerprint{
logger: testLogger(),
lastState: cgroupUnavailable,
mountPointDetector: &MountPointDetectorEmptyMountPoint{},
}
f = &CGroupFingerprint{
logger: testLogger(),
lastState: cgroupUnavailable,
mountPointDetector: &MountPointDetectorValidMountPoint{},
}
node := &structs.Node{
Attributes: make(map[string]string),
}
node = &structs.Node{
Attributes: make(map[string]string),
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("unexpected error, %s", err)
}
if a, _ := response.Attributes["unique.cgroup.mountpoint"]; a != "" {
t.Fatalf("unexpected attribute found, %s", a)
}
}
{
f := &CGroupFingerprint{
logger: testLogger(),
lastState: cgroupAvailable,
mountPointDetector: &MountPointDetectorValidMountPoint{},
}
ok, err = f.Fingerprint(&config.Config{}, node)
if err != nil {
t.Fatalf("unexpected error, %s", err)
}
if !ok {
t.Fatalf("should apply")
}
assertNodeAttributeContains(t, node, "unique.cgroup.mountpoint")
node := &structs.Node{
Attributes: make(map[string]string),
}
f = &CGroupFingerprint{
logger: testLogger(),
lastState: cgroupUnavailable,
mountPointDetector: &MountPointDetectorEmptyMountPoint{},
}
node = &structs.Node{
Attributes: make(map[string]string),
}
ok, err = f.Fingerprint(&config.Config{}, node)
if err != nil {
t.Fatalf("unexpected error, %s", err)
}
if !ok {
t.Fatalf("should apply")
}
if a, ok := node.Attributes["unique.cgroup.mountpoint"]; ok {
t.Fatalf("unexpected attribute found, %s", a)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("unexpected error, %s", err)
}
if a, _ := response.Attributes["unique.cgroup.mountpoint"]; a == "" {
t.Fatalf("expected unique.cgroup.mountpoint attribute to be set")
}
}
}

View File

@@ -8,8 +8,7 @@ import (
consul "github.com/hashicorp/consul/api"
client "github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/nomad/structs"
cstructs "github.com/hashicorp/nomad/client/structs"
)
const (
@@ -29,23 +28,18 @@ func NewConsulFingerprint(logger *log.Logger) Fingerprint {
return &ConsulFingerprint{logger: logger, lastState: consulUnavailable}
}
func (f *ConsulFingerprint) Fingerprint(config *client.Config, node *structs.Node) (bool, error) {
// Guard against uninitialized Links
if node.Links == nil {
node.Links = map[string]string{}
}
func (f *ConsulFingerprint) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
// Only create the client once to avoid creating too many connections to
// Consul.
if f.client == nil {
consulConfig, err := config.ConsulConfig.ApiConfig()
consulConfig, err := req.Config.ConsulConfig.ApiConfig()
if err != nil {
return false, fmt.Errorf("Failed to initialize the Consul client config: %v", err)
return fmt.Errorf("Failed to initialize the Consul client config: %v", err)
}
f.client, err = consul.NewClient(consulConfig)
if err != nil {
return false, fmt.Errorf("Failed to initialize consul client: %s", err)
return fmt.Errorf("Failed to initialize consul client: %s", err)
}
}
@@ -53,8 +47,7 @@ func (f *ConsulFingerprint) Fingerprint(config *client.Config, node *structs.Nod
// If we can't hit this URL consul is probably not running on this machine.
info, err := f.client.Agent().Self()
if err != nil {
// Clear any attributes set by a previous fingerprint.
f.clearConsulAttributes(node)
f.clearConsulAttributes(resp)
// Print a message indicating that the Consul Agent is not available
// anymore
@@ -62,39 +55,39 @@ func (f *ConsulFingerprint) Fingerprint(config *client.Config, node *structs.Nod
f.logger.Printf("[INFO] fingerprint.consul: consul agent is unavailable")
}
f.lastState = consulUnavailable
return false, nil
return nil
}
if s, ok := info["Config"]["Server"].(bool); ok {
node.Attributes["consul.server"] = strconv.FormatBool(s)
resp.AddAttribute("consul.server", strconv.FormatBool(s))
} else {
f.logger.Printf("[WARN] fingerprint.consul: unable to fingerprint consul.server")
}
if v, ok := info["Config"]["Version"].(string); ok {
node.Attributes["consul.version"] = v
resp.AddAttribute("consul.version", v)
} else {
f.logger.Printf("[WARN] fingerprint.consul: unable to fingerprint consul.version")
}
if r, ok := info["Config"]["Revision"].(string); ok {
node.Attributes["consul.revision"] = r
resp.AddAttribute("consul.revision", r)
} else {
f.logger.Printf("[WARN] fingerprint.consul: unable to fingerprint consul.revision")
}
if n, ok := info["Config"]["NodeName"].(string); ok {
node.Attributes["unique.consul.name"] = n
resp.AddAttribute("unique.consul.name", n)
} else {
f.logger.Printf("[WARN] fingerprint.consul: unable to fingerprint unique.consul.name")
}
if d, ok := info["Config"]["Datacenter"].(string); ok {
node.Attributes["consul.datacenter"] = d
resp.AddAttribute("consul.datacenter", d)
} else {
f.logger.Printf("[WARN] fingerprint.consul: unable to fingerprint consul.datacenter")
}
if node.Attributes["consul.datacenter"] != "" || node.Attributes["unique.consul.name"] != "" {
node.Links["consul"] = fmt.Sprintf("%s.%s",
node.Attributes["consul.datacenter"],
node.Attributes["unique.consul.name"])
if dc, ok := resp.Attributes["consul.datacenter"]; ok {
if name, ok2 := resp.Attributes["unique.consul.name"]; ok2 {
resp.AddLink("consul", fmt.Sprintf("%s.%s", dc, name))
}
} else {
f.logger.Printf("[WARN] fingerprint.consul: malformed Consul response prevented linking")
}
@@ -105,18 +98,19 @@ func (f *ConsulFingerprint) Fingerprint(config *client.Config, node *structs.Nod
f.logger.Printf("[INFO] fingerprint.consul: consul agent is available")
}
f.lastState = consulAvailable
return true, nil
resp.Detected = true
return nil
}
// clearConsulAttributes removes consul attributes and links from the passed
// Node.
func (f *ConsulFingerprint) clearConsulAttributes(n *structs.Node) {
delete(n.Attributes, "consul.server")
delete(n.Attributes, "consul.version")
delete(n.Attributes, "consul.revision")
delete(n.Attributes, "unique.consul.name")
delete(n.Attributes, "consul.datacenter")
delete(n.Links, "consul")
func (f *ConsulFingerprint) clearConsulAttributes(r *cstructs.FingerprintResponse) {
r.RemoveAttribute("consul.server")
r.RemoveAttribute("consul.version")
r.RemoveAttribute("consul.revision")
r.RemoveAttribute("unique.consul.name")
r.RemoveAttribute("consul.datacenter")
r.RemoveLink("consul")
}
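Consul's `Agent().Self()` returns loosely typed nested maps, which is why every field read above is guarded by a comma-ok type assertion and fails soft with a warning instead of erroring. The pattern in isolation (the payload shape below is assumed for illustration, not taken from a live agent):

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Shape mimicking the Agent().Self() payload (assumed for illustration).
	info := map[string]map[string]interface{}{
		"Config": {"Server": true, "Version": "1.0.6"},
	}

	attrs := make(map[string]string)
	// Each comma-ok assertion fails soft: a missing or mistyped field
	// simply skips the attribute rather than panicking.
	if s, ok := info["Config"]["Server"].(bool); ok {
		attrs["consul.server"] = strconv.FormatBool(s)
	}
	if v, ok := info["Config"]["Version"].(string); ok {
		attrs["consul.version"] = v
	}
	fmt.Println(attrs["consul.server"], attrs["consul.version"])
}
```

The same soft-failure stance carries into the link-building step: the link is only added when both `consul.datacenter` and `unique.consul.name` were successfully fingerprinted.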
func (f *ConsulFingerprint) Periodic() (bool, time.Duration) {

View File

@@ -8,6 +8,7 @@ import (
"testing"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
"github.com/stretchr/testify/assert"
)
@@ -24,24 +25,27 @@ func TestConsulFingerprint(t *testing.T) {
}))
defer ts.Close()
config := config.DefaultConfig()
config.ConsulConfig.Addr = strings.TrimPrefix(ts.URL, "http://")
conf := config.DefaultConfig()
conf.ConsulConfig.Addr = strings.TrimPrefix(ts.URL, "http://")
ok, err := fp.Fingerprint(config, node)
request := &cstructs.FingerprintRequest{Config: conf, Node: node}
var response cstructs.FingerprintResponse
err := fp.Fingerprint(request, &response)
if err != nil {
t.Fatalf("Failed to fingerprint: %s", err)
}
if !ok {
t.Fatalf("Failed to apply node attributes")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
assertNodeAttributeContains(t, node, "consul.server")
assertNodeAttributeContains(t, node, "consul.version")
assertNodeAttributeContains(t, node, "consul.revision")
assertNodeAttributeContains(t, node, "unique.consul.name")
assertNodeAttributeContains(t, node, "consul.datacenter")
assertNodeAttributeContains(t, response.Attributes, "consul.server")
assertNodeAttributeContains(t, response.Attributes, "consul.version")
assertNodeAttributeContains(t, response.Attributes, "consul.revision")
assertNodeAttributeContains(t, response.Attributes, "unique.consul.name")
assertNodeAttributeContains(t, response.Attributes, "consul.datacenter")
if _, ok := node.Links["consul"]; !ok {
if _, ok := response.Links["consul"]; !ok {
t.Errorf("Expected a link to consul, none found")
}
}
@@ -177,12 +181,17 @@ func TestConsulFingerprint_UnexpectedResponse(t *testing.T) {
}))
defer ts.Close()
config := config.DefaultConfig()
config.ConsulConfig.Addr = strings.TrimPrefix(ts.URL, "http://")
conf := config.DefaultConfig()
conf.ConsulConfig.Addr = strings.TrimPrefix(ts.URL, "http://")
ok, err := fp.Fingerprint(config, node)
request := &cstructs.FingerprintRequest{Config: conf, Node: node}
var response cstructs.FingerprintResponse
err := fp.Fingerprint(request, &response)
assert.Nil(err)
assert.True(ok)
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
attrs := []string{
"consul.server",
@@ -191,13 +200,14 @@ func TestConsulFingerprint_UnexpectedResponse(t *testing.T) {
"unique.consul.name",
"consul.datacenter",
}
for _, attr := range attrs {
if v, ok := node.Attributes[attr]; ok {
if v, ok := response.Attributes[attr]; ok {
t.Errorf("unexpected node attribute %q with value %q", attr, v)
}
}
if v, ok := node.Links["consul"]; ok {
if v, ok := response.Links["consul"]; ok {
t.Errorf("Unexpected link to consul: %v", v)
}
}

View File

@@ -4,7 +4,7 @@ import (
"fmt"
"log"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/helper/stats"
"github.com/hashicorp/nomad/nomad/structs"
)
@@ -21,13 +21,12 @@ func NewCPUFingerprint(logger *log.Logger) Fingerprint {
return f
}
func (f *CPUFingerprint) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
setResources := func(totalCompute int) {
if node.Resources == nil {
node.Resources = &structs.Resources{}
func (f *CPUFingerprint) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
cfg := req.Config
setResourcesCPU := func(totalCompute int) {
resp.Resources = &structs.Resources{
CPU: totalCompute,
}
node.Resources.CPU = totalCompute
}
if err := stats.Init(); err != nil {
@@ -35,21 +34,21 @@ func (f *CPUFingerprint) Fingerprint(cfg *config.Config, node *structs.Node) (bo
}
if cfg.CpuCompute != 0 {
setResources(cfg.CpuCompute)
return true, nil
setResourcesCPU(cfg.CpuCompute)
return nil
}
if modelName := stats.CPUModelName(); modelName != "" {
node.Attributes["cpu.modelname"] = modelName
resp.AddAttribute("cpu.modelname", modelName)
}
if mhz := stats.CPUMHzPerCore(); mhz > 0 {
node.Attributes["cpu.frequency"] = fmt.Sprintf("%.0f", mhz)
resp.AddAttribute("cpu.frequency", fmt.Sprintf("%.0f", mhz))
f.logger.Printf("[DEBUG] fingerprint.cpu: frequency: %.0f MHz", mhz)
}
if numCores := stats.CPUNumCores(); numCores > 0 {
node.Attributes["cpu.numcores"] = fmt.Sprintf("%d", numCores)
resp.AddAttribute("cpu.numcores", fmt.Sprintf("%d", numCores))
f.logger.Printf("[DEBUG] fingerprint.cpu: core count: %d", numCores)
}
@@ -62,17 +61,14 @@ func (f *CPUFingerprint) Fingerprint(cfg *config.Config, node *structs.Node) (bo
// Return an error if no cpu was detected or explicitly set as this
// node would be unable to receive any allocations.
if tt == 0 {
return false, fmt.Errorf("cannot detect cpu total compute. "+
return fmt.Errorf("cannot detect cpu total compute. "+
"CPU compute must be set manually using the client config option %q",
"cpu_total_compute")
}
node.Attributes["cpu.totalcompute"] = fmt.Sprintf("%d", tt)
resp.AddAttribute("cpu.totalcompute", fmt.Sprintf("%d", tt))
setResourcesCPU(tt)
resp.Detected = true
if node.Resources == nil {
node.Resources = &structs.Resources{}
}
node.Resources.CPU = tt
return true, nil
return nil
}
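The control flow above gives an explicitly configured `cpu_total_compute` precedence over the detected value, and treats a zero detected value as a hard error since such a node could never receive allocations. Reduced to its essentials (a sketch with hypothetical numbers, not the driver's actual helper):

```go
package main

import "fmt"

// totalCompute returns the configured override when set, otherwise the
// detected value; zero detected compute is an error because the node
// would be unable to receive any allocations.
func totalCompute(configured, detected int) (int, error) {
	if configured != 0 {
		return configured, nil
	}
	if detected == 0 {
		return 0, fmt.Errorf("cannot detect cpu total compute; " +
			"set cpu_total_compute manually")
	}
	return detected, nil
}

func main() {
	tt, _ := totalCompute(0, 2400) // no override: use detected value
	fmt.Println(tt)
	tt, _ = totalCompute(5000, 2400) // override wins
	fmt.Println(tt)
}
```

This precedence is exactly what `TestCPUFingerprint_OverrideCompute` below exercises: it fingerprints once to learn the detected value, then sets `cfg.CpuCompute` and asserts the override is reflected in the response resources.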

View File

@@ -4,6 +4,7 @@ import (
"testing"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
)
@@ -12,33 +13,40 @@ func TestCPUFingerprint(t *testing.T) {
node := &structs.Node{
Attributes: make(map[string]string),
}
ok, err := f.Fingerprint(&config.Config{}, node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
t.Fatalf("should apply")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
// CPU info
if node.Attributes["cpu.numcores"] == "" {
attributes := response.Attributes
if attributes == nil {
t.Fatalf("expected attributes to be initialized")
}
if attributes["cpu.numcores"] == "" {
t.Fatalf("Missing Num Cores")
}
if node.Attributes["cpu.modelname"] == "" {
if attributes["cpu.modelname"] == "" {
t.Fatalf("Missing Model Name")
}
if node.Attributes["cpu.frequency"] == "" {
if attributes["cpu.frequency"] == "" {
t.Fatalf("Missing CPU Frequency")
}
if node.Attributes["cpu.totalcompute"] == "" {
if attributes["cpu.totalcompute"] == "" {
t.Fatalf("Missing CPU Total Compute")
}
if node.Resources == nil || node.Resources.CPU == 0 {
if response.Resources == nil || response.Resources.CPU == 0 {
t.Fatalf("Expected to find CPU Resources")
}
}
// TestCPUFingerprint_OverrideCompute asserts that setting cpu_total_compute in
@@ -49,30 +57,41 @@ func TestCPUFingerprint_OverrideCompute(t *testing.T) {
Attributes: make(map[string]string),
}
cfg := &config.Config{}
ok, err := f.Fingerprint(cfg, node)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
t.Fatalf("should apply")
var originalCPU int
{
request := &cstructs.FingerprintRequest{Config: cfg, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
if response.Resources.CPU == 0 {
t.Fatalf("expected fingerprinted cpu compute to be nonzero")
}
originalCPU = response.Resources.CPU
}
// Get actual system CPU
origCPU := node.Resources.CPU
{
// Override it with a setting
cfg.CpuCompute = originalCPU + 123
// Override it with a setting
cfg.CpuCompute = origCPU + 123
// Make sure the Fingerprinter applies the override to the node resources
request := &cstructs.FingerprintRequest{Config: cfg, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
// Make sure the Fingerprinter applies the override
ok, err = f.Fingerprint(cfg, node)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
t.Fatalf("should apply")
}
if node.Resources.CPU != cfg.CpuCompute {
t.Fatalf("expected override cpu of %d but found %d", cfg.CpuCompute, node.Resources.CPU)
if response.Resources.CPU != cfg.CpuCompute {
t.Fatalf("expected override cpu of %d but found %d", cfg.CpuCompute, response.Resources.CPU)
}
}
}

View File

@@ -12,7 +12,7 @@ import (
"time"
"github.com/hashicorp/go-cleanhttp"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
)
@@ -63,14 +63,16 @@ func NewEnvAWSFingerprint(logger *log.Logger) Fingerprint {
return f
}
func (f *EnvAWSFingerprint) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
func (f *EnvAWSFingerprint) Fingerprint(request *cstructs.FingerprintRequest, response *cstructs.FingerprintResponse) error {
cfg := request.Config
// Check if we should tighten the timeout
if cfg.ReadBoolDefault(TightenNetworkTimeoutsConfig, false) {
f.timeout = 1 * time.Millisecond
}
if !f.isAWS() {
return false, nil
return nil
}
// newNetwork is populated and added to the Node's resources
@@ -78,9 +80,6 @@ func (f *EnvAWSFingerprint) Fingerprint(cfg *config.Config, node *structs.Node)
Device: "eth0",
}
if node.Links == nil {
node.Links = make(map[string]string)
}
metadataURL := os.Getenv("AWS_ENV_URL")
if metadataURL == "" {
metadataURL = DEFAULT_AWS_URL
@@ -115,10 +114,10 @@ func (f *EnvAWSFingerprint) Fingerprint(cfg *config.Config, node *structs.Node)
// if it's a URL error, assume we're not in an AWS environment
// TODO: better way to detect AWS? Check xen virtualization?
if _, ok := err.(*url.Error); ok {
return false, nil
return nil
}
// not sure what other errors it would return
return false, err
return err
}
resp, err := ioutil.ReadAll(res.Body)
res.Body.Close()
@@ -132,12 +131,12 @@ func (f *EnvAWSFingerprint) Fingerprint(cfg *config.Config, node *structs.Node)
key = structs.UniqueNamespace(key)
}
node.Attributes[key] = strings.Trim(string(resp), "\n")
response.AddAttribute(key, strings.Trim(string(resp), "\n"))
}
// copy over network specific information
if val := node.Attributes["unique.platform.aws.local-ipv4"]; val != "" {
node.Attributes["unique.network.ip-address"] = val
if val, ok := response.Attributes["unique.platform.aws.local-ipv4"]; ok && val != "" {
response.AddAttribute("unique.network.ip-address", val)
newNetwork.IP = val
newNetwork.CIDR = newNetwork.IP + "/32"
}
@@ -149,8 +148,8 @@ func (f *EnvAWSFingerprint) Fingerprint(cfg *config.Config, node *structs.Node)
} else if throughput == 0 {
// Failed to determine speed. Check if the network fingerprint got it
found := false
if node.Resources != nil && len(node.Resources.Networks) > 0 {
for _, n := range node.Resources.Networks {
if request.Node.Resources != nil && len(request.Node.Resources.Networks) > 0 {
for _, n := range request.Node.Resources.Networks {
if n.IP == newNetwork.IP {
throughput = n.MBits
found = true
@@ -165,19 +164,18 @@ func (f *EnvAWSFingerprint) Fingerprint(cfg *config.Config, node *structs.Node)
}
}
// populate Node Network Resources
if node.Resources == nil {
node.Resources = &structs.Resources{}
}
newNetwork.MBits = throughput
node.Resources.Networks = []*structs.NetworkResource{newNetwork}
response.Resources = &structs.Resources{
Networks: []*structs.NetworkResource{newNetwork},
}
// populate Links
node.Links["aws.ec2"] = fmt.Sprintf("%s.%s",
node.Attributes["platform.aws.placement.availability-zone"],
node.Attributes["unique.platform.aws.instance-id"])
response.AddLink("aws.ec2", fmt.Sprintf("%s.%s",
response.Attributes["platform.aws.placement.availability-zone"],
response.Attributes["unique.platform.aws.instance-id"]))
response.Detected = true
return true, nil
return nil
}
func (f *EnvAWSFingerprint) isAWS() bool {

View File

@@ -9,6 +9,7 @@ import (
"testing"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
)
@@ -19,13 +20,15 @@ func TestEnvAWSFingerprint_nonAws(t *testing.T) {
Attributes: make(map[string]string),
}
ok, err := f.Fingerprint(&config.Config{}, node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if ok {
t.Fatalf("Should be false without test server")
if len(response.Attributes) > 0 {
t.Fatalf("Should not apply")
}
}
@@ -51,15 +54,13 @@ func TestEnvAWSFingerprint_aws(t *testing.T) {
defer ts.Close()
os.Setenv("AWS_ENV_URL", ts.URL+"/latest/meta-data/")
ok, err := f.Fingerprint(&config.Config{}, node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
t.Fatalf("Expected AWS attributes and Links")
}
keys := []string{
"platform.aws.ami-id",
"unique.platform.aws.hostname",
@@ -74,16 +75,16 @@ func TestEnvAWSFingerprint_aws(t *testing.T) {
}
for _, k := range keys {
assertNodeAttributeContains(t, node, k)
assertNodeAttributeContains(t, response.Attributes, k)
}
if len(node.Links) == 0 {
if len(response.Links) == 0 {
t.Fatalf("Empty links for Node in AWS Fingerprint test")
}
// confirm we have at least instance-id and ami-id
for _, k := range []string{"aws.ec2"} {
assertNodeLinksContains(t, node, k)
assertNodeLinksContains(t, response.Links, k)
}
}
@@ -171,22 +172,21 @@ func TestNetworkFingerprint_AWS(t *testing.T) {
Attributes: make(map[string]string),
}
ok, err := f.Fingerprint(&config.Config{}, node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
t.Fatalf("should apply")
}
assertNodeAttributeContains(t, node, "unique.network.ip-address")
assertNodeAttributeContains(t, response.Attributes, "unique.network.ip-address")
if node.Resources == nil || len(node.Resources.Networks) == 0 {
if response.Resources == nil || len(response.Resources.Networks) == 0 {
t.Fatal("Expected to find Network Resources")
}
// Test at least the first Network Resource
net := node.Resources.Networks[0]
net := response.Resources.Networks[0]
if net.IP == "" {
t.Fatal("Expected Network Resource to have an IP")
}
@@ -217,73 +217,81 @@ func TestNetworkFingerprint_AWS_network(t *testing.T) {
os.Setenv("AWS_ENV_URL", ts.URL+"/latest/meta-data/")
f := NewEnvAWSFingerprint(testLogger())
node := &structs.Node{
Attributes: make(map[string]string),
}
{
node := &structs.Node{
Attributes: make(map[string]string),
}
cfg := &config.Config{}
ok, err := f.Fingerprint(cfg, node)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
t.Fatalf("should apply")
}
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
assertNodeAttributeContains(t, node, "unique.network.ip-address")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
if node.Resources == nil || len(node.Resources.Networks) == 0 {
t.Fatal("Expected to find Network Resources")
}
assertNodeAttributeContains(t, response.Attributes, "unique.network.ip-address")
// Test at least the first Network Resource
net := node.Resources.Networks[0]
if net.IP == "" {
t.Fatal("Expected Network Resource to have an IP")
}
if net.CIDR == "" {
t.Fatal("Expected Network Resource to have a CIDR")
}
if net.Device == "" {
t.Fatal("Expected Network Resource to have a Device Name")
}
if net.MBits != 1000 {
t.Fatalf("Expected Network Resource to have speed %d; got %d", 1000, net.MBits)
if response.Resources == nil || len(response.Resources.Networks) == 0 {
t.Fatal("Expected to find Network Resources")
}
// Test at least the first Network Resource
net := response.Resources.Networks[0]
if net.IP == "" {
t.Fatal("Expected Network Resource to have an IP")
}
if net.CIDR == "" {
t.Fatal("Expected Network Resource to have a CIDR")
}
if net.Device == "" {
t.Fatal("Expected Network Resource to have a Device Name")
}
if net.MBits != 1000 {
t.Fatalf("Expected Network Resource to have speed %d; got %d", 1000, net.MBits)
}
}
// Try again this time setting a network speed in the config
node = &structs.Node{
Attributes: make(map[string]string),
}
{
node := &structs.Node{
Attributes: make(map[string]string),
}
cfg.NetworkSpeed = 10
ok, err = f.Fingerprint(cfg, node)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
t.Fatalf("should apply")
}
cfg := &config.Config{
NetworkSpeed: 10,
}
assertNodeAttributeContains(t, node, "unique.network.ip-address")
request := &cstructs.FingerprintRequest{Config: cfg, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if node.Resources == nil || len(node.Resources.Networks) == 0 {
t.Fatal("Expected to find Network Resources")
}
assertNodeAttributeContains(t, response.Attributes, "unique.network.ip-address")
// Test at least the first Network Resource
net = node.Resources.Networks[0]
if net.IP == "" {
t.Fatal("Expected Network Resource to have an IP")
}
if net.CIDR == "" {
t.Fatal("Expected Network Resource to have a CIDR")
}
if net.Device == "" {
t.Fatal("Expected Network Resource to have a Device Name")
}
if net.MBits != 10 {
t.Fatalf("Expected Network Resource to have speed %d; got %d", 10, net.MBits)
if response.Resources == nil || len(response.Resources.Networks) == 0 {
t.Fatal("Expected to find Network Resources")
}
// Test at least the first Network Resource
net := response.Resources.Networks[0]
if net.IP == "" {
t.Fatal("Expected Network Resource to have an IP")
}
if net.CIDR == "" {
t.Fatal("Expected Network Resource to have a CIDR")
}
if net.Device == "" {
t.Fatal("Expected Network Resource to have a Device Name")
}
if net.MBits != 10 {
t.Fatalf("Expected Network Resource to have speed %d; got %d", 10, net.MBits)
}
}
}
@@ -294,11 +302,14 @@ func TestNetworkFingerprint_notAWS(t *testing.T) {
Attributes: make(map[string]string),
}
ok, err := f.Fingerprint(&config.Config{}, node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if ok {
if len(response.Attributes) > 0 {
t.Fatalf("Should not apply")
}
}

View File

@@ -14,7 +14,7 @@ import (
"time"
"github.com/hashicorp/go-cleanhttp"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
)
@@ -131,18 +131,16 @@ func checkError(err error, logger *log.Logger, desc string) error {
return err
}
func (f *EnvGCEFingerprint) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
func (f *EnvGCEFingerprint) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
cfg := req.Config
// Check if we should tighten the timeout
if cfg.ReadBoolDefault(TightenNetworkTimeoutsConfig, false) {
f.client.Timeout = 1 * time.Millisecond
}
if !f.isGCE() {
return false, nil
}
if node.Links == nil {
node.Links = make(map[string]string)
return nil
}
// Keys and whether they should be namespaced as unique. Any key whose value
@@ -159,7 +157,7 @@ func (f *EnvGCEFingerprint) Fingerprint(cfg *config.Config, node *structs.Node)
for k, unique := range keys {
value, err := f.Get(k, false)
if err != nil {
return false, checkError(err, f.logger, k)
return checkError(err, f.logger, k)
}
// assume we want blank entries
@@ -167,7 +165,7 @@ func (f *EnvGCEFingerprint) Fingerprint(cfg *config.Config, node *structs.Node)
if unique {
key = structs.UniqueNamespace(key)
}
node.Attributes[key] = strings.Trim(value, "\n")
resp.AddAttribute(key, strings.Trim(value, "\n"))
}
// These keys need everything before the final slash removed to be usable.
@@ -178,14 +176,14 @@ func (f *EnvGCEFingerprint) Fingerprint(cfg *config.Config, node *structs.Node)
for k, unique := range keys {
value, err := f.Get(k, false)
if err != nil {
return false, checkError(err, f.logger, k)
return checkError(err, f.logger, k)
}
key := "platform.gce." + k
if unique {
key = structs.UniqueNamespace(key)
}
node.Attributes[key] = strings.Trim(lastToken(value), "\n")
resp.AddAttribute(key, strings.Trim(lastToken(value), "\n"))
}
// Get internal and external IPs (if they exist)
@@ -202,10 +200,10 @@ func (f *EnvGCEFingerprint) Fingerprint(cfg *config.Config, node *structs.Node)
for _, intf := range interfaces {
prefix := "platform.gce.network." + lastToken(intf.Network)
uniquePrefix := "unique." + prefix
node.Attributes[prefix] = "true"
node.Attributes[uniquePrefix+".ip"] = strings.Trim(intf.Ip, "\n")
resp.AddAttribute(prefix, "true")
resp.AddAttribute(uniquePrefix+".ip", strings.Trim(intf.Ip, "\n"))
for index, accessConfig := range intf.AccessConfigs {
node.Attributes[uniquePrefix+".external-ip."+strconv.Itoa(index)] = accessConfig.ExternalIp
resp.AddAttribute(uniquePrefix+".external-ip."+strconv.Itoa(index), accessConfig.ExternalIp)
}
}
}
@@ -213,7 +211,7 @@ func (f *EnvGCEFingerprint) Fingerprint(cfg *config.Config, node *structs.Node)
var tagList []string
value, err = f.Get("tags", false)
if err != nil {
return false, checkError(err, f.logger, "tags")
return checkError(err, f.logger, "tags")
}
if err := json.Unmarshal([]byte(value), &tagList); err != nil {
f.logger.Printf("[WARN] fingerprint.env_gce: Error decoding instance tags: %s", err.Error())
@@ -231,13 +229,13 @@ func (f *EnvGCEFingerprint) Fingerprint(cfg *config.Config, node *structs.Node)
key = fmt.Sprintf("%s%s", attr, tag)
}
node.Attributes[key] = "true"
resp.AddAttribute(key, "true")
}
var attrDict map[string]string
value, err = f.Get("attributes/", true)
if err != nil {
return false, checkError(err, f.logger, "attributes/")
return checkError(err, f.logger, "attributes/")
}
if err := json.Unmarshal([]byte(value), &attrDict); err != nil {
f.logger.Printf("[WARN] fingerprint.env_gce: Error decoding instance attributes: %s", err.Error())
@@ -255,13 +253,17 @@ func (f *EnvGCEFingerprint) Fingerprint(cfg *config.Config, node *structs.Node)
key = fmt.Sprintf("%s%s", attr, k)
}
node.Attributes[key] = strings.Trim(v, "\n")
resp.AddAttribute(key, strings.Trim(v, "\n"))
}
// populate Links
node.Links["gce"] = node.Attributes["unique.platform.gce.id"]
if id, ok := resp.Attributes["unique.platform.gce.id"]; ok {
resp.AddLink("gce", id)
}
return true, nil
resp.Detected = true
return nil
}
func (f *EnvGCEFingerprint) isGCE() bool {

View File

@@ -9,6 +9,7 @@ import (
"testing"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
)
@@ -19,13 +20,19 @@ func TestGCEFingerprint_nonGCE(t *testing.T) {
Attributes: make(map[string]string),
}
ok, err := f.Fingerprint(&config.Config{}, node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if ok {
t.Fatalf("Should be false without test server")
if response.Detected {
t.Fatalf("expected response to not be applicable")
}
if len(response.Attributes) > 0 {
t.Fatalf("Should have zero attributes without test server")
}
}
@@ -76,13 +83,15 @@ func testFingerprint_GCE(t *testing.T, withExternalIp bool) {
os.Setenv("GCE_ENV_URL", ts.URL+"/computeMetadata/v1/instance/")
f := NewEnvGCEFingerprint(testLogger())
ok, err := f.Fingerprint(&config.Config{}, node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
t.Fatalf("should apply")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
keys := []string{
@@ -100,40 +109,40 @@ func testFingerprint_GCE(t *testing.T, withExternalIp bool) {
}
for _, k := range keys {
assertNodeAttributeContains(t, node, k)
assertNodeAttributeContains(t, response.Attributes, k)
}
if len(node.Links) == 0 {
if len(response.Links) == 0 {
t.Fatalf("Empty links for Node in GCE Fingerprint test")
}
// Make sure Links contains the GCE ID.
for _, k := range []string{"gce"} {
assertNodeLinksContains(t, node, k)
assertNodeLinksContains(t, response.Links, k)
}
assertNodeAttributeEquals(t, node, "unique.platform.gce.id", "12345")
assertNodeAttributeEquals(t, node, "unique.platform.gce.hostname", "instance-1.c.project.internal")
assertNodeAttributeEquals(t, node, "platform.gce.zone", "us-central1-f")
assertNodeAttributeEquals(t, node, "platform.gce.machine-type", "n1-standard-1")
assertNodeAttributeEquals(t, node, "platform.gce.network.default", "true")
assertNodeAttributeEquals(t, node, "unique.platform.gce.network.default.ip", "10.240.0.5")
assertNodeAttributeEquals(t, response.Attributes, "unique.platform.gce.id", "12345")
assertNodeAttributeEquals(t, response.Attributes, "unique.platform.gce.hostname", "instance-1.c.project.internal")
assertNodeAttributeEquals(t, response.Attributes, "platform.gce.zone", "us-central1-f")
assertNodeAttributeEquals(t, response.Attributes, "platform.gce.machine-type", "n1-standard-1")
assertNodeAttributeEquals(t, response.Attributes, "platform.gce.network.default", "true")
assertNodeAttributeEquals(t, response.Attributes, "unique.platform.gce.network.default.ip", "10.240.0.5")
if withExternalIp {
assertNodeAttributeEquals(t, node, "unique.platform.gce.network.default.external-ip.0", "104.44.55.66")
assertNodeAttributeEquals(t, node, "unique.platform.gce.network.default.external-ip.1", "104.44.55.67")
} else if _, ok := node.Attributes["unique.platform.gce.network.default.external-ip.0"]; ok {
assertNodeAttributeEquals(t, response.Attributes, "unique.platform.gce.network.default.external-ip.0", "104.44.55.66")
assertNodeAttributeEquals(t, response.Attributes, "unique.platform.gce.network.default.external-ip.1", "104.44.55.67")
} else if _, ok := response.Attributes["unique.platform.gce.network.default.external-ip.0"]; ok {
t.Fatal("unique.platform.gce.network.default.external-ip is set without an external IP")
}
assertNodeAttributeEquals(t, node, "platform.gce.scheduling.automatic-restart", "TRUE")
assertNodeAttributeEquals(t, node, "platform.gce.scheduling.on-host-maintenance", "MIGRATE")
assertNodeAttributeEquals(t, node, "platform.gce.cpu-platform", "Intel Ivy Bridge")
assertNodeAttributeEquals(t, node, "platform.gce.tag.abc", "true")
assertNodeAttributeEquals(t, node, "platform.gce.tag.def", "true")
assertNodeAttributeEquals(t, node, "unique.platform.gce.tag.foo", "true")
assertNodeAttributeEquals(t, node, "platform.gce.attr.ghi", "111")
assertNodeAttributeEquals(t, node, "platform.gce.attr.jkl", "222")
assertNodeAttributeEquals(t, node, "unique.platform.gce.attr.bar", "333")
assertNodeAttributeEquals(t, response.Attributes, "platform.gce.scheduling.automatic-restart", "TRUE")
assertNodeAttributeEquals(t, response.Attributes, "platform.gce.scheduling.on-host-maintenance", "MIGRATE")
assertNodeAttributeEquals(t, response.Attributes, "platform.gce.cpu-platform", "Intel Ivy Bridge")
assertNodeAttributeEquals(t, response.Attributes, "platform.gce.tag.abc", "true")
assertNodeAttributeEquals(t, response.Attributes, "platform.gce.tag.def", "true")
assertNodeAttributeEquals(t, response.Attributes, "unique.platform.gce.tag.foo", "true")
assertNodeAttributeEquals(t, response.Attributes, "platform.gce.attr.ghi", "111")
assertNodeAttributeEquals(t, response.Attributes, "platform.gce.attr.jkl", "222")
assertNodeAttributeEquals(t, response.Attributes, "unique.platform.gce.attr.bar", "333")
}
const GCE_routes = `

View File

@@ -6,8 +6,7 @@ import (
"sort"
"time"
"github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/nomad/structs"
cstructs "github.com/hashicorp/nomad/client/structs"
)
// EmptyDuration is to be used by fingerprinters that are not periodic.
@@ -92,8 +91,8 @@ type Factory func(*log.Logger) Fingerprint
// many of them can be applied on a particular host.
type Fingerprint interface {
// Fingerprint is used to update properties of the Node,
// and returns if the fingerprint was applicable and a potential error.
Fingerprint(*config.Config, *structs.Node) (bool, error)
// and returns a diff of updated node attributes and a potential error.
Fingerprint(*cstructs.FingerprintRequest, *cstructs.FingerprintResponse) error
// Periodic is a mechanism for the fingerprinter to indicate that it should
// be run periodically. The return value is a boolean indicating if it

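
The interface change above replaces the `(bool, error)` return with a request/response pair, so fingerprinters now report a diff of attributes rather than mutating the node directly. A minimal sketch of an implementation against the new shape, using stand-in types in place of Nomad's `cstructs` (the real `FingerprintRequest`/`FingerprintResponse` carry more fields, and `AddAttribute`'s lazy map initialization here is an assumption inferred from how the diff's callers use it):

```go
package main

import "fmt"

// FingerprintRequest is a stand-in for cstructs.FingerprintRequest.
type FingerprintRequest struct {
	NodeName string
}

// FingerprintResponse is a stand-in for cstructs.FingerprintResponse:
// a diff of attributes plus whether the fingerprint applied.
type FingerprintResponse struct {
	Attributes map[string]string
	Detected   bool
}

// AddAttribute lazily initializes the attribute map, mirroring how the
// fingerprinters in the diff call resp.AddAttribute on a zero-value response.
func (r *FingerprintResponse) AddAttribute(key, value string) {
	if r.Attributes == nil {
		r.Attributes = make(map[string]string)
	}
	r.Attributes[key] = value
}

// HostnameFingerprint is a toy fingerprinter implementing the new signature.
type HostnameFingerprint struct{}

func (f *HostnameFingerprint) Fingerprint(req *FingerprintRequest, resp *FingerprintResponse) error {
	// Record the attribute diff in the response instead of writing to the node.
	resp.AddAttribute("unique.hostname", req.NodeName)
	resp.Detected = true
	return nil
}

func main() {
	var resp FingerprintResponse
	f := &HostnameFingerprint{}
	if err := f.Fingerprint(&FingerprintRequest{NodeName: "node-1"}, &resp); err != nil {
		panic(err)
	}
	fmt.Println(resp.Detected, resp.Attributes["unique.hostname"])
}
```

Callers (as in the updated tests) allocate a zero-value response, pass a pointer, and inspect `response.Detected` and `response.Attributes` afterward, rather than checking a returned `ok` boolean.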
View File

@@ -8,6 +8,7 @@ import (
"testing"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
)
@@ -15,45 +16,63 @@ func testLogger() *log.Logger {
return log.New(os.Stderr, "", log.LstdFlags)
}
func assertFingerprintOK(t *testing.T, fp Fingerprint, node *structs.Node) {
ok, err := fp.Fingerprint(new(config.Config), node)
func assertFingerprintOK(t *testing.T, fp Fingerprint, node *structs.Node) *cstructs.FingerprintResponse {
request := &cstructs.FingerprintRequest{Config: new(config.Config), Node: node}
var response cstructs.FingerprintResponse
err := fp.Fingerprint(request, &response)
if err != nil {
t.Fatalf("Failed to fingerprint: %s", err)
}
if !ok {
if len(response.Attributes) == 0 {
t.Fatalf("Failed to apply node attributes")
}
return &response
}
func assertNodeAttributeContains(t *testing.T, node *structs.Node, attribute string) {
actual, found := node.Attributes[attribute]
func assertNodeAttributeContains(t *testing.T, nodeAttributes map[string]string, attribute string) {
if nodeAttributes == nil {
t.Errorf("expected an initialized map for node attributes")
return
}
actual, found := nodeAttributes[attribute]
if !found {
t.Errorf("Expected to find Attribute `%s`\n\n[DEBUG] %#v", attribute, node)
t.Errorf("Expected to find Attribute `%s`\n\n[DEBUG] %#v", attribute, nodeAttributes)
return
}
if actual == "" {
t.Errorf("Expected non-empty Attribute value for `%s`\n\n[DEBUG] %#v", attribute, node)
t.Errorf("Expected non-empty Attribute value for `%s`\n\n[DEBUG] %#v", attribute, nodeAttributes)
}
}
func assertNodeAttributeEquals(t *testing.T, node *structs.Node, attribute string, expected string) {
actual, found := node.Attributes[attribute]
func assertNodeAttributeEquals(t *testing.T, nodeAttributes map[string]string, attribute string, expected string) {
if nodeAttributes == nil {
t.Errorf("expected an initialized map for node attributes")
return
}
actual, found := nodeAttributes[attribute]
if !found {
t.Errorf("Expected to find Attribute `%s`; unable to check value\n\n[DEBUG] %#v", attribute, node)
t.Errorf("Expected to find Attribute `%s`; unable to check value\n\n[DEBUG] %#v", attribute, nodeAttributes)
return
}
if expected != actual {
t.Errorf("Expected `%s` Attribute to be `%s`, found `%s`\n\n[DEBUG] %#v", attribute, expected, actual, node)
t.Errorf("Expected `%s` Attribute to be `%s`, found `%s`\n\n[DEBUG] %#v", attribute, expected, actual, nodeAttributes)
}
}
func assertNodeLinksContains(t *testing.T, node *structs.Node, link string) {
actual, found := node.Links[link]
func assertNodeLinksContains(t *testing.T, nodeLinks map[string]string, link string) {
if nodeLinks == nil {
t.Errorf("expected an initialized map for node links")
return
}
actual, found := nodeLinks[link]
if !found {
t.Errorf("Expected to find Link `%s`\n\n[DEBUG] %#v", link, node)
t.Errorf("Expected to find Link `%s`", link)
return
}
if actual == "" {
t.Errorf("Expected non-empty Link value for `%s`\n\n[DEBUG] %#v", link, node)
t.Errorf("Expected non-empty Link value for `%s`", link)
}
}

View File

@@ -4,8 +4,7 @@ import (
"log"
"runtime"
"github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/nomad/structs"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/shirou/gopsutil/host"
)
@@ -21,20 +20,21 @@ func NewHostFingerprint(logger *log.Logger) Fingerprint {
return f
}
func (f *HostFingerprint) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
func (f *HostFingerprint) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
hostInfo, err := host.Info()
if err != nil {
f.logger.Println("[WARN] Error retrieving host information: ", err)
return false, err
return err
}
node.Attributes["os.name"] = hostInfo.Platform
node.Attributes["os.version"] = hostInfo.PlatformVersion
resp.AddAttribute("os.name", hostInfo.Platform)
resp.AddAttribute("os.version", hostInfo.PlatformVersion)
node.Attributes["kernel.name"] = runtime.GOOS
node.Attributes["kernel.version"] = hostInfo.KernelVersion
resp.AddAttribute("kernel.name", runtime.GOOS)
resp.AddAttribute("kernel.version", hostInfo.KernelVersion)
node.Attributes["unique.hostname"] = hostInfo.Hostname
resp.AddAttribute("unique.hostname", hostInfo.Hostname)
resp.Detected = true
return true, nil
return nil
}

View File

@@ -4,6 +4,7 @@ import (
"testing"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
)
@@ -12,16 +13,24 @@ func TestHostFingerprint(t *testing.T) {
node := &structs.Node{
Attributes: make(map[string]string),
}
ok, err := f.Fingerprint(&config.Config{}, node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
t.Fatalf("should apply")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
if len(response.Attributes) == 0 {
t.Fatalf("should generate a diff of node attributes")
}
// Host info
for _, key := range []string{"os.name", "os.version", "unique.hostname", "kernel.name"} {
assertNodeAttributeContains(t, node, key)
assertNodeAttributeContains(t, response.Attributes, key)
}
}

View File

@@ -4,7 +4,7 @@ import (
"fmt"
"log"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
"github.com/shirou/gopsutil/mem"
)
@@ -23,21 +23,20 @@ func NewMemoryFingerprint(logger *log.Logger) Fingerprint {
return f
}
func (f *MemoryFingerprint) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
func (f *MemoryFingerprint) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
memInfo, err := mem.VirtualMemory()
if err != nil {
f.logger.Printf("[WARN] Error reading memory information: %s", err)
return false, err
return err
}
if memInfo.Total > 0 {
node.Attributes["memory.totalbytes"] = fmt.Sprintf("%d", memInfo.Total)
resp.AddAttribute("memory.totalbytes", fmt.Sprintf("%d", memInfo.Total))
if node.Resources == nil {
node.Resources = &structs.Resources{}
resp.Resources = &structs.Resources{
MemoryMB: int(memInfo.Total / 1024 / 1024),
}
node.Resources.MemoryMB = int(memInfo.Total / 1024 / 1024)
}
return true, nil
return nil
}

View File

@@ -4,6 +4,7 @@ import (
"testing"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
)
@@ -12,21 +13,20 @@ func TestMemoryFingerprint(t *testing.T) {
node := &structs.Node{
Attributes: make(map[string]string),
}
ok, err := f.Fingerprint(&config.Config{}, node)
request := &cstructs.FingerprintRequest{Config: &config.Config{}, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
t.Fatalf("should apply")
}
assertNodeAttributeContains(t, node, "memory.totalbytes")
assertNodeAttributeContains(t, response.Attributes, "memory.totalbytes")
if node.Resources == nil {
t.Fatalf("Node Resources was nil")
if response.Resources == nil {
t.Fatalf("response resources should not be nil")
}
if node.Resources.MemoryMB == 0 {
t.Errorf("Expected node.Resources.MemoryMB to be non-zero")
if response.Resources.MemoryMB == 0 {
t.Fatalf("Expected node.Resources.MemoryMB to be non-zero")
}
}

View File

@@ -6,7 +6,7 @@ import (
"net"
sockaddr "github.com/hashicorp/go-sockaddr"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
)
@@ -61,19 +61,17 @@ func NewNetworkFingerprint(logger *log.Logger) Fingerprint {
return f
}
func (f *NetworkFingerprint) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
if node.Resources == nil {
node.Resources = &structs.Resources{}
}
func (f *NetworkFingerprint) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
cfg := req.Config
// Find the named interface
intf, err := f.findInterface(cfg.NetworkInterface)
switch {
case err != nil:
return false, fmt.Errorf("Error while detecting network interface during fingerprinting: %v", err)
return fmt.Errorf("Error while detecting network interface during fingerprinting: %v", err)
case intf == nil:
// No interface could be found
return false, nil
return nil
}
// Record the throughput of the interface
@@ -94,22 +92,23 @@ func (f *NetworkFingerprint) Fingerprint(cfg *config.Config, node *structs.Node)
disallowLinkLocal := cfg.ReadBoolDefault(networkDisallowLinkLocalOption, networkDisallowLinkLocalDefault)
nwResources, err := f.createNetworkResources(mbits, intf, disallowLinkLocal)
if err != nil {
return false, err
return err
}
// Add the network resources to the node
node.Resources.Networks = nwResources
resp.Resources = &structs.Resources{
Networks: nwResources,
}
for _, nwResource := range nwResources {
f.logger.Printf("[DEBUG] fingerprint.network: Detected interface %v with IP: %v", intf.Name, nwResource.IP)
}
// Deprecated, setting the first IP as unique IP for the node
if len(nwResources) > 0 {
node.Attributes["unique.network.ip-address"] = nwResources[0].IP
resp.AddAttribute("unique.network.ip-address", nwResources[0].IP)
}
resp.Detected = true
// the fingerprint applies because we have a network connection
return true, nil
return nil
}
// createNetworkResources creates network resources for every IP

View File

@@ -7,6 +7,7 @@ import (
"testing"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
)
@@ -189,28 +190,36 @@ func TestNetworkFingerprint_basic(t *testing.T) {
}
cfg := &config.Config{NetworkSpeed: 101}
ok, err := f.Fingerprint(cfg, node)
request := &cstructs.FingerprintRequest{Config: cfg, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
attributes := response.Attributes
if len(attributes) == 0 {
t.Fatalf("should apply (HINT: working offline? Set env %q=y)", skipOnlineTestsEnvVar)
}
assertNodeAttributeContains(t, node, "unique.network.ip-address")
assertNodeAttributeContains(t, attributes, "unique.network.ip-address")
ip := node.Attributes["unique.network.ip-address"]
ip := attributes["unique.network.ip-address"]
match := net.ParseIP(ip)
if match == nil {
t.Fatalf("Bad IP match: %s", ip)
}
if node.Resources == nil || len(node.Resources.Networks) == 0 {
if response.Resources == nil || len(response.Resources.Networks) == 0 {
t.Fatal("Expected to find Network Resources")
}
// Test at least the first Network Resource
net := node.Resources.Networks[0]
net := response.Resources.Networks[0]
if net.IP == "" {
t.Fatal("Expected Network Resource to not be empty")
}
@@ -232,13 +241,19 @@ func TestNetworkFingerprint_default_device_absent(t *testing.T) {
}
cfg := &config.Config{NetworkSpeed: 100, NetworkInterface: "eth0"}
ok, err := f.Fingerprint(cfg, node)
request := &cstructs.FingerprintRequest{Config: cfg, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err == nil {
t.Fatalf("expected a non-nil error")
}
if ok {
t.Fatalf("ok: %v", ok)
if response.Detected {
t.Fatalf("expected response to not be applicable")
}
if len(response.Attributes) != 0 {
t.Fatalf("attributes should be zero but instead are: %v", response.Attributes)
}
}
@@ -249,28 +264,36 @@ func TestNetworkFingerPrint_default_device(t *testing.T) {
}
cfg := &config.Config{NetworkSpeed: 100, NetworkInterface: "lo"}
ok, err := f.Fingerprint(cfg, node)
request := &cstructs.FingerprintRequest{Config: cfg, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
attributes := response.Attributes
if len(attributes) == 0 {
t.Fatalf("should apply")
}
assertNodeAttributeContains(t, node, "unique.network.ip-address")
assertNodeAttributeContains(t, attributes, "unique.network.ip-address")
ip := node.Attributes["unique.network.ip-address"]
ip := attributes["unique.network.ip-address"]
match := net.ParseIP(ip)
if match == nil {
t.Fatalf("Bad IP match: %s", ip)
}
if node.Resources == nil || len(node.Resources.Networks) == 0 {
if response.Resources == nil || len(response.Resources.Networks) == 0 {
t.Fatal("Expected to find Network Resources")
}
// Test at least the first Network Resource
net := node.Resources.Networks[0]
net := response.Resources.Networks[0]
if net.IP == "" {
t.Fatal("Expected Network Resource to not be empty")
}
@@ -292,28 +315,32 @@ func TestNetworkFingerPrint_LinkLocal_Allowed(t *testing.T) {
}
cfg := &config.Config{NetworkSpeed: 100, NetworkInterface: "eth3"}
ok, err := f.Fingerprint(cfg, node)
request := &cstructs.FingerprintRequest{Config: cfg, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
t.Fatalf("should apply")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
assertNodeAttributeContains(t, node, "unique.network.ip-address")
attributes := response.Attributes
assertNodeAttributeContains(t, attributes, "unique.network.ip-address")
ip := node.Attributes["unique.network.ip-address"]
ip := attributes["unique.network.ip-address"]
match := net.ParseIP(ip)
if match == nil {
t.Fatalf("Bad IP match: %s", ip)
}
if node.Resources == nil || len(node.Resources.Networks) == 0 {
if response.Resources == nil || len(response.Resources.Networks) == 0 {
t.Fatal("Expected to find Network Resources")
}
// Test at least the first Network Resource
net := node.Resources.Networks[0]
net := response.Resources.Networks[0]
if net.IP == "" {
t.Fatal("Expected Network Resource to not be empty")
}
@@ -335,28 +362,36 @@ func TestNetworkFingerPrint_LinkLocal_Allowed_MixedIntf(t *testing.T) {
}
cfg := &config.Config{NetworkSpeed: 100, NetworkInterface: "eth4"}
ok, err := f.Fingerprint(cfg, node)
request := &cstructs.FingerprintRequest{Config: cfg, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
t.Fatalf("should apply")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
assertNodeAttributeContains(t, node, "unique.network.ip-address")
attributes := response.Attributes
if len(attributes) == 0 {
t.Fatalf("should apply attributes")
}
ip := node.Attributes["unique.network.ip-address"]
assertNodeAttributeContains(t, attributes, "unique.network.ip-address")
ip := attributes["unique.network.ip-address"]
match := net.ParseIP(ip)
if match == nil {
t.Fatalf("Bad IP match: %s", ip)
}
if node.Resources == nil || len(node.Resources.Networks) == 0 {
if response.Resources == nil || len(response.Resources.Networks) == 0 {
t.Fatal("Expected to find Network Resources")
}
// Test at least the first Network Resource
net := node.Resources.Networks[0]
net := response.Resources.Networks[0]
if net.IP == "" {
t.Fatal("Expected Network Resource to not be empty")
}
@@ -387,11 +422,18 @@ func TestNetworkFingerPrint_LinkLocal_Disallowed(t *testing.T) {
},
}
ok, err := f.Fingerprint(cfg, node)
request := &cstructs.FingerprintRequest{Config: cfg, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
t.Fatalf("should not apply")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
if len(response.Attributes) != 0 {
t.Fatalf("should not apply attributes")
}
}

View File

@@ -3,8 +3,7 @@ package fingerprint
import (
"log"
client "github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/nomad/structs"
cstructs "github.com/hashicorp/nomad/client/structs"
)
// NomadFingerprint is used to fingerprint the Nomad version
@@ -19,8 +18,9 @@ func NewNomadFingerprint(logger *log.Logger) Fingerprint {
return f
}
func (f *NomadFingerprint) Fingerprint(config *client.Config, node *structs.Node) (bool, error) {
node.Attributes["nomad.version"] = config.Version.VersionNumber()
node.Attributes["nomad.revision"] = config.Version.Revision
return true, nil
func (f *NomadFingerprint) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
resp.AddAttribute("nomad.version", req.Config.Version.VersionNumber())
resp.AddAttribute("nomad.revision", req.Config.Version.Revision)
resp.Detected = true
return nil
}

View File

@@ -4,6 +4,7 @@ import (
"testing"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
"github.com/hashicorp/nomad/version"
)
@@ -21,17 +22,27 @@ func TestNomadFingerprint(t *testing.T) {
Version: v,
},
}
ok, err := f.Fingerprint(c, node)
request := &cstructs.FingerprintRequest{Config: c, Node: node}
var response cstructs.FingerprintResponse
err := f.Fingerprint(request, &response)
if err != nil {
t.Fatalf("err: %v", err)
}
if !ok {
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
if len(response.Attributes) == 0 {
t.Fatalf("should apply")
}
if node.Attributes["nomad.version"] != v {
if response.Attributes["nomad.version"] != v {
t.Fatalf("incorrect version")
}
if node.Attributes["nomad.revision"] != r {
if response.Attributes["nomad.revision"] != r {
t.Fatalf("incorrect revision")
}
}

View File

@@ -5,8 +5,7 @@ import (
"strings"
"github.com/hashicorp/consul-template/signals"
"github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/nomad/structs"
cstructs "github.com/hashicorp/nomad/client/structs"
)
// SignalFingerprint is used to fingerprint the available signals
@@ -21,13 +20,14 @@ func NewSignalFingerprint(logger *log.Logger) Fingerprint {
return f
}
func (f *SignalFingerprint) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
func (f *SignalFingerprint) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
// Build the list of available signals
sigs := make([]string, 0, len(signals.SignalLookup))
for signal := range signals.SignalLookup {
sigs = append(sigs, signal)
}
node.Attributes["os.signals"] = strings.Join(sigs, ",")
return true, nil
resp.AddAttribute("os.signals", strings.Join(sigs, ","))
resp.Detected = true
return nil
}

View File

@@ -12,6 +12,6 @@ func TestSignalFingerprint(t *testing.T) {
Attributes: make(map[string]string),
}
assertFingerprintOK(t, fp, node)
assertNodeAttributeContains(t, node, "os.signals")
response := assertFingerprintOK(t, fp, node)
assertNodeAttributeContains(t, response.Attributes, "os.signals")
}

View File

@@ -6,7 +6,7 @@ import (
"os"
"strconv"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
)
@@ -24,15 +24,8 @@ func NewStorageFingerprint(logger *log.Logger) Fingerprint {
return fp
}
func (f *StorageFingerprint) Fingerprint(cfg *config.Config, node *structs.Node) (bool, error) {
// Initialize these to empty defaults
node.Attributes["unique.storage.volume"] = ""
node.Attributes["unique.storage.bytestotal"] = ""
node.Attributes["unique.storage.bytesfree"] = ""
if node.Resources == nil {
node.Resources = &structs.Resources{}
}
func (f *StorageFingerprint) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
cfg := req.Config
// Guard against unset AllocDir
storageDir := cfg.AllocDir
@@ -40,20 +33,24 @@ func (f *StorageFingerprint) Fingerprint(cfg *config.Config, node *structs.Node)
var err error
storageDir, err = os.Getwd()
if err != nil {
return false, fmt.Errorf("unable to get CWD from filesystem: %s", err)
return fmt.Errorf("unable to get CWD from filesystem: %s", err)
}
}
volume, total, free, err := f.diskFree(storageDir)
if err != nil {
return false, fmt.Errorf("failed to determine disk space for %s: %v", storageDir, err)
return fmt.Errorf("failed to determine disk space for %s: %v", storageDir, err)
}
node.Attributes["unique.storage.volume"] = volume
node.Attributes["unique.storage.bytestotal"] = strconv.FormatUint(total, 10)
node.Attributes["unique.storage.bytesfree"] = strconv.FormatUint(free, 10)
resp.AddAttribute("unique.storage.volume", volume)
resp.AddAttribute("unique.storage.bytestotal", strconv.FormatUint(total, 10))
resp.AddAttribute("unique.storage.bytesfree", strconv.FormatUint(free, 10))
node.Resources.DiskMB = int(free / bytesPerMegabyte)
// set the disk size for the response
resp.Resources = &structs.Resources{
DiskMB: int(free / bytesPerMegabyte),
}
resp.Detected = true
return true, nil
return nil
}

View File

@@ -13,17 +13,21 @@ func TestStorageFingerprint(t *testing.T) {
Attributes: make(map[string]string),
}
assertFingerprintOK(t, fp, node)
response := assertFingerprintOK(t, fp, node)
assertNodeAttributeContains(t, node, "unique.storage.volume")
assertNodeAttributeContains(t, node, "unique.storage.bytestotal")
assertNodeAttributeContains(t, node, "unique.storage.bytesfree")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
total, err := strconv.ParseInt(node.Attributes["unique.storage.bytestotal"], 10, 64)
assertNodeAttributeContains(t, response.Attributes, "unique.storage.volume")
assertNodeAttributeContains(t, response.Attributes, "unique.storage.bytestotal")
assertNodeAttributeContains(t, response.Attributes, "unique.storage.bytesfree")
total, err := strconv.ParseInt(response.Attributes["unique.storage.bytestotal"], 10, 64)
if err != nil {
t.Fatalf("Failed to parse unique.storage.bytestotal: %s", err)
}
free, err := strconv.ParseInt(node.Attributes["unique.storage.bytesfree"], 10, 64)
free, err := strconv.ParseInt(response.Attributes["unique.storage.bytesfree"], 10, 64)
if err != nil {
t.Fatalf("Failed to parse unique.storage.bytesfree: %s", err)
}
@@ -32,10 +36,10 @@ func TestStorageFingerprint(t *testing.T) {
t.Fatalf("unique.storage.bytesfree %d is larger than unique.storage.bytestotal %d", free, total)
}
if node.Resources == nil {
if response.Resources == nil {
t.Fatalf("Node Resources was nil")
}
if node.Resources.DiskMB == 0 {
if response.Resources.DiskMB == 0 {
t.Errorf("Expected node.Resources.DiskMB to be non-zero")
}
}

View File

@@ -7,8 +7,7 @@ import (
"strings"
"time"
client "github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/nomad/structs"
cstructs "github.com/hashicorp/nomad/client/structs"
vapi "github.com/hashicorp/vault/api"
)
@@ -29,9 +28,11 @@ func NewVaultFingerprint(logger *log.Logger) Fingerprint {
return &VaultFingerprint{logger: logger, lastState: vaultUnavailable}
}
func (f *VaultFingerprint) Fingerprint(config *client.Config, node *structs.Node) (bool, error) {
func (f *VaultFingerprint) Fingerprint(req *cstructs.FingerprintRequest, resp *cstructs.FingerprintResponse) error {
config := req.Config
if config.VaultConfig == nil || !config.VaultConfig.IsEnabled() {
return false, nil
return nil
}
// Only create the client once to avoid creating too many connections to
@@ -39,35 +40,33 @@ func (f *VaultFingerprint) Fingerprint(config *client.Config, node *structs.Node
if f.client == nil {
vaultConfig, err := config.VaultConfig.ApiConfig()
if err != nil {
return false, fmt.Errorf("Failed to initialize the Vault client config: %v", err)
return fmt.Errorf("Failed to initialize the Vault client config: %v", err)
}
f.client, err = vapi.NewClient(vaultConfig)
if err != nil {
return false, fmt.Errorf("Failed to initialize Vault client: %s", err)
return fmt.Errorf("Failed to initialize Vault client: %s", err)
}
}
// Connect to vault and parse its information
status, err := f.client.Sys().SealStatus()
if err != nil {
// Clear any attributes set by a previous fingerprint.
f.clearVaultAttributes(node)
f.clearVaultAttributes(resp)
// Print a message indicating that Vault is not available anymore
if f.lastState == vaultAvailable {
f.logger.Printf("[INFO] fingerprint.vault: Vault is unavailable")
}
f.lastState = vaultUnavailable
return false, nil
return nil
}
node.Attributes["vault.accessible"] = strconv.FormatBool(true)
resp.AddAttribute("vault.accessible", strconv.FormatBool(true))
// We strip the Vault prefix because before 0.6.2 the version looks like:
// status.Version = "Vault v0.6.1"
node.Attributes["vault.version"] = strings.TrimPrefix(status.Version, "Vault ")
node.Attributes["vault.cluster_id"] = status.ClusterID
node.Attributes["vault.cluster_name"] = status.ClusterName
resp.AddAttribute("vault.version", strings.TrimPrefix(status.Version, "Vault "))
resp.AddAttribute("vault.cluster_id", status.ClusterID)
resp.AddAttribute("vault.cluster_name", status.ClusterName)
// If Vault was previously unavailable print a message to indicate the Agent
// is available now
@@ -75,16 +74,17 @@ func (f *VaultFingerprint) Fingerprint(config *client.Config, node *structs.Node
f.logger.Printf("[INFO] fingerprint.vault: Vault is available")
}
f.lastState = vaultAvailable
return true, nil
}
func (f *VaultFingerprint) clearVaultAttributes(n *structs.Node) {
delete(n.Attributes, "vault.accessible")
delete(n.Attributes, "vault.version")
delete(n.Attributes, "vault.cluster_id")
delete(n.Attributes, "vault.cluster_name")
resp.Detected = true
return nil
}
func (f *VaultFingerprint) Periodic() (bool, time.Duration) {
return true, 15 * time.Second
}
func (f *VaultFingerprint) clearVaultAttributes(r *cstructs.FingerprintResponse) {
r.RemoveAttribute("vault.accessible")
r.RemoveAttribute("vault.version")
r.RemoveAttribute("vault.cluster_id")
r.RemoveAttribute("vault.cluster_name")
}

View File

@@ -4,6 +4,7 @@ import (
"testing"
"github.com/hashicorp/nomad/client/config"
cstructs "github.com/hashicorp/nomad/client/structs"
"github.com/hashicorp/nomad/nomad/structs"
"github.com/hashicorp/nomad/testutil"
)
@@ -17,19 +18,22 @@ func TestVaultFingerprint(t *testing.T) {
Attributes: make(map[string]string),
}
config := config.DefaultConfig()
config.VaultConfig = tv.Config
conf := config.DefaultConfig()
conf.VaultConfig = tv.Config
ok, err := fp.Fingerprint(config, node)
request := &cstructs.FingerprintRequest{Config: conf, Node: node}
var response cstructs.FingerprintResponse
err := fp.Fingerprint(request, &response)
if err != nil {
t.Fatalf("Failed to fingerprint: %s", err)
}
if !ok {
t.Fatalf("Failed to apply node attributes")
if !response.Detected {
t.Fatalf("expected response to be applicable")
}
assertNodeAttributeContains(t, node, "vault.accessible")
assertNodeAttributeContains(t, node, "vault.version")
assertNodeAttributeContains(t, node, "vault.cluster_id")
assertNodeAttributeContains(t, node, "vault.cluster_name")
assertNodeAttributeContains(t, response.Attributes, "vault.accessible")
assertNodeAttributeContains(t, response.Attributes, "vault.version")
assertNodeAttributeContains(t, response.Attributes, "vault.cluster_id")
assertNodeAttributeContains(t, response.Attributes, "vault.cluster_name")
}

View File

@@ -4,6 +4,9 @@ import (
"crypto/md5"
"io"
"strconv"
"github.com/hashicorp/nomad/client/config"
"github.com/hashicorp/nomad/nomad/structs"
)
// MemoryStats holds memory usage related stats
@@ -184,3 +187,65 @@ func (d *DriverNetwork) Hash() []byte {
}
return h.Sum(nil)
}
// FingerprintRequest is a request which a fingerprinter accepts to fingerprint
// the node
type FingerprintRequest struct {
Config *config.Config
Node *structs.Node
}
// FingerprintResponse is the response which a fingerprinter annotates with the
// results of the fingerprint method
type FingerprintResponse struct {
Attributes map[string]string
Links map[string]string
Resources *structs.Resources
// Detected indicates whether the fingerprinter detected that the
// resource was available
Detected bool
}
// AddAttribute adds the name and value for a node attribute to the fingerprint
// response
func (f *FingerprintResponse) AddAttribute(name, value string) {
// initialize Attributes if it has not been already
if f.Attributes == nil {
f.Attributes = make(map[string]string)
}
f.Attributes[name] = value
}
// RemoveAttribute sets the given attribute to empty, which will later remove
// it entirely from the node
func (f *FingerprintResponse) RemoveAttribute(name string) {
// initialize Attributes if it has not been already
if f.Attributes == nil {
f.Attributes = make(map[string]string)
}
f.Attributes[name] = ""
}
// AddLink adds a link entry to the fingerprint response
func (f *FingerprintResponse) AddLink(name, value string) {
// initialize Links if it has not been already
if f.Links == nil {
f.Links = make(map[string]string)
}
f.Links[name] = value
}
// RemoveLink removes a link entry from the fingerprint response. This will
// later remove it entirely from the node
func (f *FingerprintResponse) RemoveLink(name string) {
// initialize Links if it has not been already
if f.Links == nil {
f.Links = make(map[string]string)
}
f.Links[name] = ""
}
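The helper methods above lazily initialize their maps, so a zero-value `FingerprintResponse` is safe to use without a constructor. A minimal standalone sketch of the same pattern (the struct here mirrors the diff's `cstructs.FingerprintResponse`, trimmed to the attribute map for illustration):

```go
package main

import "fmt"

// FingerprintResponse mirrors the response type added in the diff,
// trimmed to the attribute map only.
type FingerprintResponse struct {
	Attributes map[string]string
	Detected   bool
}

// AddAttribute lazily initializes the map, so the zero value is usable.
func (f *FingerprintResponse) AddAttribute(name, value string) {
	if f.Attributes == nil {
		f.Attributes = make(map[string]string)
	}
	f.Attributes[name] = value
}

// RemoveAttribute sets the value to "", which the client later
// interprets as "delete this attribute from the node".
func (f *FingerprintResponse) RemoveAttribute(name string) {
	if f.Attributes == nil {
		f.Attributes = make(map[string]string)
	}
	f.Attributes[name] = ""
}

func main() {
	var resp FingerprintResponse // zero value, no constructor needed
	resp.AddAttribute("vault.accessible", "true")
	resp.RemoveAttribute("vault.version")
	resp.Detected = true
	fmt.Println(resp.Attributes["vault.accessible"], len(resp.Attributes)) // true 2
}
```

Note that removal is expressed as an empty-string value rather than a map delete, so the client can distinguish "unset this attribute" from "this attribute was never fingerprinted".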

View File

@@ -163,6 +163,12 @@ func convertServerConfig(agentConfig *Config, logOutput io.Writer) (*nomad.Confi
if agentConfig.Server.NonVotingServer {
conf.NonVoter = true
}
if agentConfig.Server.RedundancyZone != "" {
conf.RedundancyZone = agentConfig.Server.RedundancyZone
}
if agentConfig.Server.UpgradeVersion != "" {
conf.UpgradeVersion = agentConfig.Server.UpgradeVersion
}
if agentConfig.Autopilot != nil {
if agentConfig.Autopilot.CleanupDeadServers != nil {
conf.AutopilotConfig.CleanupDeadServers = *agentConfig.Autopilot.CleanupDeadServers
@@ -176,14 +182,14 @@ func convertServerConfig(agentConfig *Config, logOutput io.Writer) (*nomad.Confi
if agentConfig.Autopilot.MaxTrailingLogs != 0 {
conf.AutopilotConfig.MaxTrailingLogs = uint64(agentConfig.Autopilot.MaxTrailingLogs)
}
if agentConfig.Autopilot.RedundancyZoneTag != "" {
conf.AutopilotConfig.RedundancyZoneTag = agentConfig.Autopilot.RedundancyZoneTag
if agentConfig.Autopilot.EnableRedundancyZones != nil {
conf.AutopilotConfig.EnableRedundancyZones = *agentConfig.Autopilot.EnableRedundancyZones
}
if agentConfig.Autopilot.DisableUpgradeMigration != nil {
conf.AutopilotConfig.DisableUpgradeMigration = *agentConfig.Autopilot.DisableUpgradeMigration
}
if agentConfig.Autopilot.UpgradeVersionTag != "" {
conf.AutopilotConfig.UpgradeVersionTag = agentConfig.Autopilot.UpgradeVersionTag
if agentConfig.Autopilot.EnableCustomUpgrades != nil {
conf.AutopilotConfig.EnableCustomUpgrades = *agentConfig.Autopilot.EnableCustomUpgrades
}
}

View File

@@ -748,6 +748,7 @@ func (c *Command) setupTelemetry(config *Config) (*metrics.InmemSink, error) {
if err != nil {
return inm, err
}
sink.SetTags(telConfig.DataDogTags)
fanout = append(fanout, sink)
}

View File

@@ -83,7 +83,9 @@ server {
retry_interval = "15s"
rejoin_after_leave = true
non_voting_server = true
encrypt = "abc"
redundancy_zone = "foo"
upgrade_version = "0.8.0"
encrypt = "abc"
}
acl {
enabled = true
@@ -166,7 +168,7 @@ autopilot {
disable_upgrade_migration = true
last_contact_threshold = "12705s"
max_trailing_logs = 17849
redundancy_zone_tag = "foo"
enable_redundancy_zones = true
server_stabilization_time = "23057s"
upgrade_version_tag = "bar"
enable_custom_upgrades = true
}

View File

@@ -330,10 +330,17 @@ type ServerConfig struct {
// true, we ignore the leave, and rejoin the cluster on start.
RejoinAfterLeave bool `mapstructure:"rejoin_after_leave"`
// NonVotingServer is whether this server will act as a non-voting member
// of the cluster to help provide read scalability. (Enterprise-only)
// (Enterprise-only) NonVotingServer is whether this server will act as a
// non-voting member of the cluster to help provide read scalability.
NonVotingServer bool `mapstructure:"non_voting_server"`
// (Enterprise-only) RedundancyZone is the redundancy zone to use for this server.
RedundancyZone string `mapstructure:"redundancy_zone"`
// (Enterprise-only) UpgradeVersion is the custom upgrade version to use when
// performing upgrade migrations.
UpgradeVersion string `mapstructure:"upgrade_version"`
// Encryption key to use for the Serf communication
EncryptKey string `mapstructure:"encrypt" json:"-"`
}
@@ -348,6 +355,7 @@ type Telemetry struct {
StatsiteAddr string `mapstructure:"statsite_address"`
StatsdAddr string `mapstructure:"statsd_address"`
DataDogAddr string `mapstructure:"datadog_address"`
DataDogTags []string `mapstructure:"datadog_tags"`
PrometheusMetrics bool `mapstructure:"prometheus_metrics"`
DisableHostname bool `mapstructure:"disable_hostname"`
UseNodeName bool `mapstructure:"use_node_name"`
@@ -1034,6 +1042,12 @@ func (a *ServerConfig) Merge(b *ServerConfig) *ServerConfig {
if b.NonVotingServer {
result.NonVotingServer = true
}
if b.RedundancyZone != "" {
result.RedundancyZone = b.RedundancyZone
}
if b.UpgradeVersion != "" {
result.UpgradeVersion = b.UpgradeVersion
}
if b.EncryptKey != "" {
result.EncryptKey = b.EncryptKey
}
@@ -1157,6 +1171,9 @@ func (a *Telemetry) Merge(b *Telemetry) *Telemetry {
if b.DataDogAddr != "" {
result.DataDogAddr = b.DataDogAddr
}
if b.DataDogTags != nil {
result.DataDogTags = b.DataDogTags
}
if b.PrometheusMetrics {
result.PrometheusMetrics = b.PrometheusMetrics
}

View File

@@ -9,6 +9,7 @@ import (
"time"
multierror "github.com/hashicorp/go-multierror"
"github.com/hashicorp/go-version"
"github.com/hashicorp/hcl"
"github.com/hashicorp/hcl/hcl/ast"
"github.com/hashicorp/nomad/helper"
@@ -536,6 +537,8 @@ func parseServer(result **ServerConfig, list *ast.ObjectList) error {
"encrypt",
"authoritative_region",
"non_voting_server",
"redundancy_zone",
"upgrade_version",
}
if err := helper.CheckHCLKeys(listVal, valid); err != nil {
return err
@@ -559,6 +562,12 @@ func parseServer(result **ServerConfig, list *ast.ObjectList) error {
return err
}
if config.UpgradeVersion != "" {
if _, err := version.NewVersion(config.UpgradeVersion); err != nil {
return fmt.Errorf("error parsing upgrade_version: %v", err)
}
}
*result = &config
return nil
}
@@ -632,6 +641,7 @@ func parseTelemetry(result **Telemetry, list *ast.ObjectList) error {
"publish_allocation_metrics",
"publish_node_metrics",
"datadog_address",
"datadog_tags",
"prometheus_metrics",
"circonus_api_token",
"circonus_api_app",
@@ -865,9 +875,9 @@ func parseAutopilot(result **config.AutopilotConfig, list *ast.ObjectList) error
"server_stabilization_time",
"last_contact_threshold",
"max_trailing_logs",
"redundancy_zone_tag",
"enable_redundancy_zones",
"disable_upgrade_migration",
"upgrade_version_tag",
"enable_custom_upgrades",
}
if err := helper.CheckHCLKeys(listVal, valid); err != nil {

View File

@@ -104,6 +104,8 @@ func TestConfig_Parse(t *testing.T) {
RejoinAfterLeave: true,
RetryMaxAttempts: 3,
NonVotingServer: true,
RedundancyZone: "foo",
UpgradeVersion: "0.8.0",
EncryptKey: "abc",
},
ACL: &ACLConfig{
@@ -193,9 +195,9 @@ func TestConfig_Parse(t *testing.T) {
ServerStabilizationTime: 23057 * time.Second,
LastContactThreshold: 12705 * time.Second,
MaxTrailingLogs: 17849,
RedundancyZoneTag: "foo",
EnableRedundancyZones: &trueValue,
DisableUpgradeMigration: &trueValue,
UpgradeVersionTag: "bar",
EnableCustomUpgrades: &trueValue,
},
},
false,

View File

@@ -56,6 +56,7 @@ func TestConfig_Merge(t *testing.T) {
StatsiteAddr: "127.0.0.1:8125",
StatsdAddr: "127.0.0.1:8125",
DataDogAddr: "127.0.0.1:8125",
DataDogTags: []string{"cat1:tag1", "cat2:tag2"},
PrometheusMetrics: true,
DisableHostname: false,
DisableTaggedMetrics: true,
@@ -107,6 +108,8 @@ func TestConfig_Merge(t *testing.T) {
HeartbeatGrace: 30 * time.Second,
MinHeartbeatTTL: 30 * time.Second,
MaxHeartbeatsPerSecond: 30.0,
RedundancyZone: "foo",
UpgradeVersion: "foo",
},
ACL: &ACLConfig{
Enabled: true,
@@ -165,9 +168,9 @@ func TestConfig_Merge(t *testing.T) {
ServerStabilizationTime: 1 * time.Second,
LastContactThreshold: 1 * time.Second,
MaxTrailingLogs: 1,
RedundancyZoneTag: "1",
EnableRedundancyZones: &falseValue,
DisableUpgradeMigration: &falseValue,
UpgradeVersionTag: "1",
EnableCustomUpgrades: &falseValue,
},
}
@@ -189,6 +192,7 @@ func TestConfig_Merge(t *testing.T) {
StatsiteAddr: "127.0.0.2:8125",
StatsdAddr: "127.0.0.2:8125",
DataDogAddr: "127.0.0.1:8125",
DataDogTags: []string{"cat1:tag1", "cat2:tag2"},
PrometheusMetrics: true,
DisableHostname: true,
PublishNodeMetrics: true,
@@ -260,6 +264,8 @@ func TestConfig_Merge(t *testing.T) {
RetryInterval: "10s",
retryInterval: time.Second * 10,
NonVotingServer: true,
RedundancyZone: "bar",
UpgradeVersion: "bar",
},
ACL: &ACLConfig{
Enabled: true,
@@ -328,9 +334,9 @@ func TestConfig_Merge(t *testing.T) {
ServerStabilizationTime: 2 * time.Second,
LastContactThreshold: 2 * time.Second,
MaxTrailingLogs: 2,
RedundancyZoneTag: "2",
EnableRedundancyZones: &trueValue,
DisableUpgradeMigration: &trueValue,
UpgradeVersionTag: "2",
EnableCustomUpgrades: &trueValue,
},
}

View File

@@ -18,7 +18,6 @@ import (
assetfs "github.com/elazarl/go-bindata-assetfs"
"github.com/hashicorp/nomad/helper/tlsutil"
"github.com/hashicorp/nomad/nomad/structs"
"github.com/mitchellh/mapstructure"
"github.com/rs/cors"
"github.com/ugorji/go/codec"
)
@@ -346,24 +345,6 @@ func decodeBody(req *http.Request, out interface{}) error {
return dec.Decode(&out)
}
// decodeBodyFunc is used to decode a JSON request body invoking
// a given callback function
func decodeBodyFunc(req *http.Request, out interface{}, cb func(interface{}) error) error {
var raw interface{}
dec := json.NewDecoder(req.Body)
if err := dec.Decode(&raw); err != nil {
return err
}
// Invoke the callback prior to decode
if cb != nil {
if err := cb(raw); err != nil {
return err
}
}
return mapstructure.Decode(raw, out)
}
// setIndex is used to set the index response header
func setIndex(resp http.ResponseWriter, index uint64) {
resp.Header().Set("X-Nomad-Index", strconv.FormatUint(index, 10))

View File

@@ -104,19 +104,19 @@ func (s *HTTPServer) OperatorAutopilotConfiguration(resp http.ResponseWriter, re
return nil, nil
}
var reply autopilot.Config
var reply structs.AutopilotConfig
if err := s.agent.RPC("Operator.AutopilotGetConfiguration", &args, &reply); err != nil {
return nil, err
}
out := api.AutopilotConfiguration{
CleanupDeadServers: reply.CleanupDeadServers,
LastContactThreshold: api.NewReadableDuration(reply.LastContactThreshold),
LastContactThreshold: reply.LastContactThreshold,
MaxTrailingLogs: reply.MaxTrailingLogs,
ServerStabilizationTime: api.NewReadableDuration(reply.ServerStabilizationTime),
RedundancyZoneTag: reply.RedundancyZoneTag,
ServerStabilizationTime: reply.ServerStabilizationTime,
EnableRedundancyZones: reply.EnableRedundancyZones,
DisableUpgradeMigration: reply.DisableUpgradeMigration,
UpgradeVersionTag: reply.UpgradeVersionTag,
EnableCustomUpgrades: reply.EnableCustomUpgrades,
CreateIndex: reply.CreateIndex,
ModifyIndex: reply.ModifyIndex,
}
@@ -129,21 +129,20 @@ func (s *HTTPServer) OperatorAutopilotConfiguration(resp http.ResponseWriter, re
s.parseToken(req, &args.AuthToken)
var conf api.AutopilotConfiguration
durations := NewDurationFixer("lastcontactthreshold", "serverstabilizationtime")
if err := decodeBodyFunc(req, &conf, durations.FixupDurations); err != nil {
if err := decodeBody(req, &conf); err != nil {
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(resp, "Error parsing autopilot config: %v", err)
return nil, nil
}
args.Config = autopilot.Config{
args.Config = structs.AutopilotConfig{
CleanupDeadServers: conf.CleanupDeadServers,
LastContactThreshold: conf.LastContactThreshold.Duration(),
LastContactThreshold: conf.LastContactThreshold,
MaxTrailingLogs: conf.MaxTrailingLogs,
ServerStabilizationTime: conf.ServerStabilizationTime.Duration(),
RedundancyZoneTag: conf.RedundancyZoneTag,
ServerStabilizationTime: conf.ServerStabilizationTime,
EnableRedundancyZones: conf.EnableRedundancyZones,
DisableUpgradeMigration: conf.DisableUpgradeMigration,
UpgradeVersionTag: conf.UpgradeVersionTag,
EnableCustomUpgrades: conf.EnableCustomUpgrades,
}
// Check for cas value
@@ -210,7 +209,7 @@ func (s *HTTPServer) OperatorServerHealth(resp http.ResponseWriter, req *http.Re
Version: server.Version,
Leader: server.Leader,
SerfStatus: server.SerfStatus.String(),
LastContact: api.NewReadableDuration(server.LastContact),
LastContact: server.LastContact,
LastTerm: server.LastTerm,
LastIndex: server.LastIndex,
Healthy: server.Healthy,
@@ -221,56 +220,3 @@ func (s *HTTPServer) OperatorServerHealth(resp http.ResponseWriter, req *http.Re
return out, nil
}
type durationFixer map[string]bool
func NewDurationFixer(fields ...string) durationFixer {
d := make(map[string]bool)
for _, field := range fields {
d[field] = true
}
return d
}
// FixupDurations is used to handle parsing any field names in the map to time.Durations
func (d durationFixer) FixupDurations(raw interface{}) error {
rawMap, ok := raw.(map[string]interface{})
if !ok {
return nil
}
for key, val := range rawMap {
switch val.(type) {
case map[string]interface{}:
if err := d.FixupDurations(val); err != nil {
return err
}
case []interface{}:
for _, v := range val.([]interface{}) {
if err := d.FixupDurations(v); err != nil {
return err
}
}
case []map[string]interface{}:
for _, v := range val.([]map[string]interface{}) {
if err := d.FixupDurations(v); err != nil {
return err
}
}
default:
if d[strings.ToLower(key)] {
// Convert a string value into an integer
if vStr, ok := val.(string); ok {
dur, err := time.ParseDuration(vStr)
if err != nil {
return err
}
rawMap[key] = dur
}
}
}
}
return nil
}

View File

@@ -9,7 +9,6 @@ import (
"testing"
"time"
"github.com/hashicorp/consul/agent/consul/autopilot"
"github.com/hashicorp/consul/testutil/retry"
"github.com/hashicorp/nomad/api"
"github.com/hashicorp/nomad/nomad/structs"
@@ -112,7 +111,7 @@ func TestOperator_AutopilotSetConfiguration(t *testing.T) {
t.Fatalf("err: %v", err)
}
if resp.Code != 200 {
t.Fatalf("bad code: %d", resp.Code)
t.Fatalf("bad code: %d, %q", resp.Code, resp.Body.String())
}
args := structs.GenericRequest{
@@ -121,7 +120,7 @@ func TestOperator_AutopilotSetConfiguration(t *testing.T) {
},
}
var reply autopilot.Config
var reply structs.AutopilotConfig
if err := s.RPC("Operator.AutopilotGetConfiguration", &args, &reply); err != nil {
t.Fatalf("err: %v", err)
}
@@ -150,7 +149,7 @@ func TestOperator_AutopilotCASConfiguration(t *testing.T) {
},
}
var reply autopilot.Config
var reply structs.AutopilotConfig
if err := s.RPC("Operator.AutopilotGetConfiguration", &args, &reply); err != nil {
t.Fatalf("err: %v", err)
}
@@ -200,7 +199,6 @@ func TestOperator_AutopilotCASConfiguration(t *testing.T) {
}
func TestOperator_ServerHealth(t *testing.T) {
t.Parallel()
httpTest(t, func(c *Config) {
c.Server.RaftProtocol = 3
}, func(s *TestAgent) {
@@ -259,47 +257,3 @@ func TestOperator_ServerHealth_Unhealthy(t *testing.T) {
})
})
}
func TestDurationFixer(t *testing.T) {
assert := assert.New(t)
obj := map[string]interface{}{
"key1": []map[string]interface{}{
{
"subkey1": "10s",
},
{
"subkey2": "5d",
},
},
"key2": map[string]interface{}{
"subkey3": "30s",
"subkey4": "20m",
},
"key3": "11s",
"key4": "49h",
}
expected := map[string]interface{}{
"key1": []map[string]interface{}{
{
"subkey1": 10 * time.Second,
},
{
"subkey2": "5d",
},
},
"key2": map[string]interface{}{
"subkey3": "30s",
"subkey4": 20 * time.Minute,
},
"key3": "11s",
"key4": 49 * time.Hour,
}
fixer := NewDurationFixer("key4", "subkey1", "subkey4")
if err := fixer.FixupDurations(obj); err != nil {
t.Fatal(err)
}
// Ensure we only processed the intended fieldnames
assert.Equal(obj, expected)
}

View File

@@ -59,7 +59,8 @@ func formatTime(t time.Time) string {
// It's more confusing to display the UNIX epoch or a zero value than nothing
return ""
}
return t.Format("01/02/06 15:04:05 MST")
// Return ISO 8601 time format (GH-3806)
return t.Format("2006-01-02T15:04:05Z07:00")
}
// formatUnixNanoTime is a helper for formatting time for output.

View File

@@ -180,7 +180,7 @@ func (c *NodeStatusCommand) Run(args []string) int {
out[0] = "ID|DC|Name|Class|"
if c.verbose {
out[0] += "Version|"
out[0] += "Address|Version|"
}
out[0] += "Drain|Status"
@@ -196,8 +196,8 @@ func (c *NodeStatusCommand) Run(args []string) int {
node.Name,
node.NodeClass)
if c.verbose {
out[i+1] += fmt.Sprintf("|%s",
node.Version)
out[i+1] += fmt.Sprintf("|%s|%s",
node.Address, node.Version)
}
out[i+1] += fmt.Sprintf("|%v|%s",
node.Drain,

View File

@@ -45,9 +45,9 @@ func (c *OperatorAutopilotGetCommand) Run(args []string) int {
c.Ui.Output(fmt.Sprintf("LastContactThreshold = %v", config.LastContactThreshold.String()))
c.Ui.Output(fmt.Sprintf("MaxTrailingLogs = %v", config.MaxTrailingLogs))
c.Ui.Output(fmt.Sprintf("ServerStabilizationTime = %v", config.ServerStabilizationTime.String()))
c.Ui.Output(fmt.Sprintf("RedundancyZoneTag = %q", config.RedundancyZoneTag))
c.Ui.Output(fmt.Sprintf("EnableRedundancyZones = %v", config.EnableRedundancyZones))
c.Ui.Output(fmt.Sprintf("DisableUpgradeMigration = %v", config.DisableUpgradeMigration))
c.Ui.Output(fmt.Sprintf("UpgradeVersionTag = %q", config.UpgradeVersionTag))
c.Ui.Output(fmt.Sprintf("EnableCustomUpgrades = %v", config.EnableCustomUpgrades))
return 0
}

View File

@@ -3,10 +3,8 @@ package command
import (
"fmt"
"strings"
"time"
"github.com/hashicorp/consul/command/flags"
"github.com/hashicorp/nomad/api"
"github.com/posener/complete"
)
@@ -21,9 +19,9 @@ func (c *OperatorAutopilotSetCommand) AutocompleteFlags() complete.Flags {
"-max-trailing-logs": complete.PredictAnything,
"-last-contact-threshold": complete.PredictAnything,
"-server-stabilization-time": complete.PredictAnything,
"-redundancy-zone-tag": complete.PredictAnything,
"-disable-upgrade-migration": complete.PredictAnything,
"-upgrade-version-tag": complete.PredictAnything,
"-enable-redundancy-zones": complete.PredictNothing,
"-disable-upgrade-migration": complete.PredictNothing,
"-enable-custom-upgrades": complete.PredictNothing,
})
}
@@ -36,9 +34,9 @@ func (c *OperatorAutopilotSetCommand) Run(args []string) int {
var maxTrailingLogs flags.UintValue
var lastContactThreshold flags.DurationValue
var serverStabilizationTime flags.DurationValue
var redundancyZoneTag flags.StringValue
var enableRedundancyZones flags.BoolValue
var disableUpgradeMigration flags.BoolValue
var upgradeVersionTag flags.StringValue
var enableCustomUpgrades flags.BoolValue
f := c.Meta.FlagSet("autopilot", FlagSetClient)
f.Usage = func() { c.Ui.Output(c.Help()) }
@@ -47,9 +45,9 @@ func (c *OperatorAutopilotSetCommand) Run(args []string) int {
f.Var(&maxTrailingLogs, "max-trailing-logs", "")
f.Var(&lastContactThreshold, "last-contact-threshold", "")
f.Var(&serverStabilizationTime, "server-stabilization-time", "")
f.Var(&redundancyZoneTag, "redundancy-zone-tag", "")
f.Var(&enableRedundancyZones, "enable-redundancy-zones", "")
f.Var(&disableUpgradeMigration, "disable-upgrade-migration", "")
f.Var(&upgradeVersionTag, "upgrade-version-tag", "")
f.Var(&enableCustomUpgrades, "enable-custom-upgrades", "")
if err := f.Parse(args); err != nil {
c.Ui.Error(fmt.Sprintf("Failed to parse args: %v", err))
@@ -73,21 +71,15 @@ func (c *OperatorAutopilotSetCommand) Run(args []string) int {
// Update the config values based on the set flags.
cleanupDeadServers.Merge(&conf.CleanupDeadServers)
redundancyZoneTag.Merge(&conf.RedundancyZoneTag)
enableRedundancyZones.Merge(&conf.EnableRedundancyZones)
disableUpgradeMigration.Merge(&conf.DisableUpgradeMigration)
upgradeVersionTag.Merge(&conf.UpgradeVersionTag)
enableCustomUpgrades.Merge(&conf.EnableCustomUpgrades)
trailing := uint(conf.MaxTrailingLogs)
maxTrailingLogs.Merge(&trailing)
conf.MaxTrailingLogs = uint64(trailing)
last := time.Duration(*conf.LastContactThreshold)
lastContactThreshold.Merge(&last)
conf.LastContactThreshold = api.NewReadableDuration(last)
stablization := time.Duration(*conf.ServerStabilizationTime)
serverStabilizationTime.Merge(&stablization)
conf.ServerStabilizationTime = api.NewReadableDuration(stablization)
lastContactThreshold.Merge(&conf.LastContactThreshold)
serverStabilizationTime.Merge(&conf.ServerStabilizationTime)
// Check-and-set the new configuration.
result, err := operator.AutopilotCASConfiguration(conf, nil)

View File

@@ -53,10 +53,10 @@ func TestOperatorAutopilotSetConfigCommmand(t *testing.T) {
if conf.MaxTrailingLogs != 99 {
t.Fatalf("bad: %#v", conf)
}
if conf.LastContactThreshold.Duration() != 123*time.Millisecond {
if conf.LastContactThreshold != 123*time.Millisecond {
t.Fatalf("bad: %#v", conf)
}
if conf.ServerStabilizationTime.Duration() != 123*time.Millisecond {
if conf.ServerStabilizationTime != 123*time.Millisecond {
t.Fatalf("bad: %#v", conf)
}
}

View File

@@ -10,13 +10,45 @@ import (
"github.com/hashicorp/serf/serf"
)
const (
// AutopilotRZTag is the Serf tag to use for the redundancy zone value
// when passing the server metadata to Autopilot.
AutopilotRZTag = "ap_zone"
// AutopilotVersionTag is the Serf tag to use for the custom version value
// when passing the server metadata to Autopilot.
AutopilotVersionTag = "ap_version"
)
// AutopilotDelegate is a Nomad delegate for autopilot operations.
type AutopilotDelegate struct {
server *Server
}
func (d *AutopilotDelegate) AutopilotConfig() *autopilot.Config {
return d.server.getOrCreateAutopilotConfig()
c := d.server.getOrCreateAutopilotConfig()
if c == nil {
return nil
}
conf := &autopilot.Config{
CleanupDeadServers: c.CleanupDeadServers,
LastContactThreshold: c.LastContactThreshold,
MaxTrailingLogs: c.MaxTrailingLogs,
ServerStabilizationTime: c.ServerStabilizationTime,
DisableUpgradeMigration: c.DisableUpgradeMigration,
ModifyIndex: c.ModifyIndex,
CreateIndex: c.CreateIndex,
}
if c.EnableRedundancyZones {
conf.RedundancyZoneTag = AutopilotRZTag
}
if c.EnableCustomUpgrades {
conf.UpgradeVersionTag = AutopilotVersionTag
}
return conf
}
func (d *AutopilotDelegate) FetchStats(ctx context.Context, servers []serf.Member) map[string]*autopilot.ServerStats {

View File

@@ -270,8 +270,11 @@ func TestAutopilot_CleanupStaleRaftServer(t *testing.T) {
testutil.WaitForLeader(t, s1.RPC)
// Add s4 to peers directly
addr := fmt.Sprintf("127.0.0.1:%d", s4.config.SerfConfig.MemberlistConfig.BindPort)
s1.raft.AddVoter(raft.ServerID(s4.config.NodeID), raft.ServerAddress(addr), 0, 0)
addr := fmt.Sprintf("127.0.0.1:%d", s4.config.RPCAddr.Port)
future := s1.raft.AddVoter(raft.ServerID(s4.config.NodeID), raft.ServerAddress(addr), 0, 0)
if err := future.Error(); err != nil {
t.Fatal(err)
}
// Verify we have 4 peers
peers, err := s1.numPeers()

View File

@@ -8,7 +8,6 @@ import (
"runtime"
"time"
"github.com/hashicorp/consul/agent/consul/autopilot"
"github.com/hashicorp/memberlist"
"github.com/hashicorp/nomad/helper/tlsutil"
"github.com/hashicorp/nomad/helper/uuid"
@@ -98,6 +97,13 @@ type Config struct {
// as a voting member of the Raft cluster.
NonVoter bool
// (Enterprise-only) RedundancyZone is the redundancy zone to use for this server.
RedundancyZone string
// (Enterprise-only) UpgradeVersion is the custom upgrade version to use when
// performing upgrade migrations.
UpgradeVersion string
// SerfConfig is the configuration for the serf cluster
SerfConfig *serf.Config
@@ -269,7 +275,7 @@ type Config struct {
// AutopilotConfig is used to apply the initial autopilot config when
// bootstrapping.
AutopilotConfig *autopilot.Config
AutopilotConfig *structs.AutopilotConfig
// ServerHealthInterval is the frequency with which the health of the
// servers in the cluster will be updated.
@@ -339,7 +345,7 @@ func DefaultConfig() *Config {
TLSConfig: &config.TLSConfig{},
ReplicationBackoff: 30 * time.Second,
SentinelGCInterval: 30 * time.Second,
AutopilotConfig: &autopilot.Config{
AutopilotConfig: &structs.AutopilotConfig{
CleanupDeadServers: true,
LastContactThreshold: 200 * time.Millisecond,
MaxTrailingLogs: 250,

View File

@@ -10,7 +10,6 @@ import (
"time"
"github.com/google/go-cmp/cmp"
"github.com/hashicorp/consul/agent/consul/autopilot"
memdb "github.com/hashicorp/go-memdb"
"github.com/hashicorp/nomad/helper"
"github.com/hashicorp/nomad/nomad/mock"
@@ -2319,7 +2318,7 @@ func TestFSM_Autopilot(t *testing.T) {
// Set the autopilot config using a request.
req := structs.AutopilotSetConfigRequest{
Datacenter: "dc1",
Config: autopilot.Config{
Config: structs.AutopilotConfig{
CleanupDeadServers: true,
LastContactThreshold: 10 * time.Second,
MaxTrailingLogs: 300,

View File

@@ -13,7 +13,6 @@ import (
"golang.org/x/time/rate"
"github.com/armon/go-metrics"
"github.com/hashicorp/consul/agent/consul/autopilot"
memdb "github.com/hashicorp/go-memdb"
"github.com/hashicorp/go-version"
"github.com/hashicorp/nomad/helper/uuid"
@@ -1174,7 +1173,7 @@ func diffACLTokens(state *state.StateStore, minIndex uint64, remoteList []*struc
}
// getOrCreateAutopilotConfig is used to get the autopilot config, initializing it if necessary
func (s *Server) getOrCreateAutopilotConfig() *autopilot.Config {
func (s *Server) getOrCreateAutopilotConfig() *structs.AutopilotConfig {
state := s.fsm.State()
_, config, err := state.AutopilotConfig()
if err != nil {

View File

@@ -192,7 +192,7 @@ REMOVE:
}
// AutopilotGetConfiguration is used to retrieve the current Autopilot configuration.
func (op *Operator) AutopilotGetConfiguration(args *structs.GenericRequest, reply *autopilot.Config) error {
func (op *Operator) AutopilotGetConfiguration(args *structs.GenericRequest, reply *structs.AutopilotConfig) error {
if done, err := op.srv.forward("Operator.AutopilotGetConfiguration", args, args, reply); done {
return err
}

View File

@@ -1105,6 +1105,12 @@ func (s *Server) setupSerf(conf *serf.Config, ch chan serf.Event, path string) (
if s.config.NonVoter {
conf.Tags["nonvoter"] = "1"
}
if s.config.RedundancyZone != "" {
conf.Tags[AutopilotRZTag] = s.config.RedundancyZone
}
if s.config.UpgradeVersion != "" {
conf.Tags[AutopilotVersionTag] = s.config.UpgradeVersion
}
conf.MemberlistConfig.LogOutput = s.config.LogOutput
conf.LogOutput = s.config.LogOutput
conf.EventCh = ch

View File

@@ -3,8 +3,8 @@ package state
import (
"fmt"
"github.com/hashicorp/consul/agent/consul/autopilot"
"github.com/hashicorp/go-memdb"
"github.com/hashicorp/nomad/nomad/structs"
)
// autopilotConfigTableSchema returns a new table schema used for storing
@@ -26,7 +26,7 @@ func autopilotConfigTableSchema() *memdb.TableSchema {
}
// AutopilotConfig is used to get the current Autopilot configuration.
func (s *StateStore) AutopilotConfig() (uint64, *autopilot.Config, error) {
func (s *StateStore) AutopilotConfig() (uint64, *structs.AutopilotConfig, error) {
tx := s.db.Txn(false)
defer tx.Abort()
@@ -36,7 +36,7 @@ func (s *StateStore) AutopilotConfig() (uint64, *autopilot.Config, error) {
return 0, nil, fmt.Errorf("failed autopilot config lookup: %s", err)
}
config, ok := c.(*autopilot.Config)
config, ok := c.(*structs.AutopilotConfig)
if !ok {
return 0, nil, nil
}
@@ -45,7 +45,7 @@ func (s *StateStore) AutopilotConfig() (uint64, *autopilot.Config, error) {
}
// AutopilotSetConfig is used to set the current Autopilot configuration.
func (s *StateStore) AutopilotSetConfig(idx uint64, config *autopilot.Config) error {
func (s *StateStore) AutopilotSetConfig(idx uint64, config *structs.AutopilotConfig) error {
tx := s.db.Txn(true)
defer tx.Abort()
@@ -58,7 +58,7 @@ func (s *StateStore) AutopilotSetConfig(idx uint64, config *autopilot.Config) er
// AutopilotCASConfig is used to try updating the Autopilot configuration with a
// given Raft index. If the CAS index specified is not equal to the last observed index
// for the config, then the call is a no-op.
func (s *StateStore) AutopilotCASConfig(idx, cidx uint64, config *autopilot.Config) (bool, error) {
func (s *StateStore) AutopilotCASConfig(idx, cidx uint64, config *structs.AutopilotConfig) (bool, error) {
tx := s.db.Txn(true)
defer tx.Abort()
@@ -71,7 +71,7 @@ func (s *StateStore) AutopilotCASConfig(idx, cidx uint64, config *autopilot.Conf
// If the existing index does not match the provided CAS
// index arg, then we shouldn't update anything and can safely
// return early here.
e, ok := existing.(*autopilot.Config)
e, ok := existing.(*structs.AutopilotConfig)
if !ok || e.ModifyIndex != cidx {
return false, nil
}
@@ -82,7 +82,7 @@ func (s *StateStore) AutopilotCASConfig(idx, cidx uint64, config *autopilot.Conf
return true, nil
}
func (s *StateStore) autopilotSetConfigTxn(idx uint64, tx *memdb.Txn, config *autopilot.Config) error {
func (s *StateStore) autopilotSetConfigTxn(idx uint64, tx *memdb.Txn, config *structs.AutopilotConfig) error {
// Check for an existing config
existing, err := tx.First("autopilot-config", "id")
if err != nil {
@@ -91,7 +91,7 @@ func (s *StateStore) autopilotSetConfigTxn(idx uint64, tx *memdb.Txn, config *au
// Set the indexes.
if existing != nil {
config.CreateIndex = existing.(*autopilot.Config).CreateIndex
config.CreateIndex = existing.(*structs.AutopilotConfig).CreateIndex
} else {
config.CreateIndex = idx
}
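The check-and-set semantics described in the `AutopilotCASConfig` comment above can be illustrated with a self-contained sketch; the `config` and `store` types here are minimal stand-ins for `structs.AutopilotConfig` and the real state store, not the actual implementation:

```go
package main

import "fmt"

// config is a minimal stand-in carrying only the fields needed to
// illustrate check-and-set (CAS) semantics.
type config struct {
	CleanupDeadServers bool
	CreateIndex        uint64
	ModifyIndex        uint64
}

// store holds the latest config the way the state store does.
type store struct{ current *config }

// casConfig mirrors AutopilotCASConfig: the update only applies when the
// caller's expected index (cidx) matches the stored ModifyIndex.
func (s *store) casConfig(idx, cidx uint64, c *config) bool {
	if s.current == nil || s.current.ModifyIndex != cidx {
		return false // stale read: caller must re-fetch and retry
	}
	c.CreateIndex = s.current.CreateIndex
	c.ModifyIndex = idx
	s.current = c
	return true
}

func main() {
	s := &store{current: &config{CleanupDeadServers: true, CreateIndex: 1, ModifyIndex: 1}}
	fmt.Println(s.casConfig(2, 0, &config{})) // wrong index: no-op
	fmt.Println(s.casConfig(2, 1, &config{})) // matching index: applied
}
```

This is the same pattern the `TestStateStore_AutopilotCAS` test below exercises against the real store.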

View File

@@ -5,20 +5,20 @@ import (
"testing"
"time"
"github.com/hashicorp/consul/agent/consul/autopilot"
"github.com/hashicorp/nomad/nomad/structs"
)
func TestStateStore_Autopilot(t *testing.T) {
s := testStateStore(t)
expected := &autopilot.Config{
expected := &structs.AutopilotConfig{
CleanupDeadServers: true,
LastContactThreshold: 5 * time.Second,
MaxTrailingLogs: 500,
ServerStabilizationTime: 100 * time.Second,
RedundancyZoneTag: "az",
EnableRedundancyZones: true,
DisableUpgradeMigration: true,
UpgradeVersionTag: "build",
EnableCustomUpgrades: true,
}
if err := s.AutopilotSetConfig(0, expected); err != nil {
@@ -40,7 +40,7 @@ func TestStateStore_Autopilot(t *testing.T) {
func TestStateStore_AutopilotCAS(t *testing.T) {
s := testStateStore(t)
expected := &autopilot.Config{
expected := &structs.AutopilotConfig{
CleanupDeadServers: true,
}
@@ -52,7 +52,7 @@ func TestStateStore_AutopilotCAS(t *testing.T) {
}
// Do a CAS with an index lower than the entry
ok, err := s.AutopilotCASConfig(2, 0, &autopilot.Config{
ok, err := s.AutopilotCASConfig(2, 0, &structs.AutopilotConfig{
CleanupDeadServers: false,
})
if ok || err != nil {
@@ -73,7 +73,7 @@ func TestStateStore_AutopilotCAS(t *testing.T) {
}
// Do another CAS, this time with the correct index
ok, err = s.AutopilotCASConfig(2, 1, &autopilot.Config{
ok, err = s.AutopilotCASConfig(2, 1, &structs.AutopilotConfig{
CleanupDeadServers: false,
})
if !ok || err != nil {

View File

@@ -24,25 +24,23 @@ type AutopilotConfig struct {
// be behind before being considered unhealthy.
MaxTrailingLogs int `mapstructure:"max_trailing_logs"`
// (Enterprise-only) RedundancyZoneTag is the node tag to use for separating
// servers into zones for redundancy. If left blank, this feature will be disabled.
RedundancyZoneTag string `mapstructure:"redundancy_zone_tag"`
// (Enterprise-only) EnableRedundancyZones specifies whether to enable redundancy zones.
EnableRedundancyZones *bool `mapstructure:"enable_redundancy_zones"`
// (Enterprise-only) DisableUpgradeMigration will disable Autopilot's upgrade migration
// strategy of waiting until enough newer-versioned servers have been added to the
// cluster before promoting them to voters.
DisableUpgradeMigration *bool `mapstructure:"disable_upgrade_migration"`
// (Enterprise-only) UpgradeVersionTag is the node tag to use for version info when
// performing upgrade migrations. If left blank, the Nomad version will be used.
UpgradeVersionTag string `mapstructure:"upgrade_version_tag"`
// (Enterprise-only) EnableCustomUpgrades specifies whether to enable using custom
// upgrade versions when performing migrations.
EnableCustomUpgrades *bool `mapstructure:"enable_custom_upgrades"`
}
// DefaultAutopilotConfig() returns the canonical defaults for the Nomad
// `autopilot` configuration.
func DefaultAutopilotConfig() *AutopilotConfig {
return &AutopilotConfig{
CleanupDeadServers: helper.BoolToPtr(true),
LastContactThreshold: 200 * time.Millisecond,
MaxTrailingLogs: 250,
ServerStabilizationTime: 10 * time.Second,
@@ -64,14 +62,14 @@ func (a *AutopilotConfig) Merge(b *AutopilotConfig) *AutopilotConfig {
if b.MaxTrailingLogs != 0 {
result.MaxTrailingLogs = b.MaxTrailingLogs
}
if b.RedundancyZoneTag != "" {
result.RedundancyZoneTag = b.RedundancyZoneTag
if b.EnableRedundancyZones != nil {
result.EnableRedundancyZones = b.EnableRedundancyZones
}
if b.DisableUpgradeMigration != nil {
result.DisableUpgradeMigration = helper.BoolToPtr(*b.DisableUpgradeMigration)
}
if b.UpgradeVersionTag != "" {
result.UpgradeVersionTag = b.UpgradeVersionTag
if b.EnableCustomUpgrades != nil {
result.EnableCustomUpgrades = b.EnableCustomUpgrades
}
return result
@@ -90,9 +88,15 @@ func (a *AutopilotConfig) Copy() *AutopilotConfig {
if a.CleanupDeadServers != nil {
nc.CleanupDeadServers = helper.BoolToPtr(*a.CleanupDeadServers)
}
if a.EnableRedundancyZones != nil {
nc.EnableRedundancyZones = helper.BoolToPtr(*a.EnableRedundancyZones)
}
if a.DisableUpgradeMigration != nil {
nc.DisableUpgradeMigration = helper.BoolToPtr(*a.DisableUpgradeMigration)
}
if a.EnableCustomUpgrades != nil {
nc.EnableCustomUpgrades = helper.BoolToPtr(*a.EnableCustomUpgrades)
}
return nc
}
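The switch from string fields to `*bool` fields in the `Merge` above relies on a nil pointer meaning "not set", so an explicit `false` from an override still wins while an omitted flag keeps the base value. A minimal sketch of that rule, with `boolToPtr` standing in for `helper.BoolToPtr`:

```go
package main

import "fmt"

// boolToPtr is a stand-in for helper.BoolToPtr.
func boolToPtr(b bool) *bool { return &b }

// mergeFlag applies the *bool merge rule used above: nil means "not set",
// so only explicit values (true or false) override the base.
func mergeFlag(base, override *bool) *bool {
	if override != nil {
		return boolToPtr(*override) // copy so the result doesn't alias the input
	}
	return base
}

func main() {
	base := boolToPtr(true)
	fmt.Println(*mergeFlag(base, nil))              // nil override keeps base
	fmt.Println(*mergeFlag(base, boolToPtr(false))) // explicit false wins
}
```

A plain `bool` could not express this three-way distinction, which is why the config fields are pointers.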

View File

@@ -14,9 +14,9 @@ func TestAutopilotConfig_Merge(t *testing.T) {
ServerStabilizationTime: 1 * time.Second,
LastContactThreshold: 1 * time.Second,
MaxTrailingLogs: 1,
RedundancyZoneTag: "1",
EnableRedundancyZones: &trueValue,
DisableUpgradeMigration: &falseValue,
UpgradeVersionTag: "1",
EnableCustomUpgrades: &trueValue,
}
c2 := &AutopilotConfig{
@@ -24,9 +24,9 @@ func TestAutopilotConfig_Merge(t *testing.T) {
ServerStabilizationTime: 2 * time.Second,
LastContactThreshold: 2 * time.Second,
MaxTrailingLogs: 2,
RedundancyZoneTag: "2",
EnableRedundancyZones: nil,
DisableUpgradeMigration: nil,
UpgradeVersionTag: "2",
EnableCustomUpgrades: nil,
}
e := &AutopilotConfig{
@@ -34,9 +34,9 @@ func TestAutopilotConfig_Merge(t *testing.T) {
ServerStabilizationTime: 2 * time.Second,
LastContactThreshold: 2 * time.Second,
MaxTrailingLogs: 2,
RedundancyZoneTag: "2",
EnableRedundancyZones: &trueValue,
DisableUpgradeMigration: &falseValue,
UpgradeVersionTag: "2",
EnableCustomUpgrades: &trueValue,
}
result := c1.Merge(c2)

View File

@@ -1,7 +1,8 @@
package structs
import (
"github.com/hashicorp/consul/agent/consul/autopilot"
"time"
"github.com/hashicorp/raft"
)
@@ -69,7 +70,7 @@ type AutopilotSetConfigRequest struct {
Datacenter string
// Config is the new Autopilot configuration to use.
Config autopilot.Config
Config AutopilotConfig
// CAS controls whether to use check-and-set semantics for this request.
CAS bool
@@ -82,3 +83,39 @@ type AutopilotSetConfigRequest struct {
func (op *AutopilotSetConfigRequest) RequestDatacenter() string {
return op.Datacenter
}
// AutopilotConfig is the internal config for the Autopilot mechanism.
type AutopilotConfig struct {
// CleanupDeadServers controls whether to remove dead servers when a new
// server is added to the Raft peers.
CleanupDeadServers bool
// ServerStabilizationTime is the minimum amount of time a server must be
// in a stable, healthy state before it can be added to the cluster. Only
// applicable with Raft protocol version 3 or higher.
ServerStabilizationTime time.Duration
// LastContactThreshold is the limit on the amount of time a server can go
// without leader contact before being considered unhealthy.
LastContactThreshold time.Duration
// MaxTrailingLogs is the amount of entries in the Raft Log that a server can
// be behind before being considered unhealthy.
MaxTrailingLogs uint64
// (Enterprise-only) EnableRedundancyZones specifies whether to enable redundancy zones.
EnableRedundancyZones bool
// (Enterprise-only) DisableUpgradeMigration will disable Autopilot's upgrade migration
// strategy of waiting until enough newer-versioned servers have been added to the
// cluster before promoting them to voters.
DisableUpgradeMigration bool
// (Enterprise-only) EnableCustomUpgrades specifies whether to enable using custom
// upgrade versions when performing migrations.
EnableCustomUpgrades bool
// CreateIndex/ModifyIndex store the create/modify indexes of this configuration.
CreateIndex uint64
ModifyIndex uint64
}

View File

@@ -1173,7 +1173,11 @@ func (n *Node) TerminalStatus() bool {
// Stub returns a summarized version of the node
func (n *Node) Stub() *NodeListStub {
addr, _, _ := net.SplitHostPort(n.HTTPAddr)
return &NodeListStub{
Address: addr,
ID: n.ID,
Datacenter: n.Datacenter,
Name: n.Name,
@@ -1190,6 +1194,7 @@ func (n *Node) Stub() *NodeListStub {
// NodeListStub is used to return a subset of node information
// for the node list
type NodeListStub struct {
Address string
ID string
Datacenter string
Name string

View File

@@ -46,7 +46,6 @@ type serverParts struct {
MinorVersion int
Build version.Version
RaftVersion int
NonVoter bool
Addr net.Addr
RPCAddr net.Addr
Status serf.MemberStatus
@@ -71,7 +70,6 @@ func isNomadServer(m serf.Member) (bool, *serverParts) {
region := m.Tags["region"]
datacenter := m.Tags["dc"]
_, bootstrap := m.Tags["bootstrap"]
_, nonVoter := m.Tags["nonvoter"]
expect := 0
expectStr, ok := m.Tags["expect"]
@@ -140,7 +138,6 @@ func isNomadServer(m serf.Member) (bool, *serverParts) {
MinorVersion: minorVersion,
Build: *buildVersion,
RaftVersion: raftVsn,
NonVoter: nonVoter,
Status: m.Status,
}
return true, parts

View File

@@ -24,7 +24,6 @@ func TestIsNomadServer(t *testing.T) {
"port": "10000",
"vsn": "1",
"raft_vsn": "2",
"nonvoter": "1",
"build": "0.7.0+ent",
},
}
@@ -51,9 +50,6 @@ func TestIsNomadServer(t *testing.T) {
if parts.RPCAddr.String() != "1.1.1.1:10000" {
t.Fatalf("bad: %v", parts.RPCAddr.String())
}
if !parts.NonVoter {
t.Fatalf("bad: %v", parts.NonVoter)
}
if seg := parts.Build.Segments(); len(seg) != 3 {
t.Fatalf("bad: %v", parts.Build)
} else if seg[0] != 0 && seg[1] != 7 && seg[2] != 0 {

View File

@@ -2,6 +2,10 @@
set -o errexit
#enable ipv6
echo '{"ipv6":true, "fixed-cidr-v6":"2001:db8:1::/64"}' | sudo tee /etc/docker/daemon.json
sudo service docker restart
apt-get update
apt-get install -y liblxc1 lxc-dev lxc shellcheck
apt-get install -y qemu

View File

@@ -65,4 +65,17 @@ export default ApplicationAdapter.extend({
const url = this.buildURL('job', name, job, 'findRecord');
return this.ajax(url, 'GET', { data: assign(this.buildQuery() || {}, namespaceQuery) });
},
forcePeriodic(job) {
if (job.get('periodic')) {
const [name, namespace] = JSON.parse(job.get('id'));
let url = `${this.buildURL('job', name, job, 'findRecord')}/periodic/force`;
if (namespace) {
url += `?namespace=${namespace}`;
}
return this.ajax(url, 'POST');
}
},
});

View File

@@ -6,6 +6,8 @@ export default DistributionBar.extend({
allocationContainer: null,
'data-test-allocation-status-bar': true,
data: computed(
'allocationContainer.{queuedAllocs,completeAllocs,failedAllocs,runningAllocs,startingAllocs}',
function() {

View File

@@ -0,0 +1,27 @@
import { computed } from '@ember/object';
import DistributionBar from './distribution-bar';
export default DistributionBar.extend({
layoutName: 'components/distribution-bar',
job: null,
'data-test-children-status-bar': true,
data: computed('job.{pendingChildren,runningChildren,deadChildren}', function() {
if (!this.get('job')) {
return [];
}
const children = this.get('job').getProperties(
'pendingChildren',
'runningChildren',
'deadChildren'
);
return [
{ label: 'Pending', value: children.pendingChildren, className: 'queued' },
{ label: 'Running', value: children.runningChildren, className: 'running' },
{ label: 'Dead', value: children.deadChildren, className: 'complete' },
];
}),
});

View File

@@ -0,0 +1,29 @@
import Component from '@ember/component';
import { computed } from '@ember/object';
import { inject as service } from '@ember/service';
export default Component.extend({
system: service(),
job: null,
// Provide a value that is bound to a query param
sortProperty: null,
sortDescending: null,
// Provide actions that require routing
onNamespaceChange() {},
gotoTaskGroup() {},
gotoJob() {},
breadcrumbs: computed('job.{name,id}', function() {
const job = this.get('job');
return [
{ label: 'Jobs', args: ['jobs'] },
{
label: job.get('name'),
args: ['jobs.job', job],
},
];
}),
});

View File

@@ -0,0 +1,3 @@
import AbstractJobPage from './abstract';
export default AbstractJobPage.extend();

View File

@@ -0,0 +1,16 @@
import { computed } from '@ember/object';
import { alias } from '@ember/object/computed';
import PeriodicChildJobPage from './periodic-child';
export default PeriodicChildJobPage.extend({
payload: alias('job.decodedPayload'),
payloadJSON: computed('payload', function() {
let json;
try {
json = JSON.parse(this.get('payload'));
} catch (e) {
// Swallow error and fall back to plain text rendering
}
return json;
}),
});
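The `payloadJSON` property above swallows parse errors so the template can fall back to rendering the payload as plain text. The same fallback pattern, sketched in Go with a hypothetical `renderPayload` helper:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// renderPayload mirrors the fallback above: attempt to parse the decoded
// payload as JSON; on failure, report false so the caller renders plain text.
func renderPayload(payload string) (map[string]interface{}, bool) {
	var out map[string]interface{}
	if err := json.Unmarshal([]byte(payload), &out); err != nil {
		return nil, false // swallow the error and fall back to plain text
	}
	return out, true
}

func main() {
	_, ok := renderPayload(`{"retries": 3}`)
	fmt.Println(ok)
	_, ok = renderPayload("not json")
	fmt.Println(ok)
}
```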

View File

@@ -0,0 +1,3 @@
import AbstractJobPage from './abstract';
export default AbstractJobPage.extend();

View File

@@ -0,0 +1,31 @@
import Component from '@ember/component';
import { computed } from '@ember/object';
import { alias } from '@ember/object/computed';
import Sortable from 'nomad-ui/mixins/sortable';
export default Component.extend(Sortable, {
job: null,
classNames: ['boxed-section'],
// Provide a value that is bound to a query param
sortProperty: null,
sortDescending: null,
currentPage: null,
// Provide an action with access to the router
gotoJob() {},
pageSize: 10,
taskGroups: computed('job.taskGroups.[]', function() {
return this.get('job.taskGroups') || [];
}),
children: computed('job.children.[]', function() {
return this.get('job.children') || [];
}),
listToSort: alias('children'),
sortedChildren: alias('listSorted'),
});

View File

@@ -0,0 +1,12 @@
import Component from '@ember/component';
import { computed } from '@ember/object';
export default Component.extend({
job: null,
classNames: ['boxed-section'],
sortedEvaluations: computed('job.evaluations.@each.modifyIndex', function() {
return (this.get('job.evaluations') || []).sortBy('modifyIndex').reverse();
}),
});

View File

@@ -0,0 +1,6 @@
import Component from '@ember/component';
export default Component.extend({
job: null,
tagName: '',
});

View File

@@ -0,0 +1,6 @@
import Component from '@ember/component';
export default Component.extend({
job: null,
tagName: '',
});

View File

@@ -0,0 +1,7 @@
import Component from '@ember/component';
export default Component.extend({
job: null,
classNames: ['boxed-section'],
});

View File

@@ -0,0 +1,24 @@
import Component from '@ember/component';
import { computed } from '@ember/object';
import { alias } from '@ember/object/computed';
import Sortable from 'nomad-ui/mixins/sortable';
export default Component.extend(Sortable, {
job: null,
classNames: ['boxed-section'],
// Provide a value that is bound to a query param
sortProperty: null,
sortDescending: null,
// Provide an action with access to the router
gotoTaskGroup() {},
taskGroups: computed('job.taskGroups.[]', function() {
return this.get('job.taskGroups') || [];
}),
listToSort: alias('taskGroups'),
sortedTaskGroups: alias('listSorted'),
});

View File

@@ -0,0 +1,21 @@
import AbstractJobPage from './abstract';
import { computed } from '@ember/object';
export default AbstractJobPage.extend({
breadcrumbs: computed('job.{name,id}', 'job.parent.{name,id}', function() {
const job = this.get('job');
const parent = this.get('job.parent');
return [
{ label: 'Jobs', args: ['jobs'] },
{
label: parent.get('name'),
args: ['jobs.job', parent],
},
{
label: job.get('trimmedName'),
args: ['jobs.job', job],
},
];
}),
});

View File

@@ -0,0 +1,15 @@
import AbstractJobPage from './abstract';
import { inject as service } from '@ember/service';
export default AbstractJobPage.extend({
store: service(),
actions: {
forceLaunch() {
this.get('job')
.forcePeriodic()
.then(() => {
this.get('store').findAll('job');
});
},
},
});

Some files were not shown because too many files have changed in this diff.