This commit is contained in:
Alex Dadgar
2016-10-03 13:50:56 -07:00
parent e1bc05eadf
commit 7ba02acd0f
54 changed files with 11394 additions and 1 deletion

vendor/github.com/burntsushi/toml/COMPATIBLE generated vendored Normal file

@@ -0,0 +1,3 @@
Compatible with TOML version
[v0.2.0](https://github.com/mojombo/toml/blob/master/versions/toml-v0.2.0.md)

vendor/github.com/burntsushi/toml/COPYING generated vendored Normal file

@@ -0,0 +1,14 @@
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
Version 2, December 2004
Copyright (C) 2004 Sam Hocevar <sam@hocevar.net>
Everyone is permitted to copy and distribute verbatim or modified
copies of this license document, and changing it is allowed as long
as the name is changed.
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. You just DO WHAT THE FUCK YOU WANT TO.

vendor/github.com/burntsushi/toml/Makefile generated vendored Normal file

@@ -0,0 +1,19 @@
install:
	go install ./...

test: install
	go test -v
	toml-test toml-test-decoder
	toml-test -encoder toml-test-encoder

fmt:
	gofmt -w *.go */*.go
	colcheck *.go */*.go

tags:
	find ./ -name '*.go' -print0 | xargs -0 gotags > TAGS

push:
	git push origin master
	git push github master

vendor/github.com/burntsushi/toml/README.md generated vendored Normal file

@@ -0,0 +1,220 @@
## TOML parser and encoder for Go with reflection
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml`
packages. This package also supports the `encoding.TextUnmarshaler` and
`encoding.TextMarshaler` interfaces so that you can define custom data
representations. (There is an example of this below.)
Spec: https://github.com/mojombo/toml
Compatible with TOML version
[v0.2.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.2.0.md)
Documentation: http://godoc.org/github.com/BurntSushi/toml
Installation:
```bash
go get github.com/BurntSushi/toml
```
Try the toml validator:
```bash
go get github.com/BurntSushi/toml/cmd/tomlv
tomlv some-toml-file.toml
```
[![Build status](https://api.travis-ci.org/BurntSushi/toml.png)](https://travis-ci.org/BurntSushi/toml)
### Testing
This package passes all tests in
[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
and the encoder.
### Examples
This package works similarly to how the Go standard library handles `XML`
and `JSON`. Namely, data is loaded into Go values via reflection.
For the simplest example, consider some TOML file as just a list of keys
and values:
```toml
Age = 25
Cats = [ "Cauchy", "Plato" ]
Pi = 3.14
Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```
Which could be defined in Go as:
```go
type Config struct {
	Age        int
	Cats       []string
	Pi         float64
	Perfection []int
	DOB        time.Time // requires `import time`
}
```
And then decoded with:
```go
var conf Config
if _, err := toml.Decode(tomlData, &conf); err != nil {
	// handle error
}
```
You can also use struct tags if your struct field name doesn't map to a TOML
key value directly:
```toml
some_key_NAME = "wat"
```
```go
type TOML struct {
	ObscureKey string `toml:"some_key_NAME"`
}
```
### Using the `encoding.TextUnmarshaler` interface
Here's an example that automatically parses duration strings into
`time.Duration` values:
```toml
[[song]]
name = "Thunder Road"
duration = "4m49s"

[[song]]
name = "Stairway to Heaven"
duration = "8m03s"
```
Which can be decoded with:
```go
type song struct {
	Name     string
	Duration duration
}

type songs struct {
	Song []song
}

var favorites songs
if _, err := toml.Decode(blob, &favorites); err != nil {
	log.Fatal(err)
}

for _, s := range favorites.Song {
	fmt.Printf("%s (%s)\n", s.Name, s.Duration)
}
```
And you'll also need a `duration` type that satisfies the
`encoding.TextUnmarshaler` interface:
```go
type duration struct {
	time.Duration
}

func (d *duration) UnmarshalText(text []byte) error {
	var err error
	d.Duration, err = time.ParseDuration(string(text))
	return err
}
```
### More complex usage
Here's an example of how to load the example from the official spec page:
```toml
# This is a TOML document. Boom.

title = "TOML Example"

[owner]
name = "Tom Preston-Werner"
organization = "GitHub"
bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
dob = 1979-05-27T07:32:00Z # First class dates? Why not?

[database]
server = "192.168.1.1"
ports = [ 8001, 8001, 8002 ]
connection_max = 5000
enabled = true

[servers]

  # You can indent as you please. Tabs or spaces. TOML don't care.
  [servers.alpha]
  ip = "10.0.0.1"
  dc = "eqdc10"

  [servers.beta]
  ip = "10.0.0.2"
  dc = "eqdc10"

[clients]
data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it

# Line breaks are OK when inside arrays
hosts = [
  "alpha",
  "omega"
]
```
And the corresponding Go types are:
```go
type tomlConfig struct {
	Title   string
	Owner   ownerInfo
	DB      database `toml:"database"`
	Servers map[string]server
	Clients clients
}

type ownerInfo struct {
	Name string
	Org  string `toml:"organization"`
	Bio  string
	DOB  time.Time
}

type database struct {
	Server  string
	Ports   []int
	ConnMax int `toml:"connection_max"`
	Enabled bool
}

type server struct {
	IP string
	DC string
}

type clients struct {
	Data  [][]interface{}
	Hosts []string
}
```
Note that a case-insensitive match will be tried if an exact match can't be
found.
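That fallback rule can be sketched with the standard library: an exact key match wins, otherwise the first case-insensitive match is used. (`matchField` and the field list here are illustrative, not part of the package's API.)

```go
package main

import (
	"fmt"
	"strings"
)

// matchField sketches the decoder's key-matching rule: return an exact
// match immediately; otherwise remember the first case-insensitive match.
func matchField(fields []string, key string) string {
	match := ""
	for _, f := range fields {
		if f == key {
			return f // exact match always wins
		}
		if match == "" && strings.EqualFold(f, key) {
			match = f // fall back to the first case-insensitive hit
		}
	}
	return match
}

func main() {
	fields := []string{"Title", "title_page"}
	fmt.Println(matchField(fields, "title")) // Title
}
```

So a TOML key `title` will populate a Go field `Title` unless a field literally named `title` also exists, in which case the exact match is preferred.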
A working example of the above can be found in `_examples/example.{go,toml}`.

vendor/github.com/burntsushi/toml/decode.go generated vendored Normal file

@@ -0,0 +1,509 @@
package toml
import (
"fmt"
"io"
"io/ioutil"
"math"
"reflect"
"strings"
"time"
)
func e(format string, args ...interface{}) error {
return fmt.Errorf("toml: "+format, args...)
}
// Unmarshaler is the interface implemented by objects that can unmarshal a
// TOML description of themselves.
type Unmarshaler interface {
UnmarshalTOML(interface{}) error
}
// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
func Unmarshal(p []byte, v interface{}) error {
_, err := Decode(string(p), v)
return err
}
// Primitive is a TOML value that hasn't been decoded into a Go value.
// When using the various `Decode*` functions, the type `Primitive` may
// be given to any value, and its decoding will be delayed.
//
// A `Primitive` value can be decoded using the `PrimitiveDecode` function.
//
// The underlying representation of a `Primitive` value is subject to change.
// Do not rely on it.
//
// N.B. Primitive values are still parsed, so using them will only avoid
// the overhead of reflection. They can be useful when you don't know the
// exact type of TOML data until run time.
type Primitive struct {
undecoded interface{}
context Key
}
// DEPRECATED!
//
// Use MetaData.PrimitiveDecode instead.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
md := MetaData{decoded: make(map[string]bool)}
return md.unify(primValue.undecoded, rvalue(v))
}
// PrimitiveDecode is just like the other `Decode*` functions, except it
// decodes a TOML value that has already been parsed. Valid primitive values
// can *only* be obtained from values filled by the decoder functions,
// including this method. (i.e., `v` may contain more `Primitive`
// values.)
//
// Meta data for primitive values is included in the meta data returned by
// the `Decode*` functions with one exception: keys returned by the Undecoded
// method will only reflect keys that were decoded. Namely, any keys hidden
// behind a Primitive will be considered undecoded. Executing this method will
// update the undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
md.context = primValue.context
defer func() { md.context = nil }()
return md.unify(primValue.undecoded, rvalue(v))
}
// Decode will decode the contents of `data` in TOML format into a pointer
// `v`.
//
// TOML hashes correspond to Go structs or maps. (Dealer's choice. They can be
// used interchangeably.)
//
// TOML arrays of tables correspond to either a slice of structs or a slice
// of maps.
//
// TOML datetimes correspond to Go `time.Time` values.
//
// All other TOML types (float, string, int, bool and array) correspond
// to the obvious Go types.
//
// An exception to the above rules is if a type implements the
// encoding.TextUnmarshaler interface. In this case, any primitive TOML value
// (floats, strings, integers, booleans and datetimes) will be converted to
// a byte string and given to the value's UnmarshalText method. See the
// Unmarshaler example for a demonstration with time duration strings.
//
// Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go
// struct. The special `toml` struct tag may be used to map TOML keys to
// struct fields that don't match the key name exactly. (See the example.)
// A case insensitive match to struct names will be tried if an exact match
// can't be found.
//
// The mapping between TOML values and Go values is loose. That is, there
// may exist TOML values that cannot be placed into your representation, and
// there may be parts of your representation that do not correspond to
// TOML values. This loose mapping can be made stricter by using the IsDefined
// and/or Undecoded methods on the MetaData returned.
//
// This decoder will not handle cyclic types. If a cyclic type is passed,
// `Decode` will not terminate.
func Decode(data string, v interface{}) (MetaData, error) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr {
return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
}
if rv.IsNil() {
return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
}
p, err := parse(data)
if err != nil {
return MetaData{}, err
}
md := MetaData{
p.mapping, p.types, p.ordered,
make(map[string]bool, len(p.ordered)), nil,
}
return md, md.unify(p.mapping, indirect(rv))
}
// DecodeFile is just like Decode, except it will automatically read the
// contents of the file at `fpath` and decode it for you.
func DecodeFile(fpath string, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadFile(fpath)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// DecodeReader is just like Decode, except it will consume all bytes
// from the reader and decode it for you.
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadAll(r)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// unify performs a sort of type unification based on the structure of `rv`,
// which is the client representation.
//
// Any type mismatch produces an error. Finding a type that we don't know
// how to handle produces an unsupported type error.
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
// Special case. Look for a `Primitive` value.
if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
// Save the undecoded data and the key context into the primitive
// value.
context := make(Key, len(md.context))
copy(context, md.context)
rv.Set(reflect.ValueOf(Primitive{
undecoded: data,
context: context,
}))
return nil
}
// Special case. Unmarshaler Interface support.
if rv.CanAddr() {
if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
return v.UnmarshalTOML(data)
}
}
// Special case. Handle time.Time values specifically.
// TODO: Remove this code when we decide to drop support for Go 1.1.
// This isn't necessary in Go 1.2 because time.Time satisfies the encoding
// interfaces.
if rv.Type().AssignableTo(rvalue(time.Time{}).Type()) {
return md.unifyDatetime(data, rv)
}
// Special case. Look for a value satisfying the TextUnmarshaler interface.
if v, ok := rv.Interface().(TextUnmarshaler); ok {
return md.unifyText(data, v)
}
// BUG(burntsushi)
// The behavior here is incorrect whenever a Go type satisfies the
// encoding.TextUnmarshaler interface but also corresponds to a TOML
// hash or array. In particular, the unmarshaler should only be applied
// to primitive TOML values. But at this point, it will be applied to
// all kinds of values and produce an incorrect error whenever those values
// are hashes or arrays (including arrays of tables).
k := rv.Kind()
// laziness
if k >= reflect.Int && k <= reflect.Uint64 {
return md.unifyInt(data, rv)
}
switch k {
case reflect.Ptr:
elem := reflect.New(rv.Type().Elem())
err := md.unify(data, reflect.Indirect(elem))
if err != nil {
return err
}
rv.Set(elem)
return nil
case reflect.Struct:
return md.unifyStruct(data, rv)
case reflect.Map:
return md.unifyMap(data, rv)
case reflect.Array:
return md.unifyArray(data, rv)
case reflect.Slice:
return md.unifySlice(data, rv)
case reflect.String:
return md.unifyString(data, rv)
case reflect.Bool:
return md.unifyBool(data, rv)
case reflect.Interface:
// we only support empty interfaces.
if rv.NumMethod() > 0 {
return e("unsupported type %s", rv.Type())
}
return md.unifyAnything(data, rv)
case reflect.Float32:
fallthrough
case reflect.Float64:
return md.unifyFloat64(data, rv)
}
return e("unsupported type %s", rv.Kind())
}
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if mapping == nil {
return nil
}
return e("type mismatch for %s: expected table but found %T",
rv.Type().String(), mapping)
}
for key, datum := range tmap {
var f *field
fields := cachedTypeFields(rv.Type())
for i := range fields {
ff := &fields[i]
if ff.name == key {
f = ff
break
}
if f == nil && strings.EqualFold(ff.name, key) {
f = ff
}
}
if f != nil {
subv := rv
for _, i := range f.index {
subv = indirect(subv.Field(i))
}
if isUnifiable(subv) {
md.decoded[md.context.add(key).String()] = true
md.context = append(md.context, key)
if err := md.unify(datum, subv); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
} else if f.name != "" {
// Bad user! No soup for you!
return e("cannot write unexported field %s.%s",
rv.Type().String(), f.name)
}
}
}
return nil
}
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if mapping == nil {
return nil
}
return badtype("map", mapping)
}
if rv.IsNil() {
rv.Set(reflect.MakeMap(rv.Type()))
}
for k, v := range tmap {
md.decoded[md.context.add(k).String()] = true
md.context = append(md.context, k)
rvkey := indirect(reflect.New(rv.Type().Key()))
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
if err := md.unify(v, rvval); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
rvkey.SetString(k)
rv.SetMapIndex(rvkey, rvval)
}
return nil
}
func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
sliceLen := datav.Len()
if sliceLen != rv.Len() {
return e("expected array length %d; got TOML array of length %d",
rv.Len(), sliceLen)
}
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
n := datav.Len()
if rv.IsNil() || rv.Cap() < n {
rv.Set(reflect.MakeSlice(rv.Type(), n, n))
}
rv.SetLen(n)
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
sliceLen := data.Len()
for i := 0; i < sliceLen; i++ {
v := data.Index(i).Interface()
sliceval := indirect(rv.Index(i))
if err := md.unify(v, sliceval); err != nil {
return err
}
}
return nil
}
func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
if _, ok := data.(time.Time); ok {
rv.Set(reflect.ValueOf(data))
return nil
}
return badtype("time.Time", data)
}
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
if s, ok := data.(string); ok {
rv.SetString(s)
return nil
}
return badtype("string", data)
}
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
if num, ok := data.(float64); ok {
switch rv.Kind() {
case reflect.Float32:
fallthrough
case reflect.Float64:
rv.SetFloat(num)
default:
panic("bug")
}
return nil
}
return badtype("float", data)
}
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
if num, ok := data.(int64); ok {
if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
switch rv.Kind() {
case reflect.Int, reflect.Int64:
// No bounds checking necessary.
case reflect.Int8:
if num < math.MinInt8 || num > math.MaxInt8 {
return e("value %d is out of range for int8", num)
}
case reflect.Int16:
if num < math.MinInt16 || num > math.MaxInt16 {
return e("value %d is out of range for int16", num)
}
case reflect.Int32:
if num < math.MinInt32 || num > math.MaxInt32 {
return e("value %d is out of range for int32", num)
}
}
rv.SetInt(num)
} else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
unum := uint64(num)
switch rv.Kind() {
case reflect.Uint, reflect.Uint64:
// No bounds checking necessary.
case reflect.Uint8:
if num < 0 || unum > math.MaxUint8 {
return e("value %d is out of range for uint8", num)
}
case reflect.Uint16:
if num < 0 || unum > math.MaxUint16 {
return e("value %d is out of range for uint16", num)
}
case reflect.Uint32:
if num < 0 || unum > math.MaxUint32 {
return e("value %d is out of range for uint32", num)
}
}
rv.SetUint(unum)
} else {
panic("unreachable")
}
return nil
}
return badtype("integer", data)
}
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
if b, ok := data.(bool); ok {
rv.SetBool(b)
return nil
}
return badtype("boolean", data)
}
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
rv.Set(reflect.ValueOf(data))
return nil
}
func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
var s string
switch sdata := data.(type) {
case TextMarshaler:
text, err := sdata.MarshalText()
if err != nil {
return err
}
s = string(text)
case fmt.Stringer:
s = sdata.String()
case string:
s = sdata
case bool:
s = fmt.Sprintf("%v", sdata)
case int64:
s = fmt.Sprintf("%d", sdata)
case float64:
s = fmt.Sprintf("%f", sdata)
default:
return badtype("primitive (string-like)", data)
}
if err := v.UnmarshalText([]byte(s)); err != nil {
return err
}
return nil
}
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v interface{}) reflect.Value {
return indirect(reflect.ValueOf(v))
}
// indirect returns the value pointed to by a pointer.
// Pointers are followed until the value is not a pointer.
// New values are allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of
// interest to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr {
if v.CanSet() {
pv := v.Addr()
if _, ok := pv.Interface().(TextUnmarshaler); ok {
return pv
}
}
return v
}
if v.IsNil() {
v.Set(reflect.New(v.Type().Elem()))
}
return indirect(reflect.Indirect(v))
}
func isUnifiable(rv reflect.Value) bool {
if rv.CanSet() {
return true
}
if _, ok := rv.Interface().(TextUnmarshaler); ok {
return true
}
return false
}
func badtype(expected string, data interface{}) error {
return e("cannot load TOML value of type %T into a Go %s", data, expected)
}

vendor/github.com/burntsushi/toml/decode_meta.go generated vendored Normal file

@@ -0,0 +1,121 @@
package toml
import "strings"
// MetaData allows access to meta information about TOML data that may not
// be inferrable via reflection. In particular, whether a key has been defined
// and the TOML type of a key.
type MetaData struct {
mapping map[string]interface{}
types map[string]tomlType
keys []Key
decoded map[string]bool
context Key // Used only during decoding.
}
// IsDefined returns true if the key given exists in the TOML data. The key
// should be specified hierarchically; e.g.,
//
// // access the TOML key 'a.b.c'
// IsDefined("a", "b", "c")
//
// IsDefined will return false if an empty key is given. Keys are case
// sensitive.
func (md *MetaData) IsDefined(key ...string) bool {
if len(key) == 0 {
return false
}
var hash map[string]interface{}
var ok bool
var hashOrVal interface{} = md.mapping
for _, k := range key {
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
return false
}
if hashOrVal, ok = hash[k]; !ok {
return false
}
}
return true
}
// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that
// does not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
fullkey := strings.Join(key, ".")
if typ, ok := md.types[fullkey]; ok {
return typ.typeString()
}
return ""
}
// Key is the type of any TOML key, including key groups. Use (MetaData).Keys
// to get values of this type.
type Key []string
func (k Key) String() string {
return strings.Join(k, ".")
}
func (k Key) maybeQuotedAll() string {
var ss []string
for i := range k {
ss = append(ss, k.maybeQuoted(i))
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
quote := false
for _, c := range k[i] {
if !isBareKeyChar(c) {
quote = true
break
}
}
if quote {
return "\"" + strings.Replace(k[i], "\"", "\\\"", -1) + "\""
}
return k[i]
}
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}
// Keys returns a slice of every key in the TOML data, including key groups.
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific.
//
// The list will have the same order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
return md.keys
}
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a Primitive value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
undecoded := make([]Key, 0, len(md.keys))
for _, key := range md.keys {
if !md.decoded[key.String()] {
undecoded = append(undecoded, key)
}
}
return undecoded
}
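The bare-key quoting rule used by `maybeQuoted` above can be sketched standalone. `isBareKeyChar` here mirrors the lexer's bare-key alphabet (ASCII letters, digits, `_`, `-`); it is an assumption of this sketch, not the package's exported API:

```go
package main

import (
	"fmt"
	"strings"
)

// isBareKeyChar reports whether r may appear in an unquoted TOML key.
func isBareKeyChar(r rune) bool {
	return (r >= 'A' && r <= 'Z') ||
		(r >= 'a' && r <= 'z') ||
		(r >= '0' && r <= '9') ||
		r == '_' || r == '-'
}

// maybeQuoted wraps a key in double quotes (escaping embedded quotes)
// only when it contains a character outside the bare-key alphabet.
func maybeQuoted(s string) string {
	for _, c := range s {
		if !isBareKeyChar(c) {
			return `"` + strings.Replace(s, `"`, `\"`, -1) + `"`
		}
	}
	return s
}

func main() {
	fmt.Println(maybeQuoted("servers")) // servers
	fmt.Println(maybeQuoted("my key"))  // "my key"
}
```

This is why `Key.String()` can join pieces with `.` safely only for display, while `maybeQuotedAll` is used when emitting TOML table headers.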

vendor/github.com/burntsushi/toml/doc.go generated vendored Normal file

@@ -0,0 +1,27 @@
/*
Package toml provides facilities for decoding and encoding TOML configuration
files via reflection. There is also support for delaying decoding with
the Primitive type, and querying the set of keys in a TOML document with the
MetaData type.
The specification implemented: https://github.com/mojombo/toml
The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
whether a file is a valid TOML document. It can also be used to print the
type of each key in a TOML document.
Testing
There are two important types of tests used for this package. The first is
contained inside '*_test.go' files and uses the standard Go unit testing
framework. These tests are primarily devoted to holistically testing the
decoder and encoder.
The second type of testing is used to verify the implementation's adherence
to the TOML specification. These tests have been factored into their own
project: https://github.com/BurntSushi/toml-test
The tests live in a separate project so that they can be used by any
implementation of TOML; the test suite itself is language agnostic.
*/
package toml

vendor/github.com/burntsushi/toml/encode.go generated vendored Normal file

@@ -0,0 +1,568 @@
package toml
import (
"bufio"
"errors"
"fmt"
"io"
"reflect"
"sort"
"strconv"
"strings"
"time"
)
type tomlEncodeError struct{ error }
var (
errArrayMixedElementTypes = errors.New(
"toml: cannot encode array with mixed element types")
errArrayNilElement = errors.New(
"toml: cannot encode array with nil element")
errNonString = errors.New(
"toml: cannot encode a map with non-string key type")
errAnonNonStruct = errors.New(
"toml: cannot encode an anonymous field that is not a struct")
errArrayNoTable = errors.New(
"toml: TOML array element cannot contain a table")
errNoKey = errors.New(
"toml: top-level values must be Go maps or structs")
errAnything = errors.New("") // used in testing
)
var quotedReplacer = strings.NewReplacer(
"\t", "\\t",
"\n", "\\n",
"\r", "\\r",
"\"", "\\\"",
"\\", "\\\\",
)
// Encoder controls the encoding of Go values to a TOML document to some
// io.Writer.
//
// The indentation level can be controlled with the Indent field.
type Encoder struct {
// A single indentation level. By default it is two spaces.
Indent string
// hasWritten is whether we have written any output to w yet.
hasWritten bool
w *bufio.Writer
}
// NewEncoder returns a TOML encoder that encodes Go values to the io.Writer
// given. By default, a single indentation level is 2 spaces.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{
w: bufio.NewWriter(w),
Indent: " ",
}
}
// Encode writes a TOML representation of the Go value to the underlying
// io.Writer. If the value given cannot be encoded to a valid TOML document,
// then an error is returned.
//
// The mapping between Go values and TOML values should be precisely the same
// as for the Decode* functions. Similarly, the TextMarshaler interface is
// supported by encoding the resulting bytes as strings. (If you want to write
// arbitrary binary data then you will need to use something like base64 since
// TOML does not have any binary types.)
//
// When encoding TOML hashes (i.e., Go maps or structs), keys without any
// sub-hashes are encoded first.
//
// If a Go map is encoded, then its keys are sorted alphabetically for
// deterministic output. More control over this behavior may be provided if
// there is demand for it.
//
// Encoding Go values without a corresponding TOML representation---like map
// types with non-string keys---will cause an error to be returned. Similarly
// for mixed arrays/slices, arrays/slices with nil elements, embedded
// non-struct types and nested slices containing maps or structs.
// (e.g., [][]map[string]string is not allowed but []map[string]string is OK
// and so is []map[string][]string.)
func (enc *Encoder) Encode(v interface{}) error {
rv := eindirect(reflect.ValueOf(v))
if err := enc.safeEncode(Key([]string{}), rv); err != nil {
return err
}
return enc.w.Flush()
}
func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
defer func() {
if r := recover(); r != nil {
if terr, ok := r.(tomlEncodeError); ok {
err = terr.error
return
}
panic(r)
}
}()
enc.encode(key, rv)
return nil
}
func (enc *Encoder) encode(key Key, rv reflect.Value) {
// Special case. Time needs to be in ISO8601 format.
// Special case. If we can marshal the type to text, then we use that.
// Basically, this prevents the encoder from handling these types as
// generic structs (or whatever the underlying type of a TextMarshaler is).
switch rv.Interface().(type) {
case time.Time, TextMarshaler:
enc.keyEqElement(key, rv)
return
}
k := rv.Kind()
switch k {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64,
reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
enc.keyEqElement(key, rv)
case reflect.Array, reflect.Slice:
if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
enc.eArrayOfTables(key, rv)
} else {
enc.keyEqElement(key, rv)
}
case reflect.Interface:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Map:
if rv.IsNil() {
return
}
enc.eTable(key, rv)
case reflect.Ptr:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Struct:
enc.eTable(key, rv)
default:
panic(e("unsupported type for key '%s': %s", key, k))
}
}
// eElement encodes any value that can be an array element (primitives and
// arrays).
func (enc *Encoder) eElement(rv reflect.Value) {
switch v := rv.Interface().(type) {
case time.Time:
// Special case time.Time as a primitive. Has to come before
// TextMarshaler below because time.Time implements
// encoding.TextMarshaler, but we need to always use UTC.
enc.wf(v.UTC().Format("2006-01-02T15:04:05Z"))
return
case TextMarshaler:
// Special case. Use text marshaler if it's available for this value.
if s, err := v.MarshalText(); err != nil {
encPanic(err)
} else {
enc.writeQuoted(string(s))
}
return
}
switch rv.Kind() {
case reflect.Bool:
enc.wf(strconv.FormatBool(rv.Bool()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64:
enc.wf(strconv.FormatInt(rv.Int(), 10))
case reflect.Uint, reflect.Uint8, reflect.Uint16,
reflect.Uint32, reflect.Uint64:
enc.wf(strconv.FormatUint(rv.Uint(), 10))
case reflect.Float32:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 32)))
case reflect.Float64:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 64)))
case reflect.Array, reflect.Slice:
enc.eArrayOrSliceElement(rv)
case reflect.Interface:
enc.eElement(rv.Elem())
case reflect.String:
enc.writeQuoted(rv.String())
default:
panic(e("unexpected primitive type: %s", rv.Kind()))
}
}
// By the TOML spec, all floats must have a decimal point with at least one
// digit on either side.
func floatAddDecimal(fstr string) string {
if !strings.Contains(fstr, ".") {
return fstr + ".0"
}
return fstr
}
func (enc *Encoder) writeQuoted(s string) {
enc.wf("\"%s\"", quotedReplacer.Replace(s))
}
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
length := rv.Len()
enc.wf("[")
for i := 0; i < length; i++ {
elem := rv.Index(i)
enc.eElement(elem)
if i != length-1 {
enc.wf(", ")
}
}
enc.wf("]")
}
func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
for i := 0; i < rv.Len(); i++ {
trv := rv.Index(i)
if isNil(trv) {
continue
}
panicIfInvalidKey(key)
enc.newline()
enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
enc.eMapOrStruct(key, trv)
}
}
func (enc *Encoder) eTable(key Key, rv reflect.Value) {
panicIfInvalidKey(key)
if len(key) == 1 {
// Output an extra new line between top-level tables.
// (The newline isn't written if nothing else has been written though.)
enc.newline()
}
if len(key) > 0 {
enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
}
enc.eMapOrStruct(key, rv)
}
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value) {
switch rv := eindirect(rv); rv.Kind() {
case reflect.Map:
enc.eMap(key, rv)
case reflect.Struct:
enc.eStruct(key, rv)
default:
panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
}
}
func (enc *Encoder) eMap(key Key, rv reflect.Value) {
rt := rv.Type()
if rt.Key().Kind() != reflect.String {
encPanic(errNonString)
}
// Sort keys so that we have deterministic output. And write keys directly
// underneath this key first, before writing sub-structs or sub-maps.
var mapKeysDirect, mapKeysSub []string
for _, mapKey := range rv.MapKeys() {
k := mapKey.String()
if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
mapKeysSub = append(mapKeysSub, k)
} else {
mapKeysDirect = append(mapKeysDirect, k)
}
}
var writeMapKeys = func(mapKeys []string) {
sort.Strings(mapKeys)
for _, mapKey := range mapKeys {
mrv := rv.MapIndex(reflect.ValueOf(mapKey))
if isNil(mrv) {
// Don't write anything for nil fields.
continue
}
enc.encode(key.add(mapKey), mrv)
}
}
writeMapKeys(mapKeysDirect)
writeMapKeys(mapKeysSub)
}
func (enc *Encoder) eStruct(key Key, rv reflect.Value) {
// Write keys for fields directly under this key first, because if we write
// a field that creates a new table, then all keys under it will be in that
// table (not the one we're writing here).
rt := rv.Type()
var fieldsDirect, fieldsSub [][]int
var addFields func(rt reflect.Type, rv reflect.Value, start []int)
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
for i := 0; i < rt.NumField(); i++ {
f := rt.Field(i)
// skip unexported fields
if f.PkgPath != "" && !f.Anonymous {
continue
}
frv := rv.Field(i)
if f.Anonymous {
t := f.Type
switch t.Kind() {
case reflect.Struct:
// Treat anonymous struct fields with
// tag names as though they are not
// anonymous, like encoding/json does.
if getOptions(f.Tag).name == "" {
addFields(t, frv, f.Index)
continue
}
case reflect.Ptr:
if t.Elem().Kind() == reflect.Struct &&
getOptions(f.Tag).name == "" {
if !frv.IsNil() {
addFields(t.Elem(), frv.Elem(), f.Index)
}
continue
}
// Fall through to the normal field encoding logic below
// for non-struct anonymous fields.
}
}
if typeIsHash(tomlTypeOfGo(frv)) {
fieldsSub = append(fieldsSub, append(start, f.Index...))
} else {
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
}
}
}
addFields(rt, rv, nil)
var writeFields = func(fields [][]int) {
for _, fieldIndex := range fields {
sft := rt.FieldByIndex(fieldIndex)
sf := rv.FieldByIndex(fieldIndex)
if isNil(sf) {
// Don't write anything for nil fields.
continue
}
opts := getOptions(sft.Tag)
if opts.skip {
continue
}
keyName := sft.Name
if opts.name != "" {
keyName = opts.name
}
if opts.omitempty && isEmpty(sf) {
continue
}
if opts.omitzero && isZero(sf) {
continue
}
enc.encode(key.add(keyName), sf)
}
}
writeFields(fieldsDirect)
writeFields(fieldsSub)
}
// tomlTypeOfGo returns the TOML type of a Go value. The returned type may be
// nil, which means no concrete TOML type could be found (e.g., for a nil
// value). It is used to determine whether the types of array elements are
// mixed (which is forbidden); a nil value is likewise illegal as an array
// element.
func tomlTypeOfGo(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() {
return nil
}
switch rv.Kind() {
case reflect.Bool:
return tomlBool
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64:
return tomlInteger
case reflect.Float32, reflect.Float64:
return tomlFloat
case reflect.Array, reflect.Slice:
if typeEqual(tomlHash, tomlArrayType(rv)) {
return tomlArrayHash
}
return tomlArray
case reflect.Ptr, reflect.Interface:
return tomlTypeOfGo(rv.Elem())
case reflect.String:
return tomlString
case reflect.Map:
return tomlHash
case reflect.Struct:
switch rv.Interface().(type) {
case time.Time:
return tomlDatetime
case TextMarshaler:
return tomlString
default:
return tomlHash
}
default:
panic("unexpected reflect.Kind: " + rv.Kind().String())
}
}
// tomlArrayType returns the element type of a TOML array. The type returned
// may be nil if it cannot be determined (e.g., a nil slice or a zero-length
// slice). This function may also panic if it finds a type that cannot be
// expressed in TOML (such as nil elements, heterogeneous arrays or directly
// nested arrays of tables).
func tomlArrayType(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
return nil
}
firstType := tomlTypeOfGo(rv.Index(0))
if firstType == nil {
encPanic(errArrayNilElement)
}
rvlen := rv.Len()
for i := 1; i < rvlen; i++ {
elem := rv.Index(i)
switch elemType := tomlTypeOfGo(elem); {
case elemType == nil:
encPanic(errArrayNilElement)
case !typeEqual(firstType, elemType):
encPanic(errArrayMixedElementTypes)
}
}
// If we have a nested array, then we must make sure that the nested
// array contains ONLY primitives.
// This checks arbitrarily nested arrays.
if typeEqual(firstType, tomlArray) || typeEqual(firstType, tomlArrayHash) {
nest := tomlArrayType(eindirect(rv.Index(0)))
if typeEqual(nest, tomlHash) || typeEqual(nest, tomlArrayHash) {
encPanic(errArrayNoTable)
}
}
return firstType
}
type tagOptions struct {
skip bool // "-"
name string
omitempty bool
omitzero bool
}
func getOptions(tag reflect.StructTag) tagOptions {
t := tag.Get("toml")
if t == "-" {
return tagOptions{skip: true}
}
var opts tagOptions
parts := strings.Split(t, ",")
opts.name = parts[0]
for _, s := range parts[1:] {
switch s {
case "omitempty":
opts.omitempty = true
case "omitzero":
opts.omitzero = true
}
}
return opts
}
func isZero(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return rv.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return rv.Uint() == 0
case reflect.Float32, reflect.Float64:
return rv.Float() == 0.0
}
return false
}
func isEmpty(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
return rv.Len() == 0
case reflect.Bool:
return !rv.Bool()
}
return false
}
func (enc *Encoder) newline() {
if enc.hasWritten {
enc.wf("\n")
}
}
func (enc *Encoder) keyEqElement(key Key, val reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
panicIfInvalidKey(key)
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
enc.eElement(val)
enc.newline()
}
func (enc *Encoder) wf(format string, v ...interface{}) {
if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
encPanic(err)
}
enc.hasWritten = true
}
func (enc *Encoder) indentStr(key Key) string {
return strings.Repeat(enc.Indent, len(key)-1)
}
func encPanic(err error) {
panic(tomlEncodeError{err})
}
func eindirect(v reflect.Value) reflect.Value {
switch v.Kind() {
case reflect.Ptr, reflect.Interface:
return eindirect(v.Elem())
default:
return v
}
}
func isNil(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return rv.IsNil()
default:
return false
}
}
func panicIfInvalidKey(key Key) {
for _, k := range key {
if len(k) == 0 {
encPanic(e("Key '%s' is not a valid table name. Key names "+
"cannot be empty.", key.maybeQuotedAll()))
}
}
}
func isValidKeyName(s string) bool {
return len(s) != 0
}

19
vendor/github.com/burntsushi/toml/encoding_types.go generated vendored Normal file

@@ -0,0 +1,19 @@
// +build go1.2
package toml
// In order to support Go 1.1, we define our own TextMarshaler and
// TextUnmarshaler types. For Go 1.2+, we just alias them with the
// standard library interfaces.
import (
"encoding"
)
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler encoding.TextMarshaler
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler encoding.TextUnmarshaler


@@ -0,0 +1,18 @@
// +build !go1.2
package toml
// These interfaces were introduced in Go 1.2, so we add them manually when
// compiling for Go 1.1.
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler interface {
MarshalText() (text []byte, err error)
}
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler interface {
UnmarshalText(text []byte) error
}

858
vendor/github.com/burntsushi/toml/lex.go generated vendored Normal file

@@ -0,0 +1,858 @@
package toml
import (
"fmt"
"strings"
"unicode"
"unicode/utf8"
)
type itemType int
const (
itemError itemType = iota
itemNIL // used in the parser to indicate no type
itemEOF
itemText
itemString
itemRawString
itemMultilineString
itemRawMultilineString
itemBool
itemInteger
itemFloat
itemDatetime
itemArray // the start of an array
itemArrayEnd
itemTableStart
itemTableEnd
itemArrayTableStart
itemArrayTableEnd
itemKeyStart
itemCommentStart
)
const (
eof = 0
tableStart = '['
tableEnd = ']'
arrayTableStart = '['
arrayTableEnd = ']'
tableSep = '.'
keySep = '='
arrayStart = '['
arrayEnd = ']'
arrayValTerm = ','
commentStart = '#'
stringStart = '"'
stringEnd = '"'
rawStringStart = '\''
rawStringEnd = '\''
)
type stateFn func(lx *lexer) stateFn
type lexer struct {
input string
start int
pos int
width int
line int
state stateFn
items chan item
// A stack of state functions used to maintain context.
// The idea is to reuse parts of the state machine in various places.
// For example, values can appear at the top level or within arbitrarily
// nested arrays. The last state on the stack is used after a value has
// been lexed. Similarly for comments.
stack []stateFn
}
type item struct {
typ itemType
val string
line int
}
func (lx *lexer) nextItem() item {
for {
select {
case item := <-lx.items:
return item
default:
lx.state = lx.state(lx)
}
}
}
func lex(input string) *lexer {
lx := &lexer{
input: input + "\n",
state: lexTop,
line: 1,
items: make(chan item, 10),
stack: make([]stateFn, 0, 10),
}
return lx
}
func (lx *lexer) push(state stateFn) {
lx.stack = append(lx.stack, state)
}
func (lx *lexer) pop() stateFn {
if len(lx.stack) == 0 {
return lx.errorf("BUG in lexer: no states to pop.")
}
last := lx.stack[len(lx.stack)-1]
lx.stack = lx.stack[0 : len(lx.stack)-1]
return last
}
func (lx *lexer) current() string {
return lx.input[lx.start:lx.pos]
}
func (lx *lexer) emit(typ itemType) {
lx.items <- item{typ, lx.current(), lx.line}
lx.start = lx.pos
}
func (lx *lexer) emitTrim(typ itemType) {
lx.items <- item{typ, strings.TrimSpace(lx.current()), lx.line}
lx.start = lx.pos
}
func (lx *lexer) next() (r rune) {
if lx.pos >= len(lx.input) {
lx.width = 0
return eof
}
if lx.input[lx.pos] == '\n' {
lx.line++
}
r, lx.width = utf8.DecodeRuneInString(lx.input[lx.pos:])
lx.pos += lx.width
return r
}
// ignore skips over the pending input before this point.
func (lx *lexer) ignore() {
lx.start = lx.pos
}
// backup steps back one rune. Can be called only once per call of next.
func (lx *lexer) backup() {
lx.pos -= lx.width
if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
lx.line--
}
}
// accept consumes the next rune if it's equal to `valid`.
func (lx *lexer) accept(valid rune) bool {
if lx.next() == valid {
return true
}
lx.backup()
return false
}
// peek returns but does not consume the next rune in the input.
func (lx *lexer) peek() rune {
r := lx.next()
lx.backup()
return r
}
// skip ignores all input that matches the given predicate.
func (lx *lexer) skip(pred func(rune) bool) {
for {
r := lx.next()
if pred(r) {
continue
}
lx.backup()
lx.ignore()
return
}
}
// errorf stops all lexing by emitting an error and returning `nil`.
// Note that any value that is a character is escaped if it's a special
// character (new lines, tabs, etc.).
func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
lx.items <- item{
itemError,
fmt.Sprintf(format, values...),
lx.line,
}
return nil
}
// lexTop consumes elements at the top level of TOML data.
func lexTop(lx *lexer) stateFn {
r := lx.next()
if isWhitespace(r) || isNL(r) {
return lexSkip(lx, lexTop)
}
switch r {
case commentStart:
lx.push(lexTop)
return lexCommentStart
case tableStart:
return lexTableStart
case eof:
if lx.pos > lx.start {
return lx.errorf("Unexpected EOF.")
}
lx.emit(itemEOF)
return nil
}
// At this point, the only valid item can be a key, so we back up
// and let the key lexer do the rest.
lx.backup()
lx.push(lexTopEnd)
return lexKeyStart
}
// lexTopEnd is entered whenever a top-level item has been consumed. (A value
// or a table.) It must see only whitespace, and will turn back to lexTop
// upon a new line. If it sees EOF, it will quit the lexer successfully.
func lexTopEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case r == commentStart:
// a comment will read to a new line for us.
lx.push(lexTop)
return lexCommentStart
case isWhitespace(r):
return lexTopEnd
case isNL(r):
lx.ignore()
return lexTop
case r == eof:
lx.ignore()
return lexTop
}
return lx.errorf("Expected a top-level item to end with a new line, "+
"comment or EOF, but got %q instead.", r)
}
// lexTableStart lexes the beginning of a table. Namely, it makes sure that
// it starts with a character other than '.' and ']'.
// It assumes that '[' has already been consumed.
// It also handles the case that this is an item in an array of tables.
// e.g., '[[name]]'.
func lexTableStart(lx *lexer) stateFn {
if lx.peek() == arrayTableStart {
lx.next()
lx.emit(itemArrayTableStart)
lx.push(lexArrayTableEnd)
} else {
lx.emit(itemTableStart)
lx.push(lexTableEnd)
}
return lexTableNameStart
}
func lexTableEnd(lx *lexer) stateFn {
lx.emit(itemTableEnd)
return lexTopEnd
}
func lexArrayTableEnd(lx *lexer) stateFn {
if r := lx.next(); r != arrayTableEnd {
return lx.errorf("Expected end of table array name delimiter %q, "+
"but got %q instead.", arrayTableEnd, r)
}
lx.emit(itemArrayTableEnd)
return lexTopEnd
}
func lexTableNameStart(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.peek(); {
case r == tableEnd || r == eof:
return lx.errorf("Unexpected end of table name. (Table names cannot " +
"be empty.)")
case r == tableSep:
return lx.errorf("Unexpected table separator. (Table names cannot " +
"be empty.)")
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.push(lexTableNameEnd)
return lexValue // reuse string lexing
default:
return lexBareTableName
}
}
// lexBareTableName lexes the name of a table. It assumes that at least one
// valid character for the table has already been read.
func lexBareTableName(lx *lexer) stateFn {
r := lx.next()
if isBareKeyChar(r) {
return lexBareTableName
}
lx.backup()
lx.emit(itemText)
return lexTableNameEnd
}
// lexTableNameEnd reads the end of a piece of a table name, optionally
// consuming whitespace.
func lexTableNameEnd(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.next(); {
case isWhitespace(r):
return lexTableNameEnd
case r == tableSep:
lx.ignore()
return lexTableNameStart
case r == tableEnd:
return lx.pop()
default:
return lx.errorf("Expected '.' or ']' to end table name, but got %q "+
"instead.", r)
}
}
// lexKeyStart consumes a key name up until the first non-whitespace character.
// lexKeyStart will ignore whitespace.
func lexKeyStart(lx *lexer) stateFn {
r := lx.peek()
switch {
case r == keySep:
return lx.errorf("Unexpected key separator %q.", keySep)
case isWhitespace(r) || isNL(r):
lx.next()
return lexSkip(lx, lexKeyStart)
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.emit(itemKeyStart)
lx.push(lexKeyEnd)
return lexValue // reuse string lexing
default:
lx.ignore()
lx.emit(itemKeyStart)
return lexBareKey
}
}
// lexBareKey consumes the text of a bare key. Assumes that the first character
// (which is not whitespace) has not yet been consumed.
func lexBareKey(lx *lexer) stateFn {
switch r := lx.next(); {
case isBareKeyChar(r):
return lexBareKey
case isWhitespace(r):
lx.backup()
lx.emit(itemText)
return lexKeyEnd
case r == keySep:
lx.backup()
lx.emit(itemText)
return lexKeyEnd
default:
return lx.errorf("Bare keys cannot contain %q.", r)
}
}
// lexKeyEnd consumes the end of a key and trims whitespace (up to the key
// separator).
func lexKeyEnd(lx *lexer) stateFn {
switch r := lx.next(); {
case r == keySep:
return lexSkip(lx, lexValue)
case isWhitespace(r):
return lexSkip(lx, lexKeyEnd)
default:
return lx.errorf("Expected key separator %q, but got %q instead.",
keySep, r)
}
}
// lexValue starts the consumption of a value anywhere a value is expected.
// lexValue will ignore whitespace.
// After a value is lexed, the last state on the stack is popped and returned.
func lexValue(lx *lexer) stateFn {
// We allow whitespace to precede a value, but NOT new lines.
// In array syntax, the array states are responsible for ignoring new
// lines.
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexValue)
case isDigit(r):
lx.backup() // avoid an extra state and use the same as above
return lexNumberOrDateStart
}
switch r {
case arrayStart:
lx.ignore()
lx.emit(itemArray)
return lexArrayValue
case stringStart:
if lx.accept(stringStart) {
if lx.accept(stringStart) {
lx.ignore() // Ignore """
return lexMultilineString
}
lx.backup()
}
lx.ignore() // ignore the '"'
return lexString
case rawStringStart:
if lx.accept(rawStringStart) {
if lx.accept(rawStringStart) {
lx.ignore() // Ignore '''
return lexMultilineRawString
}
lx.backup()
}
lx.ignore() // ignore the "'"
return lexRawString
case '+', '-':
return lexNumberStart
case '.': // special error case, be kind to users
return lx.errorf("Floats must start with a digit, not '.'.")
}
if unicode.IsLetter(r) {
// Be permissive here; lexBool will give a nice error if the
// user wrote something like
// x = foo
// (i.e. not 'true' or 'false' but is something else word-like.)
lx.backup()
return lexBool
}
return lx.errorf("Expected value but found %q instead.", r)
}
// lexArrayValue consumes one value in an array. It assumes that '[' or ','
// have already been consumed. All whitespace and new lines are ignored.
func lexArrayValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValue)
case r == commentStart:
lx.push(lexArrayValue)
return lexCommentStart
case r == arrayValTerm:
return lx.errorf("Unexpected array value terminator %q.",
arrayValTerm)
case r == arrayEnd:
return lexArrayEnd
}
lx.backup()
lx.push(lexArrayValueEnd)
return lexValue
}
// lexArrayValueEnd consumes the cruft between values of an array. Namely,
// it ignores whitespace and expects either a ',' or a ']'.
func lexArrayValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValueEnd)
case r == commentStart:
lx.push(lexArrayValueEnd)
return lexCommentStart
case r == arrayValTerm:
lx.ignore()
return lexArrayValue // move on to the next value
case r == arrayEnd:
return lexArrayEnd
}
return lx.errorf("Expected an array value terminator %q or an array "+
"terminator %q, but got %q instead.", arrayValTerm, arrayEnd, r)
}
// lexArrayEnd finishes the lexing of an array. It assumes that a ']' has
// just been consumed.
func lexArrayEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemArrayEnd)
return lx.pop()
}
// lexString consumes the inner contents of a string. It assumes that the
// beginning '"' has already been consumed and ignored.
func lexString(lx *lexer) stateFn {
r := lx.next()
switch {
case isNL(r):
return lx.errorf("Strings cannot contain new lines.")
case r == '\\':
lx.push(lexString)
return lexStringEscape
case r == stringEnd:
lx.backup()
lx.emit(itemString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexString
}
// lexMultilineString consumes the inner contents of a string. It assumes that
// the beginning '"""' has already been consumed and ignored.
func lexMultilineString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == '\\':
return lexMultilineStringEscape
case r == stringEnd:
if lx.accept(stringEnd) {
if lx.accept(stringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineString
}
// lexRawString consumes a raw string. Nothing can be escaped in such a string.
// It assumes that the beginning "'" has already been consumed and ignored.
func lexRawString(lx *lexer) stateFn {
r := lx.next()
switch {
case isNL(r):
return lx.errorf("Strings cannot contain new lines.")
case r == rawStringEnd:
lx.backup()
lx.emit(itemRawString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexRawString
}
// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
// a string. It assumes that the beginning "'''" has already been consumed and
// ignored.
func lexMultilineRawString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == rawStringEnd:
if lx.accept(rawStringEnd) {
if lx.accept(rawStringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemRawMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineRawString
}
// lexMultilineStringEscape consumes an escaped character. It assumes that the
// preceding '\\' has already been consumed.
func lexMultilineStringEscape(lx *lexer) stateFn {
// Handle the special case first:
if isNL(lx.next()) {
return lexMultilineString
}
lx.backup()
lx.push(lexMultilineString)
return lexStringEscape(lx)
}
func lexStringEscape(lx *lexer) stateFn {
r := lx.next()
switch r {
case 'b', 't', 'n', 'f', 'r', '"', '\\':
return lx.pop()
case 'u':
return lexShortUnicodeEscape
case 'U':
return lexLongUnicodeEscape
}
return lx.errorf("Invalid escape character %q. Only the following "+
"escape characters are allowed: "+
"\\b, \\t, \\n, \\f, \\r, \\\", \\\\, "+
"\\uXXXX and \\UXXXXXXXX.", r)
}
func lexShortUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 4; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf("Expected four hexadecimal digits after '\\u', "+
"but got '%s' instead.", lx.current())
}
}
return lx.pop()
}
func lexLongUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 8; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf("Expected eight hexadecimal digits after '\\U', "+
"but got '%s' instead.", lx.current())
}
}
return lx.pop()
}
// lexNumberOrDateStart consumes either an integer, a float, or datetime.
func lexNumberOrDateStart(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '_':
return lexNumber
case 'e', 'E':
return lexFloat
case '.':
return lx.errorf("Floats must start with a digit, not '.'.")
}
return lx.errorf("Expected a digit but got %q.", r)
}
// lexNumberOrDate consumes either an integer, float or datetime.
func lexNumberOrDate(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '-':
return lexDatetime
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexDatetime consumes a Datetime, to a first approximation.
// The parser validates that it matches one of the accepted formats.
func lexDatetime(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexDatetime
}
switch r {
case '-', 'T', ':', '.', 'Z':
return lexDatetime
}
lx.backup()
lx.emit(itemDatetime)
return lx.pop()
}
// lexNumberStart consumes either an integer or a float. It assumes that a sign
// has already been read, but that *no* digits have been consumed.
// lexNumberStart will move to the appropriate integer or float states.
func lexNumberStart(lx *lexer) stateFn {
// We MUST see a digit. Even floats have to start with a digit.
r := lx.next()
if !isDigit(r) {
if r == '.' {
return lx.errorf("Floats must start with a digit, not '.'.")
}
return lx.errorf("Expected a digit but got %q.", r)
}
return lexNumber
}
// lexNumber consumes an integer or a float after seeing the first digit.
func lexNumber(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumber
}
switch r {
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexFloat consumes the elements of a float. It allows any sequence of
// float-like characters, so floats emitted by the lexer are only a first
// approximation and must be validated by the parser.
func lexFloat(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexFloat
}
switch r {
case '_', '.', '-', '+', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemFloat)
return lx.pop()
}
// lexBool consumes a bool string: 'true' or 'false'.
func lexBool(lx *lexer) stateFn {
var rs []rune
for {
r := lx.next()
if r == eof || isWhitespace(r) || isNL(r) {
lx.backup()
break
}
rs = append(rs, r)
}
s := string(rs)
switch s {
case "true", "false":
lx.emit(itemBool)
return lx.pop()
}
return lx.errorf("Expected value but found %q instead.", s)
}
// lexCommentStart begins the lexing of a comment. It will emit
// itemCommentStart and consume no characters, passing control to lexComment.
func lexCommentStart(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemCommentStart)
return lexComment
}
// lexComment lexes an entire comment. It assumes that '#' has been consumed.
// It will consume *up to* the first new line character, and pass control
// back to the last state on the stack.
func lexComment(lx *lexer) stateFn {
r := lx.peek()
if isNL(r) || r == eof {
lx.emit(itemText)
return lx.pop()
}
lx.next()
return lexComment
}
// lexSkip ignores all slurped input and moves on to the next state.
func lexSkip(lx *lexer, nextState stateFn) stateFn {
return func(lx *lexer) stateFn {
lx.ignore()
return nextState
}
}
// isWhitespace returns true if `r` is a whitespace character according
// to the spec.
func isWhitespace(r rune) bool {
return r == '\t' || r == ' '
}
func isNL(r rune) bool {
return r == '\n' || r == '\r'
}
func isDigit(r rune) bool {
return r >= '0' && r <= '9'
}
func isHexadecimal(r rune) bool {
return (r >= '0' && r <= '9') ||
(r >= 'a' && r <= 'f') ||
(r >= 'A' && r <= 'F')
}
func isBareKeyChar(r rune) bool {
return (r >= 'A' && r <= 'Z') ||
(r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') ||
r == '_' ||
r == '-'
}
func (itype itemType) String() string {
switch itype {
case itemError:
return "Error"
case itemNIL:
return "NIL"
case itemEOF:
return "EOF"
case itemText:
return "Text"
case itemString, itemRawString, itemMultilineString, itemRawMultilineString:
return "String"
case itemBool:
return "Bool"
case itemInteger:
return "Integer"
case itemFloat:
return "Float"
case itemDatetime:
return "DateTime"
case itemTableStart:
return "TableStart"
case itemTableEnd:
return "TableEnd"
case itemArrayTableStart:
return "ArrayTableStart"
case itemArrayTableEnd:
return "ArrayTableEnd"
case itemKeyStart:
return "KeyStart"
case itemArray:
return "Array"
case itemArrayEnd:
return "ArrayEnd"
case itemCommentStart:
return "CommentStart"
}
panic(fmt.Sprintf("BUG: Unknown type '%d'.", int(itype)))
}
func (item item) String() string {
return fmt.Sprintf("(%s, %s)", item.typ.String(), item.val)
}

557
vendor/github.com/burntsushi/toml/parse.go generated vendored Normal file

@@ -0,0 +1,557 @@
package toml
import (
"fmt"
"strconv"
"strings"
"time"
"unicode"
"unicode/utf8"
)
type parser struct {
mapping map[string]interface{}
types map[string]tomlType
lx *lexer
// A list of keys in the order that they appear in the TOML data.
ordered []Key
// the full key for the current hash in scope
context Key
// the base key name for everything except hashes
currentKey string
// rough approximation of line number
approxLine int
// A map of 'key.group.names' to whether they were created implicitly.
implicits map[string]bool
}
type parseError string
func (pe parseError) Error() string {
return string(pe)
}
func parse(data string) (p *parser, err error) {
defer func() {
if r := recover(); r != nil {
var ok bool
if err, ok = r.(parseError); ok {
return
}
panic(r)
}
}()
p = &parser{
mapping: make(map[string]interface{}),
types: make(map[string]tomlType),
lx: lex(data),
ordered: make([]Key, 0),
implicits: make(map[string]bool),
}
for {
item := p.next()
if item.typ == itemEOF {
break
}
p.topLevel(item)
}
return p, nil
}
func (p *parser) panicf(format string, v ...interface{}) {
msg := fmt.Sprintf("Near line %d (last key parsed '%s'): %s",
p.approxLine, p.current(), fmt.Sprintf(format, v...))
panic(parseError(msg))
}
func (p *parser) next() item {
it := p.lx.nextItem()
if it.typ == itemError {
p.panicf("%s", it.val)
}
return it
}
func (p *parser) bug(format string, v ...interface{}) {
panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
}
func (p *parser) expect(typ itemType) item {
it := p.next()
p.assertEqual(typ, it.typ)
return it
}
func (p *parser) assertEqual(expected, got itemType) {
if expected != got {
p.bug("Expected '%s' but got '%s'.", expected, got)
}
}
func (p *parser) topLevel(item item) {
switch item.typ {
case itemCommentStart:
p.approxLine = item.line
p.expect(itemText)
case itemTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemTableEnd, kg.typ)
p.establishContext(key, false)
p.setType("", tomlHash)
p.ordered = append(p.ordered, key)
case itemArrayTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemArrayTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemArrayTableEnd, kg.typ)
p.establishContext(key, true)
p.setType("", tomlArrayHash)
p.ordered = append(p.ordered, key)
case itemKeyStart:
kname := p.next()
p.approxLine = kname.line
p.currentKey = p.keyString(kname)
val, typ := p.value(p.next())
p.setValue(p.currentKey, val)
p.setType(p.currentKey, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
p.currentKey = ""
default:
p.bug("Unexpected type at top level: %s", item.typ)
}
}
// Gets a string for a key (or part of a key in a table name).
func (p *parser) keyString(it item) string {
switch it.typ {
case itemText:
return it.val
case itemString, itemMultilineString,
itemRawString, itemRawMultilineString:
s, _ := p.value(it)
return s.(string)
default:
p.bug("Unexpected key type: %s", it.typ)
panic("unreachable")
}
}
// value translates an expected value from the lexer into a Go value wrapped
// as an empty interface.
func (p *parser) value(it item) (interface{}, tomlType) {
switch it.typ {
case itemString:
return p.replaceEscapes(it.val), p.typeOfPrimitive(it)
case itemMultilineString:
trimmed := stripFirstNewline(stripEscapedWhitespace(it.val))
return p.replaceEscapes(trimmed), p.typeOfPrimitive(it)
case itemRawString:
return it.val, p.typeOfPrimitive(it)
case itemRawMultilineString:
return stripFirstNewline(it.val), p.typeOfPrimitive(it)
case itemBool:
switch it.val {
case "true":
return true, p.typeOfPrimitive(it)
case "false":
return false, p.typeOfPrimitive(it)
}
p.bug("Expected boolean value, but got '%s'.", it.val)
case itemInteger:
if !numUnderscoresOK(it.val) {
p.panicf("Invalid integer %q: underscores must be surrounded by digits",
it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseInt(val, 10, 64)
if err != nil {
// Distinguish integer values. Normally, it'd be a bug if the lexer
// provides an invalid integer, but it's possible that the number is
// out of range of valid values (which the lexer cannot determine).
// So mark the former as a bug but the latter as a legitimate user
// error.
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Integer '%s' is out of the range of 64-bit "+
"signed integers.", it.val)
} else {
p.bug("Expected integer value, but got '%s'.", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemFloat:
parts := strings.FieldsFunc(it.val, func(r rune) bool {
switch r {
case '.', 'e', 'E':
return true
}
return false
})
for _, part := range parts {
if !numUnderscoresOK(part) {
p.panicf("Invalid float %q: underscores must be "+
"surrounded by digits", it.val)
}
}
if !numPeriodsOK(it.val) {
// As a special case, numbers like '123.' or '1.e2',
// which are valid as far as Go/strconv are concerned,
// must be rejected because TOML says that a fractional
// part consists of '.' followed by 1+ digits.
p.panicf("Invalid float %q: '.' must be followed "+
"by one or more digits", it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseFloat(val, 64)
if err != nil {
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Float '%s' is out of the range of 64-bit "+
"IEEE-754 floating-point numbers.", it.val)
} else {
p.panicf("Invalid float value: %q", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemDatetime:
var t time.Time
var ok bool
var err error
for _, format := range []string{
"2006-01-02T15:04:05Z07:00",
"2006-01-02T15:04:05",
"2006-01-02",
} {
t, err = time.ParseInLocation(format, it.val, time.Local)
if err == nil {
ok = true
break
}
}
if !ok {
p.panicf("Invalid TOML Datetime: %q.", it.val)
}
return t, p.typeOfPrimitive(it)
case itemArray:
array := make([]interface{}, 0)
types := make([]tomlType, 0)
for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
val, typ := p.value(it)
array = append(array, val)
types = append(types, typ)
}
return array, p.typeOfArray(types)
}
p.bug("Unexpected value type: %s", it.typ)
panic("unreachable")
}
// numUnderscoresOK checks whether each underscore in s is surrounded by
// characters that are not underscores.
func numUnderscoresOK(s string) bool {
accept := false
for _, r := range s {
if r == '_' {
if !accept {
return false
}
accept = false
continue
}
accept = true
}
return accept
}
// numPeriodsOK checks whether every period in s is followed by a digit.
func numPeriodsOK(s string) bool {
period := false
for _, r := range s {
if period && !isDigit(r) {
return false
}
period = r == '.'
}
return !period
}
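Taken together, these two validators encode TOML's rules that underscores in numbers must sit between digits and that a fractional part must have digits after the period. The sketch below re-implements them as a standalone program (the functions are unexported in the package, so these copies are ours for illustration):

```go
package main

import "fmt"

// numUnderscoresOK reports whether every '_' in s has a non-underscore
// character on both sides (and s does not start or end with '_').
func numUnderscoresOK(s string) bool {
	accept := false
	for _, r := range s {
		if r == '_' {
			if !accept {
				return false
			}
			accept = false
			continue
		}
		accept = true
	}
	return accept
}

// numPeriodsOK reports whether every '.' in s is followed by a digit.
func numPeriodsOK(s string) bool {
	period := false
	for _, r := range s {
		if period && !(r >= '0' && r <= '9') {
			return false
		}
		period = r == '.'
	}
	return !period
}

func main() {
	fmt.Println(numUnderscoresOK("1_000")) // true
	fmt.Println(numUnderscoresOK("_1000")) // false
	fmt.Println(numPeriodsOK("3.14"))      // true
	fmt.Println(numPeriodsOK("123."))      // false
}
```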
// establishContext sets the current context of the parser,
// where the context is either a hash or an array of hashes. Which one is
// set depends on the value of the `array` parameter.
//
// Establishing the context also makes sure that the key isn't a duplicate, and
// will create implicit hashes automatically.
func (p *parser) establishContext(key Key, array bool) {
var ok bool
// Always start at the top level and drill down for our context.
hashContext := p.mapping
keyContext := make(Key, 0)
// We only need implicit hashes for key[0:-1]
for _, k := range key[0 : len(key)-1] {
_, ok = hashContext[k]
keyContext = append(keyContext, k)
// No key? Make an implicit hash and move on.
if !ok {
p.addImplicit(keyContext)
hashContext[k] = make(map[string]interface{})
}
// If the hash context is actually an array of tables, then set
// the hash context to the last element in that array.
//
// Otherwise, it better be a table, since this MUST be a key group (by
// virtue of it not being the last element in a key).
switch t := hashContext[k].(type) {
case []map[string]interface{}:
hashContext = t[len(t)-1]
case map[string]interface{}:
hashContext = t
default:
p.panicf("Key '%s' was already created as a hash.", keyContext)
}
}
p.context = keyContext
if array {
// If this is the first element for this array, then allocate a new
// list of tables for it.
k := key[len(key)-1]
if _, ok := hashContext[k]; !ok {
hashContext[k] = make([]map[string]interface{}, 0, 5)
}
// Add a new table. But make sure the key hasn't already been used
// for something else.
if hash, ok := hashContext[k].([]map[string]interface{}); ok {
hashContext[k] = append(hash, make(map[string]interface{}))
} else {
p.panicf("Key '%s' was already created and cannot be used as "+
"an array.", keyContext)
}
} else {
p.setValue(key[len(key)-1], make(map[string]interface{}))
}
p.context = append(p.context, key[len(key)-1])
}
// setValue sets the given key to the given value in the current context.
// It will make sure that the key hasn't already been defined, and will
// account for implicit key groups.
func (p *parser) setValue(key string, value interface{}) {
var tmpHash interface{}
var ok bool
hash := p.mapping
keyContext := make(Key, 0)
for _, k := range p.context {
keyContext = append(keyContext, k)
if tmpHash, ok = hash[k]; !ok {
p.bug("Context for key '%s' has not been established.", keyContext)
}
switch t := tmpHash.(type) {
case []map[string]interface{}:
// The context is an array of tables. Pick the most recent table
// defined as the current hash.
hash = t[len(t)-1]
case map[string]interface{}:
hash = t
default:
p.bug("Expected hash to have type 'map[string]interface{}', but "+
"it has '%T' instead.", tmpHash)
}
}
keyContext = append(keyContext, key)
if _, ok := hash[key]; ok {
// Typically, if the given key has already been set, then we have
// to raise an error since duplicate keys are disallowed. However,
// it's possible that a key was previously defined implicitly. In this
// case, it is allowed to be redefined concretely. (See the
// `tests/valid/implicit-and-explicit-after.toml` test in `toml-test`.)
//
// But we have to make sure to stop marking it as an implicit. (So that
// another redefinition provokes an error.)
//
// Note that since it has already been defined (as a hash), we don't
// want to overwrite it. So our business is done.
if p.isImplicit(keyContext) {
p.removeImplicit(keyContext)
return
}
// Otherwise, we have a concrete key trying to override a previous
// key, which is *always* wrong.
p.panicf("Key '%s' has already been defined.", keyContext)
}
hash[key] = value
}
// setType sets the type of a particular value at a given key.
// It should be called immediately AFTER setValue.
//
// Note that if `key` is empty, then the type given will be applied to the
// current context (which is either a table or an array of tables).
func (p *parser) setType(key string, typ tomlType) {
keyContext := make(Key, 0, len(p.context)+1)
for _, k := range p.context {
keyContext = append(keyContext, k)
}
if len(key) > 0 { // allow type setting for hashes
keyContext = append(keyContext, key)
}
p.types[keyContext.String()] = typ
}
// addImplicit sets the given Key as having been created implicitly.
func (p *parser) addImplicit(key Key) {
p.implicits[key.String()] = true
}
// removeImplicit stops tagging the given key as having been implicitly
// created.
func (p *parser) removeImplicit(key Key) {
p.implicits[key.String()] = false
}
// isImplicit returns true if the key group pointed to by the key was created
// implicitly.
func (p *parser) isImplicit(key Key) bool {
return p.implicits[key.String()]
}
// current returns the full key name of the current context.
func (p *parser) current() string {
if len(p.currentKey) == 0 {
return p.context.String()
}
if len(p.context) == 0 {
return p.currentKey
}
return fmt.Sprintf("%s.%s", p.context, p.currentKey)
}
func stripFirstNewline(s string) string {
if len(s) == 0 || s[0] != '\n' {
return s
}
return s[1:]
}
func stripEscapedWhitespace(s string) string {
esc := strings.Split(s, "\\\n")
if len(esc) > 1 {
for i := 1; i < len(esc); i++ {
esc[i] = strings.TrimLeftFunc(esc[i], unicode.IsSpace)
}
}
return strings.Join(esc, "")
}
func (p *parser) replaceEscapes(str string) string {
var replaced []rune
s := []byte(str)
r := 0
for r < len(s) {
if s[r] != '\\' {
c, size := utf8.DecodeRune(s[r:])
r += size
replaced = append(replaced, c)
continue
}
r += 1
if r >= len(s) {
p.bug("Escape sequence at end of string.")
return ""
}
switch s[r] {
default:
p.bug("Expected valid escape code after \\, but got %q.", s[r])
return ""
case 'b':
replaced = append(replaced, rune(0x0008))
r += 1
case 't':
replaced = append(replaced, rune(0x0009))
r += 1
case 'n':
replaced = append(replaced, rune(0x000A))
r += 1
case 'f':
replaced = append(replaced, rune(0x000C))
r += 1
case 'r':
replaced = append(replaced, rune(0x000D))
r += 1
case '"':
replaced = append(replaced, rune(0x0022))
r += 1
case '\\':
replaced = append(replaced, rune(0x005C))
r += 1
case 'u':
// At this point, we know we have a Unicode escape of the form
// `uXXXX` at [r, r+5). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+5])
replaced = append(replaced, escaped)
r += 5
case 'U':
// At this point, we know we have a Unicode escape of the form
// `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+9])
replaced = append(replaced, escaped)
r += 9
}
}
return string(replaced)
}
func (p *parser) asciiEscapeToUnicode(bs []byte) rune {
s := string(bs)
hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
if err != nil {
p.bug("Could not parse '%s' as a hexadecimal number, but the "+
"lexer claims it's OK: %s", s, err)
}
if !utf8.ValidRune(rune(hex)) {
p.panicf("Escaped character '\\u%s' is not valid UTF-8.", s)
}
return rune(hex)
}
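The conversion above is just hex digits to a rune, plus a validity check that rejects values (such as surrogates) that are not Unicode scalar values. A self-contained sketch of the same idea, with our own function name since the original is unexported:

```go
package main

import (
	"fmt"
	"strconv"
	"unicode/utf8"
)

// hexEscapeToRune converts the hex digits of a \uXXXX or \UXXXXXXXX escape
// into a rune, rejecting values that are not valid Unicode scalar values.
func hexEscapeToRune(hexDigits string) (rune, error) {
	n, err := strconv.ParseUint(hexDigits, 16, 32)
	if err != nil {
		return 0, err
	}
	if !utf8.ValidRune(rune(n)) {
		return 0, fmt.Errorf("\\u%s is not a valid Unicode scalar value", hexDigits)
	}
	return rune(n), nil
}

func main() {
	r, _ := hexEscapeToRune("00E9")
	fmt.Printf("%c\n", r) // é
	_, err := hexEscapeToRune("D800") // a UTF-16 surrogate: not a valid rune
	fmt.Println(err != nil)           // true
}
```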
func isStringType(ty itemType) bool {
return ty == itemString || ty == itemMultilineString ||
ty == itemRawString || ty == itemRawMultilineString
}

1
vendor/github.com/burntsushi/toml/session.vim generated vendored Normal file

@@ -0,0 +1 @@
au BufWritePost *.go silent!make tags > /dev/null 2>&1

91
vendor/github.com/burntsushi/toml/type_check.go generated vendored Normal file

@@ -0,0 +1,91 @@
package toml
// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be moving
// toward adding real composite types.
type tomlType interface {
typeString() string
}
// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
if t1 == nil || t2 == nil {
return false
}
return t1.typeString() == t2.typeString()
}
func typeIsHash(t tomlType) bool {
return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}
type tomlBaseType string
func (btype tomlBaseType) typeString() string {
return string(btype)
}
func (btype tomlBaseType) String() string {
return btype.typeString()
}
var (
tomlInteger tomlBaseType = "Integer"
tomlFloat tomlBaseType = "Float"
tomlDatetime tomlBaseType = "Datetime"
tomlString tomlBaseType = "String"
tomlBool tomlBaseType = "Bool"
tomlArray tomlBaseType = "Array"
tomlHash tomlBaseType = "Hash"
tomlArrayHash tomlBaseType = "ArrayHash"
)
// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemMultilineString, itemRawString,
// itemRawMultilineString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
switch lexItem.typ {
case itemInteger:
return tomlInteger
case itemFloat:
return tomlFloat
case itemDatetime:
return tomlDatetime
case itemString:
return tomlString
case itemMultilineString:
return tomlString
case itemRawString:
return tomlString
case itemRawMultilineString:
return tomlString
case itemBool:
return tomlBool
}
p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
panic("unreachable")
}
// typeOfArray returns a tomlType for an array given a list of types of its
// values.
//
// In the current spec, if an array is homogeneous, then its type is always
// "Array". If the array is not homogeneous, an error is generated.
func (p *parser) typeOfArray(types []tomlType) tomlType {
// Empty arrays are cool.
if len(types) == 0 {
return tomlArray
}
theType := types[0]
for _, t := range types[1:] {
if !typeEqual(theType, t) {
p.panicf("Array contains values of type '%s' and '%s', but "+
"arrays must be homogeneous.", theType, t)
}
}
return tomlArray
}
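Since typeEqual compares types by their string names, the homogeneity check in typeOfArray reduces to comparing every element's type name against the first one. A minimal sketch of that rule, using plain strings in place of tomlType:

```go
package main

import "fmt"

// homogeneous reports whether every element type matches the first one,
// mirroring typeOfArray's rule that TOML arrays must not mix types.
func homogeneous(types []string) bool {
	if len(types) == 0 {
		return true // empty arrays are fine
	}
	for _, t := range types[1:] {
		if t != types[0] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(homogeneous([]string{"Integer", "Integer"})) // true
	fmt.Println(homogeneous([]string{"Integer", "String"}))  // false
}
```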

242
vendor/github.com/burntsushi/toml/type_fields.go generated vendored Normal file

@@ -0,0 +1,242 @@
package toml
// Struct field handling is adapted from code in encoding/json:
//
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the Go distribution.
import (
"reflect"
"sort"
"sync"
)
// A field represents a single field found in a struct.
type field struct {
name string // the name of the field (`toml` tag included)
tag bool // whether field has a `toml` tag
index []int // represents the depth of an anonymous field
typ reflect.Type // the type of the field
}
// byName sorts fields by name, breaking ties with depth,
// then breaking ties with "name came from toml tag", then
// breaking ties with index sequence.
type byName []field
func (x byName) Len() int { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
if x[i].name != x[j].name {
return x[i].name < x[j].name
}
if len(x[i].index) != len(x[j].index) {
return len(x[i].index) < len(x[j].index)
}
if x[i].tag != x[j].tag {
return x[i].tag
}
return byIndex(x).Less(i, j)
}
// byIndex sorts fields by index sequence.
type byIndex []field
func (x byIndex) Len() int { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
for k, xik := range x[i].index {
if k >= len(x[j].index) {
return false
}
if xik != x[j].index[k] {
return xik < x[j].index[k]
}
}
return len(x[i].index) < len(x[j].index)
}
// typeFields returns a list of fields that TOML should recognize for the given
// type. The algorithm is breadth-first search over the set of structs to
// include - the top struct and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
// Anonymous fields to explore at the current level and the next.
current := []field{}
next := []field{{typ: t}}
// Count of queued names for current level and the next.
count := map[reflect.Type]int{}
nextCount := map[reflect.Type]int{}
// Types already visited at an earlier level.
visited := map[reflect.Type]bool{}
// Fields found.
var fields []field
for len(next) > 0 {
current, next = next, current[:0]
count, nextCount = nextCount, map[reflect.Type]int{}
for _, f := range current {
if visited[f.typ] {
continue
}
visited[f.typ] = true
// Scan f.typ for fields to include.
for i := 0; i < f.typ.NumField(); i++ {
sf := f.typ.Field(i)
if sf.PkgPath != "" && !sf.Anonymous { // unexported
continue
}
opts := getOptions(sf.Tag)
if opts.skip {
continue
}
index := make([]int, len(f.index)+1)
copy(index, f.index)
index[len(f.index)] = i
ft := sf.Type
if ft.Name() == "" && ft.Kind() == reflect.Ptr {
// Follow pointer.
ft = ft.Elem()
}
// Record found field and index sequence.
if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
tagged := opts.name != ""
name := opts.name
if name == "" {
name = sf.Name
}
fields = append(fields, field{name, tagged, index, ft})
if count[f.typ] > 1 {
// If there were multiple instances, add a second,
// so that the annihilation code will see a duplicate.
// It only cares about the distinction between 1 or 2,
// so don't bother generating any more copies.
fields = append(fields, fields[len(fields)-1])
}
continue
}
// Record new anonymous struct to explore in next round.
nextCount[ft]++
if nextCount[ft] == 1 {
f := field{name: ft.Name(), index: index, typ: ft}
next = append(next, f)
}
}
}
}
sort.Sort(byName(fields))
// Delete all fields that are hidden by the Go rules for embedded fields,
// except that fields with TOML tags are promoted.
// The fields are sorted in primary order of name, secondary order
// of field index length. Loop over names; for each name, delete
// hidden fields by choosing the one dominant field that survives.
out := fields[:0]
for advance, i := 0, 0; i < len(fields); i += advance {
// One iteration per name.
// Find the sequence of fields with the name of this first field.
fi := fields[i]
name := fi.name
for advance = 1; i+advance < len(fields); advance++ {
fj := fields[i+advance]
if fj.name != name {
break
}
}
if advance == 1 { // Only one field with this name
out = append(out, fi)
continue
}
dominant, ok := dominantField(fields[i : i+advance])
if ok {
out = append(out, dominant)
}
}
fields = out
sort.Sort(byIndex(fields))
return fields
}
// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// TOML tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
// The fields are sorted in increasing index-length order. The winner
// must therefore be one with the shortest index length. Drop all
// longer entries, which is easy: just truncate the slice.
length := len(fields[0].index)
tagged := -1 // Index of first tagged field.
for i, f := range fields {
if len(f.index) > length {
fields = fields[:i]
break
}
if f.tag {
if tagged >= 0 {
// Multiple tagged fields at the same level: conflict.
// Return no field.
return field{}, false
}
tagged = i
}
}
if tagged >= 0 {
return fields[tagged], true
}
// All remaining fields have the same length. If there's more than one,
// we have a conflict (two fields named "X" at the same level) and we
// return no field.
if len(fields) > 1 {
return field{}, false
}
return fields[0], true
}
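The "shortest index length wins" rule in dominantField mirrors Go's own field shadowing for embedded structs, which a tiny example makes concrete: a field at a shallower embedding depth dominates one deeper down (tags then break ties among fields at the same depth):

```go
package main

import "fmt"

type Inner struct {
	Name string
}

type Outer struct {
	Inner
	Name string // shadows Inner.Name: the shallower field dominates
}

func main() {
	o := Outer{Inner: Inner{Name: "inner"}, Name: "outer"}
	fmt.Println(o.Name)       // "outer": depth 1 beats depth 2
	fmt.Println(o.Inner.Name) // "inner": still reachable explicitly
}
```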
var fieldCache struct {
sync.RWMutex
m map[reflect.Type][]field
}
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
func cachedTypeFields(t reflect.Type) []field {
fieldCache.RLock()
f := fieldCache.m[t]
fieldCache.RUnlock()
if f != nil {
return f
}
// Compute fields without lock.
// Might duplicate effort but won't hold other computations back.
f = typeFields(t)
if f == nil {
f = []field{}
}
fieldCache.Lock()
if fieldCache.m == nil {
fieldCache.m = map[reflect.Type][]field{}
}
fieldCache.m[t] = f
fieldCache.Unlock()
return f
}
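cachedTypeFields uses a classic RWMutex caching discipline: read lock for the fast path, compute outside any lock (accepting possible duplicate work), then write lock only to store. A generic sketch of the same pattern with a simpler value type:

```go
package main

import (
	"fmt"
	"sync"
)

// cache demonstrates the lock discipline used by cachedTypeFields.
type cache struct {
	mu sync.RWMutex
	m  map[string]int
}

// get returns the cached value for key, computing and storing it on a miss.
// The compute function may run more than once for the same key under
// concurrency; that duplicated effort is accepted to keep locks short.
func (c *cache) get(key string, compute func(string) int) int {
	c.mu.RLock()
	v, ok := c.m[key]
	c.mu.RUnlock()
	if ok {
		return v
	}
	v = compute(key) // computed without holding any lock
	c.mu.Lock()
	if c.m == nil {
		c.m = map[string]int{}
	}
	c.m[key] = v
	c.mu.Unlock()
	return v
}

func main() {
	c := &cache{}
	fmt.Println(c.get("abc", func(s string) int { return len(s) })) // 3
}
```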

353
vendor/github.com/hashicorp/consul-template/LICENSE generated vendored Normal file

@@ -0,0 +1,353 @@
Mozilla Public License, version 2.0
1. Definitions
1.1. “Contributor”
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. “Contributor Version”
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. “Contribution”
means Covered Software of a particular Contributor.
1.4. “Covered Software”
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. “Incompatible With Secondary Licenses”
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of version
1.1 or earlier of the License, but not also under the terms of a
Secondary License.
1.6. “Executable Form”
means any form of the work other than Source Code Form.
1.7. “Larger Work”
means a work that combines Covered Software with other material, in a separate
file or files, that is not Covered Software.
1.8. “License”
means this document.
1.9. “Licensable”
means having the right to grant, to the maximum extent possible, whether at the
time of the initial grant or subsequently, any and all of the rights conveyed by
this License.
1.10. “Modifications”
means any of the following:
a. any file in Source Code Form that results from an addition to, deletion
from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. “Patent Claims” of a Contributor
means any patent claim(s), including without limitation, method, process,
and apparatus claims, in any patent Licensable by such Contributor that
would be infringed, but for the grant of the License, by the making,
using, selling, offering for sale, having made, import, or transfer of
either its Contributions or its Contributor Version.
1.12. “Secondary License”
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. “Source Code Form”
means the form of the work preferred for making modifications.
1.14. “You” (or “Your”)
means an individual or a legal entity exercising rights under this
License. For legal entities, “You” includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, “control” means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or as
part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its Contributions
or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become
effective for each Contribution on the date the Contributor first distributes
such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this
License. No additional rights or licenses will be implied from the distribution
or licensing of Covered Software under this License. Notwithstanding Section
2.1(b) above, no patent license is granted by a Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of its
Contributions.
This License does not grant any rights in the trademarks, service marks, or
logos of any Contributor (except as may be necessary to comply with the
notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this License
(see Section 10.2) or under the terms of a Secondary License (if permitted
under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions
are its original creation(s) or it has sufficient rights to grant the
rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable
copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under the
terms of this License. You must inform recipients that the Source Code Form
of the Covered Software is governed by the terms of this License, and how
they can obtain a copy of this License. You may not attempt to alter or
restrict the recipient's rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this License,
or sublicense it under different terms, provided that the license for
the Executable Form does not attempt to limit or alter the recipient's
rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for the
Covered Software. If the Larger Work is a combination of Covered Software
with a work governed by one or more Secondary Licenses, and the Covered
Software is not Incompatible With Secondary Licenses, this License permits
You to additionally distribute such Covered Software under the terms of
such Secondary License(s), so that the recipient of the Larger Work may, at
their option, further distribute the Covered Software under the terms of
either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including
copyright notices, patent notices, disclaimers of warranty, or limitations
of liability) contained within the Source Code Form of the Covered
Software, except that You may alter any license notices to the extent
required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on behalf
of any Contributor. You must make it absolutely clear that any such
warranty, support, indemnity, or liability obligation is offered by You
alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute, judicial
order, or regulation then You must: (a) comply with the terms of this License
to the maximum extent possible; and (b) describe the limitations and the code
they affect. Such description must be placed in a text file included with all
distributions of the Covered Software under this License. Except to the
extent prohibited by statute or regulation, such description must be
sufficiently detailed for a recipient of ordinary skill to be able to
understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing basis,
if such Contributor fails to notify You of the non-compliance by some
reasonable means prior to 60 days after You have come back into compliance.
Moreover, Your grants from a particular Contributor are reinstated on an
ongoing basis if such Contributor notifies You of the non-compliance by
some reasonable means, this is the first time You have received notice of
non-compliance with this License from such Contributor, and You become
compliant prior to 30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions, counter-claims,
and cross-claims) alleging that a Contributor Version directly or
indirectly infringes any patent, then the rights granted to You by any and
all Contributors for the Covered Software under Section 2.1 of this License
shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an “as is” basis, without
warranty of any kind, either expressed, implied, or statutory, including,
without limitation, warranties that the Covered Software is free of defects,
merchantable, fit for a particular purpose or non-infringing. The entire
risk as to the quality and performance of the Covered Software is with You.
Should any Covered Software prove defective in any respect, You (not any
Contributor) assume the cost of any necessary servicing, repair, or
correction. This disclaimer of warranty constitutes an essential part of this
License. No use of any Covered Software is authorized under this License
except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from such
party's negligence to the extent applicable law prohibits such limitation.
Some jurisdictions do not allow the exclusion or limitation of incidental or
consequential damages, so this exclusion and limitation may not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts of
a jurisdiction where the defendant maintains its principal place of business
and such litigation shall be governed by laws of that jurisdiction, without
reference to its conflict-of-law provisions. Nothing in this Section shall
prevent a party's ability to bring cross-claims or counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject matter
hereof. If any provision of this License is held to be unenforceable, such
provision shall be reformed only to the extent necessary to make it
enforceable. Any law or regulation which provides that the language of a
contract shall be construed against the drafter shall not be used to construe
this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of
the License under which You originally received the Covered Software, or
under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a modified
version of this License if you rename the license and remove any
references to the name of the license steward (except to note that such
modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file, then
You may include the notice in a location (such as a LICENSE file in a relevant
directory) where a recipient would be likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - “Incompatible With Secondary Licenses” Notice
This Source Code Form is “Incompatible
With Secondary Licenses”, as defined by
the Mozilla Public License, v. 2.0.


@@ -0,0 +1,344 @@
package child
import (
"errors"
"io"
"log"
"math/rand"
"os"
"os/exec"
"sync"
"syscall"
"time"
)
var (
// ErrMissingCommand is the error returned when no command is specified
// to run.
ErrMissingCommand error = errors.New("missing command")
// ExitCodeOK is the default OK exit code.
ExitCodeOK int = 0
// ExitCodeError is the default error code returned when the child exits with
// an error without a more specific code.
ExitCodeError int = 127
)
// Child is a wrapper around a child process which can be used to send signals
// and manage the process's lifecycle.
type Child struct {
sync.RWMutex
stdin io.Reader
stdout, stderr io.Writer
command string
args []string
reloadSignal os.Signal
killSignal os.Signal
killTimeout time.Duration
splay time.Duration
// cmd is the actual child process under management.
cmd *exec.Cmd
// exitCh is the channel over which the process's exit code is returned.
exitCh chan int
// stopLock is the mutex to lock when stopping. stopCh is the circuit breaker
// to force-terminate any waiting splays to kill the process now. stopped is
// a boolean that tells us if we have previously been stopped.
stopLock sync.RWMutex
stopCh chan struct{}
stopped bool
}
// NewInput is input to the NewChild function.
type NewInput struct {
// Stdin is the io.Reader where input will come from. This is sent directly to
// the child process. Stdout and Stderr represent the io.Writer objects where
// the child process will send output and error output.
Stdin io.Reader
Stdout, Stderr io.Writer
// Command is the name of the command to execute. Args are the list of
// arguments to pass when starting the command.
Command string
Args []string
// ReloadSignal is the signal to send to reload this process. This value may
// be nil.
ReloadSignal os.Signal
// KillSignal is the signal to send to gracefully kill this process. This
// value may be nil.
KillSignal os.Signal
// KillTimeout is the amount of time to wait for the process to gracefully
// terminate before force-killing.
KillTimeout time.Duration
// Splay is the maximum random amount of time to wait before sending signals.
// This option helps reduce the thundering herd problem by effectively
// sleeping for a random amount of time before sending the signal. This
// prevents multiple processes from all signaling at the same time. This value
// may be zero (which disables the splay entirely).
Splay time.Duration
}
// New creates a new child process for management with high-level APIs for
// sending signals to the child process, restarting the child process, and
// gracefully terminating the child process.
func New(i *NewInput) (*Child, error) {
if i == nil {
i = new(NewInput)
}
if len(i.Command) == 0 {
return nil, ErrMissingCommand
}
child := &Child{
stdin: i.Stdin,
stdout: i.Stdout,
stderr: i.Stderr,
command: i.Command,
args: i.Args,
reloadSignal: i.ReloadSignal,
killSignal: i.KillSignal,
killTimeout: i.KillTimeout,
splay: i.Splay,
stopCh: make(chan struct{}, 1),
}
return child, nil
}
// ExitCh returns the current exit channel for this child process. This channel
// may change if the process is restarted, so implementers must not cache this
// value.
func (c *Child) ExitCh() <-chan int {
c.RLock()
defer c.RUnlock()
return c.exitCh
}
// Pid returns the pid of the child process. If no child process exists, 0 is
// returned.
func (c *Child) Pid() int {
c.RLock()
defer c.RUnlock()
return c.pid()
}
// Start starts and begins execution of the child process. Any errors that
// occur prior to starting the command are returned immediately. Once the
// command is running, its exit code is delivered over the buffered channel
// returned by ExitCh, and any error the command itself exits with surfaces
// there as a non-zero code.
func (c *Child) Start() error {
log.Printf("[INFO] (child) spawning %q %q", c.command, c.args)
c.Lock()
defer c.Unlock()
return c.start()
}
// Signal sends the signal to the child process, returning any errors that
// occur.
func (c *Child) Signal(s os.Signal) error {
log.Printf("[INFO] (child) receiving signal %q", s.String())
c.RLock()
defer c.RUnlock()
return c.signal(s)
}
// Reload sends the reload signal to the child process and does not wait for a
// response. If no reload signal was provided, the process is restarted and
// replaces the process attached to this Child.
func (c *Child) Reload() error {
if c.reloadSignal == nil {
log.Printf("[INFO] (child) restarting process")
// Take a full lock because start is going to replace the process. We also
// want to make sure that no other routines attempt to send reload signals
// during this transition.
c.Lock()
defer c.Unlock()
c.kill()
return c.start()
} else {
log.Printf("[INFO] (child) reloading process")
// We only need a read lock here because neither the process nor the exit
// channel are changing.
c.RLock()
defer c.RUnlock()
return c.reload()
}
}
// Kill sends the kill signal to the child process and waits for successful
// termination. If no kill signal is defined, the process is killed with the
// most aggressive kill signal. If the process does not gracefully stop within
// the provided KillTimeout, the process is force-killed. If a splay was
// provided, this function will sleep for a random period of time between 0 and
// the provided splay value to reduce the thundering herd problem. This function
// does not return any errors because it guarantees the process will be dead by
// the return of the function call.
func (c *Child) Kill() {
log.Printf("[INFO] (child) killing process")
c.Lock()
defer c.Unlock()
c.kill()
}
// Stop behaves almost identically to Kill except it suppresses future processes
// from being started by this child and it prevents the killing of the child
// process from sending its value back up the exit channel. This is useful
// when doing a graceful shutdown of an application.
func (c *Child) Stop() {
log.Printf("[INFO] (child) stopping process")
c.Lock()
defer c.Unlock()
c.stopLock.Lock()
defer c.stopLock.Unlock()
if c.stopped {
log.Printf("[WARN] (child) already stopped")
return
}
c.kill()
close(c.stopCh)
c.stopped = true
}
func (c *Child) start() error {
cmd := exec.Command(c.command, c.args...)
cmd.Stdin = c.stdin
cmd.Stdout = c.stdout
cmd.Stderr = c.stderr
if err := cmd.Start(); err != nil {
return err
}
c.cmd = cmd
// Create a new exitCh so that previously invoked commands (if any) don't
// cause us to exit, and start a goroutine to wait for that process to end.
exitCh := make(chan int, 1)
go func() {
var code int
err := cmd.Wait()
if err == nil {
code = ExitCodeOK
} else {
code = ExitCodeError
if exiterr, ok := err.(*exec.ExitError); ok {
if status, ok := exiterr.Sys().(syscall.WaitStatus); ok {
code = status.ExitStatus()
}
}
}
// If the child is in the process of killing, do not send a response back
// down the exit channel.
c.stopLock.RLock()
defer c.stopLock.RUnlock()
if c.stopped {
return
}
select {
case <-c.stopCh:
case exitCh <- code:
}
}()
c.exitCh = exitCh
return nil
}
func (c *Child) pid() int {
if !c.running() {
return 0
}
return c.cmd.Process.Pid
}
func (c *Child) signal(s os.Signal) error {
if !c.running() {
return nil
}
return c.cmd.Process.Signal(s)
}
func (c *Child) reload() error {
select {
case <-c.stopCh:
case <-c.randomSplay():
}
return c.signal(c.reloadSignal)
}
func (c *Child) kill() {
if !c.running() {
return
}
exited := false
process := c.cmd.Process
select {
case <-c.stopCh:
case <-c.randomSplay():
}
if c.killSignal != nil {
if err := process.Signal(c.killSignal); err == nil {
// Wait up to the kill timeout for it to exit
killCh := make(chan struct{}, 1)
go func() {
defer close(killCh)
process.Wait()
}()
select {
case <-c.stopCh:
case <-killCh:
exited = true
case <-time.After(c.killTimeout):
}
}
}
if !exited {
process.Kill()
}
c.cmd = nil
}
func (c *Child) running() bool {
return c.cmd != nil && c.cmd.Process != nil
}
func (c *Child) randomSplay() <-chan time.Time {
if c.splay == 0 {
return time.After(0)
}
ns := c.splay.Nanoseconds()
offset := rand.Int63n(ns)
t := time.Duration(offset)
log.Printf("[DEBUG] (child) waiting %.2fs for random splay", t.Seconds())
return time.After(t)
}
@@ -0,0 +1,880 @@
package config
import (
"errors"
"fmt"
"io/ioutil"
"log"
"os"
"path/filepath"
"regexp"
"strings"
"syscall"
"time"
"github.com/hashicorp/consul-template/signals"
"github.com/hashicorp/consul-template/watch"
"github.com/hashicorp/go-multierror"
"github.com/hashicorp/hcl"
"github.com/mitchellh/mapstructure"
)
// The pattern to split the config template syntax on
var configTemplateRe = regexp.MustCompile("([a-zA-Z]:)?([^:]+)")
const (
// DefaultFilePerms are the default file permissions for templates rendered
// onto disk when a specific file permission has not already been specified.
DefaultFilePerms = 0644
// DefaultDedupPrefix is the default prefix used for de-duplication mode
DefaultDedupPrefix = "consul-template/dedup/"
// DefaultCommandTimeout is the amount of time to wait for a command to return.
DefaultCommandTimeout = 30 * time.Second
// DefaultReloadSignal is the default signal for reload.
DefaultReloadSignal = syscall.SIGHUP
// DefaultDumpSignal is the default signal for a core dump.
DefaultDumpSignal = syscall.SIGQUIT
// DefaultKillSignal is the default signal for termination.
DefaultKillSignal = syscall.SIGINT
)
// Config is used to configure Consul Template
type Config struct {
// Path is the path to this configuration file on disk. This value is not
// read from disk but rather dynamically populated by the code, so the Config
// has a reference to the path of the file on disk that created it.
Path string `mapstructure:"-"`
// Consul is the location of the Consul instance to query (may be an IP
// address or FQDN) with port.
Consul string `mapstructure:"consul"`
// Token is the Consul API token.
Token string `mapstructure:"token"`
// ReloadSignal is the signal to listen for a reload event.
ReloadSignal os.Signal `mapstructure:"reload_signal"`
// DumpSignal is the signal to listen for a core dump event.
DumpSignal os.Signal `mapstructure:"dump_signal"`
// KillSignal is the signal to listen for a graceful terminate event.
KillSignal os.Signal `mapstructure:"kill_signal"`
// Auth is the HTTP basic authentication for communicating with Consul.
Auth *AuthConfig `mapstructure:"auth"`
// Vault is the configuration for connecting to a vault server.
Vault *VaultConfig `mapstructure:"vault"`
// SSL indicates we should use a secure connection while talking to
// Consul. This requires Consul to be configured to serve HTTPS.
SSL *SSLConfig `mapstructure:"ssl"`
// Syslog is the configuration for syslog.
Syslog *SyslogConfig `mapstructure:"syslog"`
// Exec is the configuration for exec/supervise mode.
Exec *ExecConfig `mapstructure:"exec"`
// MaxStale is the maximum amount of time for staleness from Consul as given
// by LastContact. If supplied, Consul Template will query all servers instead
// of just the leader.
MaxStale time.Duration `mapstructure:"max_stale"`
// ConfigTemplates is a slice of the ConfigTemplate objects in the config.
ConfigTemplates []*ConfigTemplate `mapstructure:"template"`
// Retry is the duration of time to wait between Consul failures.
Retry time.Duration `mapstructure:"retry"`
// Wait is the quiescence timers.
Wait *watch.Wait `mapstructure:"wait"`
// PidFile is the path on disk where a PID file should be written containing
// this process's PID.
PidFile string `mapstructure:"pid_file"`
// LogLevel is the level with which to log for this config.
LogLevel string `mapstructure:"log_level"`
// Deduplicate is used to configure the dedup settings
Deduplicate *DeduplicateConfig `mapstructure:"deduplicate"`
// setKeys is the list of config keys that were set by the user.
setKeys map[string]struct{}
}
// Copy returns a deep copy of the current configuration. This is useful because
// the nested data structures may be shared.
func (c *Config) Copy() *Config {
config := new(Config)
config.Path = c.Path
config.Consul = c.Consul
config.Token = c.Token
config.ReloadSignal = c.ReloadSignal
config.DumpSignal = c.DumpSignal
config.KillSignal = c.KillSignal
if c.Auth != nil {
config.Auth = &AuthConfig{
Enabled: c.Auth.Enabled,
Username: c.Auth.Username,
Password: c.Auth.Password,
}
}
if c.Vault != nil {
config.Vault = &VaultConfig{
Address: c.Vault.Address,
Token: c.Vault.Token,
UnwrapToken: c.Vault.UnwrapToken,
RenewToken: c.Vault.RenewToken,
}
if c.Vault.SSL != nil {
config.Vault.SSL = &SSLConfig{
Enabled: c.Vault.SSL.Enabled,
Verify: c.Vault.SSL.Verify,
Cert: c.Vault.SSL.Cert,
Key: c.Vault.SSL.Key,
CaCert: c.Vault.SSL.CaCert,
}
}
}
if c.SSL != nil {
config.SSL = &SSLConfig{
Enabled: c.SSL.Enabled,
Verify: c.SSL.Verify,
Cert: c.SSL.Cert,
Key: c.SSL.Key,
CaCert: c.SSL.CaCert,
}
}
if c.Syslog != nil {
config.Syslog = &SyslogConfig{
Enabled: c.Syslog.Enabled,
Facility: c.Syslog.Facility,
}
}
if c.Exec != nil {
config.Exec = &ExecConfig{
Command: c.Exec.Command,
Splay: c.Exec.Splay,
ReloadSignal: c.Exec.ReloadSignal,
KillSignal: c.Exec.KillSignal,
KillTimeout: c.Exec.KillTimeout,
}
}
config.MaxStale = c.MaxStale
config.ConfigTemplates = make([]*ConfigTemplate, len(c.ConfigTemplates))
for i, t := range c.ConfigTemplates {
config.ConfigTemplates[i] = &ConfigTemplate{
Source: t.Source,
Destination: t.Destination,
EmbeddedTemplate: t.EmbeddedTemplate,
Command: t.Command,
CommandTimeout: t.CommandTimeout,
Perms: t.Perms,
Backup: t.Backup,
LeftDelim: t.LeftDelim,
RightDelim: t.RightDelim,
Wait: t.Wait,
}
}
config.Retry = c.Retry
if c.Wait != nil {
config.Wait = &watch.Wait{
Min: c.Wait.Min,
Max: c.Wait.Max,
}
}
config.PidFile = c.PidFile
config.LogLevel = c.LogLevel
if c.Deduplicate != nil {
config.Deduplicate = &DeduplicateConfig{
Enabled: c.Deduplicate.Enabled,
Prefix: c.Deduplicate.Prefix,
TTL: c.Deduplicate.TTL,
}
}
config.setKeys = c.setKeys
return config
}
// Merge merges the values in config into this config object. Values in the
// config object overwrite the values in c.
func (c *Config) Merge(config *Config) {
if config.WasSet("path") {
c.Path = config.Path
}
if config.WasSet("consul") {
c.Consul = config.Consul
}
if config.WasSet("token") {
c.Token = config.Token
}
if config.WasSet("reload_signal") {
c.ReloadSignal = config.ReloadSignal
}
if config.WasSet("dump_signal") {
c.DumpSignal = config.DumpSignal
}
if config.WasSet("kill_signal") {
c.KillSignal = config.KillSignal
}
if config.WasSet("vault") {
if c.Vault == nil {
c.Vault = &VaultConfig{}
}
if config.WasSet("vault.address") {
c.Vault.Address = config.Vault.Address
}
if config.WasSet("vault.token") {
c.Vault.Token = config.Vault.Token
}
if config.WasSet("vault.unwrap_token") {
c.Vault.UnwrapToken = config.Vault.UnwrapToken
}
if config.WasSet("vault.renew_token") {
c.Vault.RenewToken = config.Vault.RenewToken
}
if config.WasSet("vault.ssl") {
if c.Vault.SSL == nil {
c.Vault.SSL = &SSLConfig{}
}
if config.WasSet("vault.ssl.verify") {
c.Vault.SSL.Verify = config.Vault.SSL.Verify
c.Vault.SSL.Enabled = true
}
if config.WasSet("vault.ssl.cert") {
c.Vault.SSL.Cert = config.Vault.SSL.Cert
c.Vault.SSL.Enabled = true
}
if config.WasSet("vault.ssl.key") {
c.Vault.SSL.Key = config.Vault.SSL.Key
c.Vault.SSL.Enabled = true
}
if config.WasSet("vault.ssl.ca_cert") {
c.Vault.SSL.CaCert = config.Vault.SSL.CaCert
c.Vault.SSL.Enabled = true
}
if config.WasSet("vault.ssl.enabled") {
c.Vault.SSL.Enabled = config.Vault.SSL.Enabled
}
}
}
if config.WasSet("auth") {
if c.Auth == nil {
c.Auth = &AuthConfig{}
}
if config.WasSet("auth.username") {
c.Auth.Username = config.Auth.Username
c.Auth.Enabled = true
}
if config.WasSet("auth.password") {
c.Auth.Password = config.Auth.Password
c.Auth.Enabled = true
}
if config.WasSet("auth.enabled") {
c.Auth.Enabled = config.Auth.Enabled
}
}
if config.WasSet("ssl") {
if c.SSL == nil {
c.SSL = &SSLConfig{}
}
if config.WasSet("ssl.verify") {
c.SSL.Verify = config.SSL.Verify
c.SSL.Enabled = true
}
if config.WasSet("ssl.cert") {
c.SSL.Cert = config.SSL.Cert
c.SSL.Enabled = true
}
if config.WasSet("ssl.key") {
c.SSL.Key = config.SSL.Key
c.SSL.Enabled = true
}
if config.WasSet("ssl.ca_cert") {
c.SSL.CaCert = config.SSL.CaCert
c.SSL.Enabled = true
}
if config.WasSet("ssl.enabled") {
c.SSL.Enabled = config.SSL.Enabled
}
}
if config.WasSet("syslog") {
if c.Syslog == nil {
c.Syslog = &SyslogConfig{}
}
if config.WasSet("syslog.facility") {
c.Syslog.Facility = config.Syslog.Facility
c.Syslog.Enabled = true
}
if config.WasSet("syslog.enabled") {
c.Syslog.Enabled = config.Syslog.Enabled
}
}
if config.WasSet("exec") {
if c.Exec == nil {
c.Exec = &ExecConfig{}
}
if config.WasSet("exec.command") {
c.Exec.Command = config.Exec.Command
}
if config.WasSet("exec.splay") {
c.Exec.Splay = config.Exec.Splay
}
if config.WasSet("exec.reload_signal") {
c.Exec.ReloadSignal = config.Exec.ReloadSignal
}
if config.WasSet("exec.kill_signal") {
c.Exec.KillSignal = config.Exec.KillSignal
}
if config.WasSet("exec.kill_timeout") {
c.Exec.KillTimeout = config.Exec.KillTimeout
}
}
if config.WasSet("max_stale") {
c.MaxStale = config.MaxStale
}
if len(config.ConfigTemplates) > 0 {
if c.ConfigTemplates == nil {
c.ConfigTemplates = make([]*ConfigTemplate, 0, 1)
}
for _, template := range config.ConfigTemplates {
c.ConfigTemplates = append(c.ConfigTemplates, &ConfigTemplate{
Source: template.Source,
Destination: template.Destination,
EmbeddedTemplate: template.EmbeddedTemplate,
Command: template.Command,
CommandTimeout: template.CommandTimeout,
Perms: template.Perms,
Backup: template.Backup,
LeftDelim: template.LeftDelim,
RightDelim: template.RightDelim,
Wait: template.Wait,
})
}
}
if config.WasSet("retry") {
c.Retry = config.Retry
}
if config.WasSet("wait") {
c.Wait = &watch.Wait{
Min: config.Wait.Min,
Max: config.Wait.Max,
}
}
if config.WasSet("pid_file") {
c.PidFile = config.PidFile
}
if config.WasSet("log_level") {
c.LogLevel = config.LogLevel
}
if config.WasSet("deduplicate") {
if c.Deduplicate == nil {
c.Deduplicate = &DeduplicateConfig{}
}
if config.WasSet("deduplicate.enabled") {
c.Deduplicate.Enabled = config.Deduplicate.Enabled
}
if config.WasSet("deduplicate.prefix") {
c.Deduplicate.Prefix = config.Deduplicate.Prefix
}
}
if c.setKeys == nil {
c.setKeys = make(map[string]struct{})
}
for k := range config.setKeys {
if _, ok := c.setKeys[k]; !ok {
c.setKeys[k] = struct{}{}
}
}
}
// WasSet determines if the given key was set in the config (as opposed to just
// having the default value).
func (c *Config) WasSet(key string) bool {
if _, ok := c.setKeys[key]; ok {
return true
}
return false
}
// Set is a helper function for marking a key as set.
func (c *Config) Set(key string) {
if c.setKeys == nil {
c.setKeys = make(map[string]struct{})
}
if _, ok := c.setKeys[key]; !ok {
c.setKeys[key] = struct{}{}
}
}
// ParseConfig reads the configuration file at the given path and returns a new
// Config struct with the data populated.
func ParseConfig(path string) (*Config, error) {
var errs *multierror.Error
// Read the contents of the file
contents, err := ioutil.ReadFile(path)
if err != nil {
return nil, fmt.Errorf("error reading config at %q: %s", path, err)
}
// Parse the file (could be HCL or JSON)
var shadow interface{}
if err := hcl.Decode(&shadow, string(contents)); err != nil {
return nil, fmt.Errorf("error decoding config at %q: %s", path, err)
}
// Convert to a map and flatten the keys we want to flatten
parsed, ok := shadow.(map[string]interface{})
if !ok {
return nil, fmt.Errorf("error converting config at %q", path)
}
flattenKeys(parsed, []string{
"auth",
"ssl",
"syslog",
"exec",
"vault",
"deduplicate",
})
// Deprecations
if vault, ok := parsed["vault"].(map[string]interface{}); ok {
if val, ok := vault["renew"]; ok {
log.Println(`[WARN] vault.renew has been renamed to vault.renew_token. ` +
`Update your configuration files and change "renew" to "renew_token".`)
vault["renew_token"] = val
delete(vault, "renew")
}
}
// Create a new, empty config
config := new(Config)
// Use mapstructure to populate the basic config fields
metadata := new(mapstructure.Metadata)
decoder, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
DecodeHook: mapstructure.ComposeDecodeHookFunc(
StringToFileModeFunc(),
signals.StringToSignalFunc(),
watch.StringToWaitDurationHookFunc(),
mapstructure.StringToSliceHookFunc(","),
mapstructure.StringToTimeDurationHookFunc(),
),
ErrorUnused: true,
Metadata: metadata,
Result: config,
})
if err != nil {
errs = multierror.Append(errs, err)
return nil, errs.ErrorOrNil()
}
if err := decoder.Decode(parsed); err != nil {
errs = multierror.Append(errs, err)
return nil, errs.ErrorOrNil()
}
// Store a reference to the path where this config was read from
config.Path = path
// Explicitly check for the nil signal and set the value back to nil
if config.ReloadSignal == signals.SIGNIL {
config.ReloadSignal = nil
}
if config.DumpSignal == signals.SIGNIL {
config.DumpSignal = nil
}
if config.KillSignal == signals.SIGNIL {
config.KillSignal = nil
}
if config.Exec != nil {
if config.Exec.ReloadSignal == signals.SIGNIL {
config.Exec.ReloadSignal = nil
}
if config.Exec.KillSignal == signals.SIGNIL {
config.Exec.KillSignal = nil
}
}
// Setup default values for templates
for _, t := range config.ConfigTemplates {
// Ensure there's a default value for the template's file permissions
if t.Perms == 0000 {
t.Perms = DefaultFilePerms
}
// Ensure we have a default command timeout
if t.CommandTimeout == 0 {
t.CommandTimeout = DefaultCommandTimeout
}
// Set up a default zero wait, which disables it for this
// template.
if t.Wait == nil {
t.Wait = &watch.Wait{}
}
}
// Update the list of set keys
if config.setKeys == nil {
config.setKeys = make(map[string]struct{})
}
for _, key := range metadata.Keys {
if _, ok := config.setKeys[key]; !ok {
config.setKeys[key] = struct{}{}
}
}
config.setKeys["path"] = struct{}{}
d := DefaultConfig()
d.Merge(config)
config = d
return config, errs.ErrorOrNil()
}
// ConfigFromPath iterates and merges all configuration files in a given
// directory, returning the resulting config.
func ConfigFromPath(path string) (*Config, error) {
// Ensure the given filepath exists
if _, err := os.Stat(path); os.IsNotExist(err) {
return nil, fmt.Errorf("config: missing file/folder: %s", path)
}
// Check if a file was given or a path to a directory
stat, err := os.Stat(path)
if err != nil {
return nil, fmt.Errorf("config: error stating file: %s", err)
}
// Recursively parse directories, single load files
if stat.Mode().IsDir() {
// Ensure the given directory is readable
_, err := ioutil.ReadDir(path)
if err != nil {
return nil, fmt.Errorf("config: error listing directory: %s", err)
}
// Create a blank config to merge off of
config := DefaultConfig()
// Potential bug: Walk does not follow symlinks!
err = filepath.Walk(path, func(path string, info os.FileInfo, err error) error {
// If WalkFunc had an error, just return it
if err != nil {
return err
}
// Do nothing for directories
if info.IsDir() {
return nil
}
// Parse and merge the config
newConfig, err := ParseConfig(path)
if err != nil {
return err
}
config.Merge(newConfig)
return nil
})
if err != nil {
return nil, fmt.Errorf("config: walk error: %s", err)
}
return config, nil
} else if stat.Mode().IsRegular() {
return ParseConfig(path)
}
return nil, fmt.Errorf("config: unknown filetype: %q", stat.Mode().String())
}
// DefaultConfig returns the default configuration struct.
func DefaultConfig() *Config {
logLevel := os.Getenv("CONSUL_TEMPLATE_LOG")
if logLevel == "" {
logLevel = "WARN"
}
config := &Config{
Vault: &VaultConfig{
RenewToken: true,
SSL: &SSLConfig{
Enabled: true,
Verify: true,
},
},
Auth: &AuthConfig{
Enabled: false,
},
ReloadSignal: DefaultReloadSignal,
DumpSignal: DefaultDumpSignal,
KillSignal: DefaultKillSignal,
SSL: &SSLConfig{
Enabled: false,
Verify: true,
},
Syslog: &SyslogConfig{
Enabled: false,
Facility: "LOCAL0",
},
Deduplicate: &DeduplicateConfig{
Enabled: false,
Prefix: DefaultDedupPrefix,
TTL: 15 * time.Second,
},
Exec: &ExecConfig{
KillSignal: syscall.SIGTERM,
KillTimeout: 30 * time.Second,
},
ConfigTemplates: make([]*ConfigTemplate, 0),
Retry: 5 * time.Second,
MaxStale: 1 * time.Second,
Wait: &watch.Wait{},
LogLevel: logLevel,
setKeys: make(map[string]struct{}),
}
if v := os.Getenv("CONSUL_HTTP_ADDR"); v != "" {
config.Consul = v
}
if v := os.Getenv("CONSUL_TOKEN"); v != "" {
config.Token = v
}
if v := os.Getenv("VAULT_ADDR"); v != "" {
config.Vault.Address = v
}
if v := os.Getenv("VAULT_TOKEN"); v != "" {
config.Vault.Token = v
}
if v := os.Getenv("VAULT_UNWRAP_TOKEN"); v != "" {
config.Vault.UnwrapToken = true
}
if v := os.Getenv("VAULT_CAPATH"); v != "" {
config.Vault.SSL.Cert = v
}
if v := os.Getenv("VAULT_CACERT"); v != "" {
config.Vault.SSL.CaCert = v
}
if v := os.Getenv("VAULT_SKIP_VERIFY"); v != "" {
config.Vault.SSL.Verify = false
}
return config
}
// AuthConfig is the HTTP basic authentication data.
type AuthConfig struct {
Enabled bool `mapstructure:"enabled"`
Username string `mapstructure:"username"`
Password string `mapstructure:"password"`
}
// String is the string representation of this authentication. If authentication
// is not enabled, this returns the empty string. The username and password will
// be separated by a colon.
func (a *AuthConfig) String() string {
if !a.Enabled {
return ""
}
if a.Password != "" {
return fmt.Sprintf("%s:%s", a.Username, a.Password)
}
return a.Username
}
// ExecConfig is used to configure the application when it runs in
// exec/supervise mode.
type ExecConfig struct {
// Command is the command to execute and watch as a child process.
Command string `mapstructure:"command"`
// Splay is the maximum random amount of time to wait before sending signals
// to the process.
Splay time.Duration `mapstructure:"splay"`
// ReloadSignal is the signal to send to the child process when a template
// changes. This tells the child process that templates have changed.
ReloadSignal os.Signal `mapstructure:"reload_signal"`
// KillSignal is the signal to send to the command to kill it gracefully. The
// default value is "SIGTERM".
KillSignal os.Signal `mapstructure:"kill_signal"`
// KillTimeout is the amount of time to give the process to clean up before
// hard-killing it.
KillTimeout time.Duration `mapstructure:"kill_timeout"`
}
// DeduplicateConfig is used to enable the de-duplication mode, which depends
// on electing a leader per-template and watching of a key. This is used
// to reduce the cost of many instances of CT running the same template.
type DeduplicateConfig struct {
// Controls if deduplication mode is enabled
Enabled bool `mapstructure:"enabled"`
// Controls the KV prefix used. Defaults to DefaultDedupPrefix
Prefix string `mapstructure:"prefix"`
// TTL is the Session TTL used for lock acquisition, defaults to 15 seconds.
TTL time.Duration `mapstructure:"ttl"`
}
// SSLConfig is the configuration for SSL.
type SSLConfig struct {
Enabled bool `mapstructure:"enabled"`
Verify bool `mapstructure:"verify"`
Cert string `mapstructure:"cert"`
Key string `mapstructure:"key"`
CaCert string `mapstructure:"ca_cert"`
}
// SyslogConfig is the configuration for syslog.
type SyslogConfig struct {
Enabled bool `mapstructure:"enabled"`
Facility string `mapstructure:"facility"`
}
// ConfigTemplate is the representation of an input template, output location,
// and optional command to execute when rendered
type ConfigTemplate struct {
Source string `mapstructure:"source"`
Destination string `mapstructure:"destination"`
EmbeddedTemplate string `mapstructure:"contents"`
Command string `mapstructure:"command"`
CommandTimeout time.Duration `mapstructure:"command_timeout"`
Perms os.FileMode `mapstructure:"perms"`
Backup bool `mapstructure:"backup"`
LeftDelim string `mapstructure:"left_delimiter"`
RightDelim string `mapstructure:"right_delimiter"`
Wait *watch.Wait `mapstructure:"wait"`
}
// VaultConfig is the configuration for connecting to a vault server.
type VaultConfig struct {
Address string `mapstructure:"address"`
Token string `mapstructure:"token" json:"-"`
UnwrapToken bool `mapstructure:"unwrap_token"`
RenewToken bool `mapstructure:"renew_token"`
// SSL indicates we should use a secure connection while talking to Vault.
SSL *SSLConfig `mapstructure:"ssl"`
}
// ParseConfigTemplate parses a string into a ConfigTemplate struct
func ParseConfigTemplate(s string) (*ConfigTemplate, error) {
if len(strings.TrimSpace(s)) < 1 {
return nil, errors.New("cannot specify empty template declaration")
}
var source, destination, command string
parts := configTemplateRe.FindAllString(s, -1)
switch len(parts) {
case 1:
source = parts[0]
case 2:
source, destination = parts[0], parts[1]
case 3:
source, destination, command = parts[0], parts[1], parts[2]
default:
return nil, errors.New("invalid template declaration format")
}
return &ConfigTemplate{
Source: source,
Destination: destination,
Command: command,
CommandTimeout: DefaultCommandTimeout,
Perms: DefaultFilePerms,
Wait: &watch.Wait{},
}, nil
}
// flattenKeys is a function that takes a map[string]interface{} and recursively
// flattens any keys that are a []map[string]interface{} where the key is in the
// given list of keys.
func flattenKeys(m map[string]interface{}, keys []string) {
keyMap := make(map[string]struct{})
for _, key := range keys {
keyMap[key] = struct{}{}
}
var flatten func(map[string]interface{})
flatten = func(m map[string]interface{}) {
for k, v := range m {
if _, ok := keyMap[k]; !ok {
continue
}
switch typed := v.(type) {
case []map[string]interface{}:
if len(typed) > 0 {
last := typed[len(typed)-1]
flatten(last)
m[k] = last
} else {
m[k] = nil
}
case map[string]interface{}:
flatten(typed)
m[k] = typed
default:
m[k] = v
}
}
}
flatten(m)
}
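The `configTemplateRe` pattern at the top of this file drives the `source:destination:command` split in `ParseConfigTemplate`. A small standalone sketch of its behavior, including the optional drive-letter group that keeps Windows paths intact across the colon-based split:

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as configTemplateRe above: an optional single-letter drive
// prefix ("C:") is glued to the following colon-free segment so Windows
// paths are not broken apart by the separator.
var configTemplateRe = regexp.MustCompile("([a-zA-Z]:)?([^:]+)")

func main() {
	// Plain three-part declaration: source, destination, command.
	fmt.Println(configTemplateRe.FindAllString(`in.ctmpl:out.conf:service nginx reload`, -1))
	// A Windows source path survives as one piece.
	fmt.Println(configTemplateRe.FindAllString(`C:\in.ctmpl:out.conf`, -1))
}
```

The match count then feeds the `switch len(parts)` in `ParseConfigTemplate`: one part is a bare source, two add a destination, three add a command.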
@@ -0,0 +1,25 @@
package config
import (
"io/ioutil"
"os"
"testing"
)
func TestConfig(contents string, t *testing.T) *Config {
f, err := ioutil.TempFile(os.TempDir(), "")
if err != nil {
t.Fatal(err)
}
_, err = f.Write([]byte(contents))
if err != nil {
t.Fatal(err)
}
config, err := ParseConfig(f.Name())
if err != nil {
t.Fatal(err)
}
return config
}
@@ -0,0 +1,33 @@
package config
import (
"os"
"reflect"
"strconv"
"github.com/mitchellh/mapstructure"
)
// StringToFileModeFunc returns a function that converts strings to os.FileMode
// value. This is designed to be used with mapstructure for parsing out a
// filemode value.
func StringToFileModeFunc() mapstructure.DecodeHookFunc {
return func(
f reflect.Type,
t reflect.Type,
data interface{}) (interface{}, error) {
if f.Kind() != reflect.String {
return data, nil
}
if t != reflect.TypeOf(os.FileMode(0)) {
return data, nil
}
// Convert it by parsing
v, err := strconv.ParseUint(data.(string), 8, 12)
if err != nil {
return data, err
}
return os.FileMode(v), nil
}
}
@@ -0,0 +1,222 @@
package dependency
import (
"encoding/gob"
"errors"
"fmt"
"log"
"regexp"
"sort"
"sync"
"github.com/hashicorp/consul/api"
)
func init() {
gob.Register([]*NodeDetail{})
gob.Register([]*NodeService{})
}
// NodeDetail is a wrapper around the node and its services.
type NodeDetail struct {
Node *Node
Services NodeServiceList
}
// NodeService is a service on a single node.
type NodeService struct {
ID string
Service string
Tags ServiceTags
Port int
Address string
}
// CatalogNode represents a single node from the Consul catalog.
type CatalogNode struct {
sync.Mutex
rawKey string
dataCenter string
stopped bool
stopCh chan struct{}
}
// Fetch queries the Consul API defined by the given client and returns a
// NodeDetail object.
func (d *CatalogNode) Fetch(clients *ClientSet, opts *QueryOptions) (interface{}, *ResponseMetadata, error) {
d.Lock()
if d.stopped {
defer d.Unlock()
return nil, nil, ErrStopped
}
d.Unlock()
if opts == nil {
opts = &QueryOptions{}
}
consulOpts := opts.consulQueryOptions()
if d.dataCenter != "" {
consulOpts.Datacenter = d.dataCenter
}
consul, err := clients.Consul()
if err != nil {
return nil, nil, fmt.Errorf("catalog node: error getting client: %s", err)
}
nodeName := d.rawKey
if nodeName == "" {
log.Printf("[DEBUG] (%s) getting local agent name", d.Display())
nodeName, err = consul.Agent().NodeName()
if err != nil {
return nil, nil, fmt.Errorf("catalog node: error getting local agent: %s", err)
}
}
var n *api.CatalogNode
var qm *api.QueryMeta
dataCh := make(chan struct{})
go func() {
log.Printf("[DEBUG] (%s) querying consul with %+v", d.Display(), consulOpts)
n, qm, err = consul.Catalog().Node(nodeName, consulOpts)
close(dataCh)
}()
select {
case <-d.stopCh:
return nil, nil, ErrStopped
case <-dataCh:
}
if err != nil {
return nil, nil, fmt.Errorf("catalog node: error fetching: %s", err)
}
rm := &ResponseMetadata{
LastIndex: qm.LastIndex,
LastContact: qm.LastContact,
}
if n == nil {
log.Printf("[WARN] (%s) could not find node by that name", d.Display())
var node *NodeDetail
return node, rm, nil
}
services := make(NodeServiceList, 0, len(n.Services))
for _, v := range n.Services {
services = append(services, &NodeService{
ID: v.ID,
Service: v.Service,
Tags: ServiceTags(deepCopyAndSortTags(v.Tags)),
Port: v.Port,
Address: v.Address,
})
}
sort.Stable(services)
node := &NodeDetail{
Node: &Node{
Node: n.Node.Node,
Address: n.Node.Address,
},
Services: services,
}
return node, rm, nil
}
// CanShare reports whether this dependency is shareable.
func (d *CatalogNode) CanShare() bool {
return false
}
// HashCode returns a unique identifier.
func (d *CatalogNode) HashCode() string {
if d.dataCenter != "" {
return fmt.Sprintf("NodeDetail|%s@%s", d.rawKey, d.dataCenter)
}
return fmt.Sprintf("NodeDetail|%s", d.rawKey)
}
// Display prints the human-friendly output.
func (d *CatalogNode) Display() string {
if d.dataCenter != "" {
return fmt.Sprintf(`"node(%s@%s)"`, d.rawKey, d.dataCenter)
}
return fmt.Sprintf(`"node(%s)"`, d.rawKey)
}
// Stop halts the dependency's fetch function.
func (d *CatalogNode) Stop() {
d.Lock()
defer d.Unlock()
if !d.stopped {
close(d.stopCh)
d.stopped = true
}
}
// ParseCatalogNode parses a node name and optional datacenter value.
// If the name is empty or not provided then the current agent is used.
func ParseCatalogNode(s ...string) (*CatalogNode, error) {
switch len(s) {
case 0:
cn := &CatalogNode{stopCh: make(chan struct{})}
return cn, nil
case 1:
cn := &CatalogNode{
rawKey: s[0],
stopCh: make(chan struct{}),
}
return cn, nil
case 2:
dc := s[1]
re := regexp.MustCompile(`\A` +
`(@(?P<datacenter>[[:word:]\.\-]+))?` +
`\z`)
names := re.SubexpNames()
match := re.FindAllStringSubmatch(dc, -1)
if len(match) == 0 {
return nil, errors.New("invalid node dependency format")
}
r := match[0]
m := map[string]string{}
for i, n := range r {
if names[i] != "" {
m[names[i]] = n
}
}
nd := &CatalogNode{
rawKey: s[0],
dataCenter: m["datacenter"],
stopCh: make(chan struct{}),
}
return nd, nil
default:
return nil, fmt.Errorf("expected 0, 1, or 2 arguments, got %d", len(s))
}
}
// Sorting
// NodeServiceList is a sortable list of node service names.
type NodeServiceList []*NodeService
func (s NodeServiceList) Len() int { return len(s) }
func (s NodeServiceList) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s NodeServiceList) Less(i, j int) bool {
if s[i].Service == s[j].Service {
return s[i].ID < s[j].ID
}
return s[i].Service < s[j].Service
}
@@ -0,0 +1,180 @@
package dependency
import (
"encoding/gob"
"errors"
"fmt"
"log"
"regexp"
"sort"
"sync"
"github.com/hashicorp/consul/api"
)
func init() {
gob.Register([]*Node{})
}
// Node is a node entry in Consul
type Node struct {
Node string
Address string
}
// CatalogNodes is the representation of all registered nodes in Consul.
type CatalogNodes struct {
sync.Mutex
rawKey string
DataCenter string
stopped bool
stopCh chan struct{}
}
// Fetch queries the Consul API defined by the given client and returns a slice
// of Node objects.
func (d *CatalogNodes) Fetch(clients *ClientSet, opts *QueryOptions) (interface{}, *ResponseMetadata, error) {
d.Lock()
if d.stopped {
defer d.Unlock()
return nil, nil, ErrStopped
}
d.Unlock()
if opts == nil {
opts = &QueryOptions{}
}
consulOpts := opts.consulQueryOptions()
if d.DataCenter != "" {
consulOpts.Datacenter = d.DataCenter
}
consul, err := clients.Consul()
if err != nil {
return nil, nil, fmt.Errorf("catalog nodes: error getting client: %s", err)
}
var n []*api.Node
var qm *api.QueryMeta
dataCh := make(chan struct{})
go func() {
log.Printf("[DEBUG] (%s) querying Consul with %+v", d.Display(), consulOpts)
n, qm, err = consul.Catalog().Nodes(consulOpts)
close(dataCh)
}()
select {
case <-d.stopCh:
return nil, nil, ErrStopped
case <-dataCh:
}
if err != nil {
return nil, nil, fmt.Errorf("catalog nodes: error fetching: %s", err)
}
log.Printf("[DEBUG] (%s) Consul returned %d nodes", d.Display(), len(n))
nodes := make([]*Node, 0, len(n))
for _, node := range n {
nodes = append(nodes, &Node{
Node: node.Node,
Address: node.Address,
})
}
sort.Stable(NodeList(nodes))
rm := &ResponseMetadata{
LastIndex: qm.LastIndex,
LastContact: qm.LastContact,
}
return nodes, rm, nil
}
// CanShare reports whether this dependency is shareable.
func (d *CatalogNodes) CanShare() bool {
return true
}
// HashCode returns a unique identifier.
func (d *CatalogNodes) HashCode() string {
return fmt.Sprintf("CatalogNodes|%s", d.rawKey)
}
// Display prints the human-friendly output.
func (d *CatalogNodes) Display() string {
if d.rawKey == "" {
return `"nodes"`
}
return fmt.Sprintf(`"nodes(%s)"`, d.rawKey)
}
// Stop halts the dependency's fetch function.
func (d *CatalogNodes) Stop() {
d.Lock()
defer d.Unlock()
if !d.stopped {
close(d.stopCh)
d.stopped = true
}
}
// ParseCatalogNodes parses a string of the format @dc.
func ParseCatalogNodes(s ...string) (*CatalogNodes, error) {
switch len(s) {
case 0:
cn := &CatalogNodes{
rawKey: "",
stopCh: make(chan struct{}),
}
return cn, nil
case 1:
dc := s[0]
re := regexp.MustCompile(`\A` +
`(@(?P<datacenter>[[:word:]\.\-]+))?` +
`\z`)
names := re.SubexpNames()
match := re.FindAllStringSubmatch(dc, -1)
if len(match) == 0 {
return nil, errors.New("invalid node dependency format")
}
r := match[0]
m := map[string]string{}
for i, n := range r {
if names[i] != "" {
m[names[i]] = n
}
}
cn := &CatalogNodes{
rawKey: dc,
DataCenter: m["datacenter"],
stopCh: make(chan struct{}),
}
return cn, nil
default:
return nil, fmt.Errorf("expected 0 or 1 arguments, got %d", len(s))
}
}
// NodeList is a sortable list of node objects by name and then IP address.
type NodeList []*Node
func (s NodeList) Len() int { return len(s) }
func (s NodeList) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s NodeList) Less(i, j int) bool {
if s[i].Node == s[j].Node {
return s[i].Address < s[j].Address
}
return s[i].Node < s[j].Node
}
@@ -0,0 +1,187 @@
package dependency
import (
"encoding/gob"
"errors"
"fmt"
"log"
"regexp"
"sort"
"sync"
"github.com/hashicorp/consul/api"
)
func init() {
gob.Register([]*CatalogService{})
}
// CatalogService is a catalog entry in Consul.
type CatalogService struct {
Name string
Tags ServiceTags
}
// CatalogServices is the representation of a requested catalog service
// dependency from inside a template.
type CatalogServices struct {
sync.Mutex
rawKey string
Name string
Tags []string
DataCenter string
stopped bool
stopCh chan struct{}
}
// Fetch queries the Consul API defined by the given client and returns a slice
// of CatalogService objects.
func (d *CatalogServices) Fetch(clients *ClientSet, opts *QueryOptions) (interface{}, *ResponseMetadata, error) {
d.Lock()
if d.stopped {
defer d.Unlock()
return nil, nil, ErrStopped
}
d.Unlock()
if opts == nil {
opts = &QueryOptions{}
}
consulOpts := opts.consulQueryOptions()
if d.DataCenter != "" {
consulOpts.Datacenter = d.DataCenter
}
consul, err := clients.Consul()
if err != nil {
return nil, nil, fmt.Errorf("catalog services: error getting client: %s", err)
}
var entries map[string][]string
var qm *api.QueryMeta
dataCh := make(chan struct{})
go func() {
log.Printf("[DEBUG] (%s) querying Consul with %+v", d.Display(), consulOpts)
entries, qm, err = consul.Catalog().Services(consulOpts)
close(dataCh)
}()
select {
case <-d.stopCh:
return nil, nil, ErrStopped
case <-dataCh:
}
if err != nil {
return nil, nil, fmt.Errorf("catalog services: error fetching: %s", err)
}
log.Printf("[DEBUG] (%s) Consul returned %d catalog services", d.Display(), len(entries))
var catalogServices []*CatalogService
for name, tags := range entries {
tags = deepCopyAndSortTags(tags)
catalogServices = append(catalogServices, &CatalogService{
Name: name,
Tags: ServiceTags(tags),
})
}
sort.Stable(CatalogServicesList(catalogServices))
rm := &ResponseMetadata{
LastIndex: qm.LastIndex,
LastContact: qm.LastContact,
}
return catalogServices, rm, nil
}
// CanShare reports whether this dependency is shareable.
func (d *CatalogServices) CanShare() bool {
return true
}
// HashCode returns a unique identifier.
func (d *CatalogServices) HashCode() string {
return fmt.Sprintf("CatalogServices|%s", d.rawKey)
}
// Display prints the human-friendly output.
func (d *CatalogServices) Display() string {
if d.rawKey == "" {
return `"services"`
}
return fmt.Sprintf(`"services(%s)"`, d.rawKey)
}
// Stop halts the dependency's fetch function.
func (d *CatalogServices) Stop() {
d.Lock()
defer d.Unlock()
if !d.stopped {
close(d.stopCh)
d.stopped = true
}
}
// ParseCatalogServices parses a string of the format @dc.
func ParseCatalogServices(s ...string) (*CatalogServices, error) {
switch len(s) {
case 0:
cs := &CatalogServices{
rawKey: "",
stopCh: make(chan struct{}),
}
return cs, nil
case 1:
dc := s[0]
re := regexp.MustCompile(`\A` +
`(@(?P<datacenter>[[:word:]\.\-]+))?` +
`\z`)
names := re.SubexpNames()
match := re.FindAllStringSubmatch(dc, -1)
if len(match) == 0 {
return nil, errors.New("invalid catalog service dependency format")
}
r := match[0]
m := map[string]string{}
for i, n := range r {
if names[i] != "" {
m[names[i]] = n
}
}
nd := &CatalogServices{
rawKey: dc,
DataCenter: m["datacenter"],
stopCh: make(chan struct{}),
}
return nd, nil
default:
return nil, fmt.Errorf("expected 0 or 1 arguments, got %d", len(s))
}
}
// Sorting
// CatalogServicesList is a sortable slice of CatalogService structs.
type CatalogServicesList []*CatalogService
func (s CatalogServicesList) Len() int { return len(s) }
func (s CatalogServicesList) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s CatalogServicesList) Less(i, j int) bool {
return s[i].Name < s[j].Name
}
@@ -0,0 +1,312 @@
package dependency
import (
"crypto/tls"
"crypto/x509"
"fmt"
"io/ioutil"
"log"
"net/http"
"sync"
consulapi "github.com/hashicorp/consul/api"
"github.com/hashicorp/go-cleanhttp"
vaultapi "github.com/hashicorp/vault/api"
)
// ClientSet is a collection of clients that dependencies use to communicate
// with remote services like Consul or Vault.
type ClientSet struct {
sync.RWMutex
vault *vaultClient
consul *consulClient
}
// consulClient is a wrapper around a real Consul API client.
type consulClient struct {
client *consulapi.Client
httpClient *http.Client
}
// vaultClient is a wrapper around a real Vault API client.
type vaultClient struct {
client *vaultapi.Client
httpClient *http.Client
}
// CreateConsulClientInput is used as input to the CreateConsulClient function.
type CreateConsulClientInput struct {
Address string
Token string
AuthEnabled bool
AuthUsername string
AuthPassword string
SSLEnabled bool
SSLVerify bool
SSLCert string
SSLKey string
SSLCACert string
}
// CreateVaultClientInput is used as input to the CreateVaultClient function.
type CreateVaultClientInput struct {
Address string
Token string
UnwrapToken bool
SSLEnabled bool
SSLVerify bool
SSLCert string
SSLKey string
SSLCACert string
}
// NewClientSet creates a new client set that is ready to accept clients.
func NewClientSet() *ClientSet {
return &ClientSet{}
}
// CreateConsulClient creates a new Consul API client from the given input.
func (c *ClientSet) CreateConsulClient(i *CreateConsulClientInput) error {
log.Printf("[INFO] (clients) creating consul/api client")
// Generate the default config
consulConfig := consulapi.DefaultConfig()
// Set the address
if i.Address != "" {
log.Printf("[DEBUG] (clients) setting consul address to %q", i.Address)
consulConfig.Address = i.Address
}
// Configure the token
if i.Token != "" {
log.Printf("[DEBUG] (clients) setting consul token")
consulConfig.Token = i.Token
}
// Add basic auth
if i.AuthEnabled {
log.Printf("[DEBUG] (clients) setting basic auth")
consulConfig.HttpAuth = &consulapi.HttpBasicAuth{
Username: i.AuthUsername,
Password: i.AuthPassword,
}
}
// This transport will attempt to keep connections open to the Consul server.
transport := cleanhttp.DefaultPooledTransport()
// Configure SSL
if i.SSLEnabled {
log.Printf("[DEBUG] (clients) enabling consul SSL")
consulConfig.Scheme = "https"
var tlsConfig tls.Config
// Custom certificate or certificate and key
if i.SSLCert != "" && i.SSLKey != "" {
cert, err := tls.LoadX509KeyPair(i.SSLCert, i.SSLKey)
if err != nil {
return fmt.Errorf("client set: consul: %s", err)
}
tlsConfig.Certificates = []tls.Certificate{cert}
} else if i.SSLCert != "" {
cert, err := tls.LoadX509KeyPair(i.SSLCert, i.SSLCert)
if err != nil {
return fmt.Errorf("client set: consul: %s", err)
}
tlsConfig.Certificates = []tls.Certificate{cert}
}
// Custom CA certificate
if i.SSLCACert != "" {
cacert, err := ioutil.ReadFile(i.SSLCACert)
if err != nil {
return fmt.Errorf("client set: consul: %s", err)
}
caCertPool := x509.NewCertPool()
caCertPool.AppendCertsFromPEM(cacert)
tlsConfig.RootCAs = caCertPool
}
// Construct all the certificates now
tlsConfig.BuildNameToCertificate()
// SSL verification
if !i.SSLVerify {
log.Printf("[WARN] (clients) disabling consul SSL verification")
tlsConfig.InsecureSkipVerify = true
}
// Save the TLS config on our transport
transport.TLSClientConfig = &tlsConfig
}
// Setup the new transport
consulConfig.HttpClient.Transport = transport
// Create the API client
client, err := consulapi.NewClient(consulConfig)
if err != nil {
return fmt.Errorf("client set: consul: %s", err)
}
// Save the data on ourselves
c.consul = &consulClient{
client: client,
httpClient: consulConfig.HttpClient,
}
return nil
}
// CreateVaultClient creates a new Vault API client from the given input.
func (c *ClientSet) CreateVaultClient(i *CreateVaultClientInput) error {
log.Printf("[INFO] (clients) creating vault/api client")
// Generate the default config
vaultConfig := vaultapi.DefaultConfig()
// Set the address
if i.Address != "" {
log.Printf("[DEBUG] (clients) setting vault address to %q", i.Address)
vaultConfig.Address = i.Address
}
// This transport will attempt to keep connections open to the Vault server.
transport := cleanhttp.DefaultPooledTransport()
// Configure SSL
if i.SSLEnabled {
log.Printf("[DEBUG] (clients) enabling vault SSL")
var tlsConfig tls.Config
// Custom certificate or certificate and key
if i.SSLCert != "" && i.SSLKey != "" {
cert, err := tls.LoadX509KeyPair(i.SSLCert, i.SSLKey)
if err != nil {
return fmt.Errorf("client set: vault: %s", err)
}
tlsConfig.Certificates = []tls.Certificate{cert}
} else if i.SSLCert != "" {
cert, err := tls.LoadX509KeyPair(i.SSLCert, i.SSLCert)
if err != nil {
return fmt.Errorf("client set: vault: %s", err)
}
tlsConfig.Certificates = []tls.Certificate{cert}
}
// Custom CA certificate
if i.SSLCACert != "" {
cacert, err := ioutil.ReadFile(i.SSLCACert)
if err != nil {
return fmt.Errorf("client set: vault: %s", err)
}
caCertPool := x509.NewCertPool()
caCertPool.AppendCertsFromPEM(cacert)
tlsConfig.RootCAs = caCertPool
}
// Construct all the certificates now
tlsConfig.BuildNameToCertificate()
// SSL verification
if !i.SSLVerify {
log.Printf("[WARN] (clients) disabling vault SSL verification")
tlsConfig.InsecureSkipVerify = true
}
// Save the TLS config on our transport
transport.TLSClientConfig = &tlsConfig
}
// Setup the new transport
vaultConfig.HttpClient.Transport = transport
// Create the client
client, err := vaultapi.NewClient(vaultConfig)
if err != nil {
return fmt.Errorf("client set: vault: %s", err)
}
// Set the token if given
if i.Token != "" {
log.Printf("[DEBUG] (clients) setting vault token")
client.SetToken(i.Token)
}
// Check if we are unwrapping
if i.UnwrapToken {
log.Printf("[INFO] (clients) unwrapping vault token")
secret, err := client.Logical().Unwrap(i.Token)
if err != nil {
return fmt.Errorf("client set: vault unwrap: %s", err)
}
if secret == nil {
return fmt.Errorf("client set: vault unwrap: no secret")
}
if secret.Auth == nil {
return fmt.Errorf("client set: vault unwrap: no secret auth")
}
if secret.Auth.ClientToken == "" {
return fmt.Errorf("client set: vault unwrap: no token returned")
}
client.SetToken(secret.Auth.ClientToken)
}
// Save the data on ourselves
c.vault = &vaultClient{
client: client,
httpClient: vaultConfig.HttpClient,
}
return nil
}
// Consul returns the Consul client for this clientset, or an error if no
// Consul client has been set.
func (c *ClientSet) Consul() (*consulapi.Client, error) {
c.RLock()
defer c.RUnlock()
if c.consul == nil {
return nil, fmt.Errorf("clientset: missing consul client")
}
cp := new(consulapi.Client)
*cp = *c.consul.client
return cp, nil
}
// Vault returns the Vault client for this clientset, or an error if no
// Vault client has been set.
func (c *ClientSet) Vault() (*vaultapi.Client, error) {
c.RLock()
defer c.RUnlock()
if c.vault == nil {
return nil, fmt.Errorf("clientset: missing vault client")
}
cp := new(vaultapi.Client)
*cp = *c.vault.client
return cp, nil
}
// Stop closes all idle connections for any attached clients.
func (c *ClientSet) Stop() {
c.Lock()
defer c.Unlock()
if c.consul != nil {
c.consul.httpClient.Transport.(*http.Transport).CloseIdleConnections()
}
if c.vault != nil {
c.vault.httpClient.Transport.(*http.Transport).CloseIdleConnections()
}
}
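Both CreateConsulClient and CreateVaultClient assemble their tls.Config the same way: an optional custom CA pool plus a verification toggle. A minimal standalone sketch of just that piece (`tlsConfigFor` is a hypothetical helper; certificate-file loading and client certs are omitted):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
)

// tlsConfigFor builds a TLS config with an optional custom CA bundle.
// Disabling verification sets InsecureSkipVerify, as the code above warns.
func tlsConfigFor(caPEM []byte, verify bool) (*tls.Config, error) {
	cfg := &tls.Config{InsecureSkipVerify: !verify}
	if len(caPEM) > 0 {
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(caPEM) {
			return nil, fmt.Errorf("no certificates found in CA bundle")
		}
		cfg.RootCAs = pool
	}
	return cfg, nil
}

func main() {
	cfg, err := tlsConfigFor(nil, false)
	fmt.Println(err == nil, cfg.InsecureSkipVerify) // true true
}
```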
@@ -0,0 +1,118 @@
package dependency
import (
"fmt"
"log"
"sort"
"sync"
"time"
)
var sleepTime = 15 * time.Second
// Datacenters is the dependency to query all datacenters
type Datacenters struct {
sync.Mutex
rawKey string
stopped bool
stopCh chan struct{}
}
// Fetch queries the Consul API defined by the given client and returns a slice
// of strings representing the datacenters.
func (d *Datacenters) Fetch(clients *ClientSet, opts *QueryOptions) (interface{}, *ResponseMetadata, error) {
d.Lock()
if d.stopped {
defer d.Unlock()
return nil, nil, ErrStopped
}
d.Unlock()
if opts == nil {
opts = &QueryOptions{}
}
log.Printf("[DEBUG] (%s) querying Consul with %+v", d.Display(), opts)
// This is pretty hacky, but the datacenters endpoint does not support
// blocking queries, so we are going to "fake it until we make it". When we
// first query, the LastIndex will be "0", meaning we should immediately
// return data, but future calls will include a LastIndex. If we have a
// LastIndex in the query metadata, sleep for 15 seconds before asking Consul
// again.
//
// This is probably okay given the frequency in which datacenters actually
// change, but is technically not edge-triggering.
if opts.WaitIndex != 0 {
log.Printf("[DEBUG] (%s) pretending to long-poll", d.Display())
select {
case <-d.stopCh:
log.Printf("[DEBUG] (%s) received interrupt", d.Display())
return nil, nil, ErrStopped
case <-time.After(sleepTime):
}
}
consul, err := clients.Consul()
if err != nil {
return nil, nil, fmt.Errorf("datacenters: error getting client: %s", err)
}
catalog := consul.Catalog()
result, err := catalog.Datacenters()
if err != nil {
return nil, nil, fmt.Errorf("datacenters: error fetching: %s", err)
}
log.Printf("[DEBUG] (%s) Consul returned %d datacenters", d.Display(), len(result))
sort.Strings(result)
return respWithMetadata(result)
}
// CanShare reports whether this dependency is shareable.
func (d *Datacenters) CanShare() bool {
return true
}
// HashCode returns the hash code for this dependency.
func (d *Datacenters) HashCode() string {
return fmt.Sprintf("Datacenters|%s", d.rawKey)
}
// Display returns a string that should be displayed to the user in output (for
// example).
func (d *Datacenters) Display() string {
if d.rawKey == "" {
return `"datacenters"`
}
return fmt.Sprintf(`"datacenters(%s)"`, d.rawKey)
}
// Stop terminates this dependency's execution early.
func (d *Datacenters) Stop() {
d.Lock()
defer d.Unlock()
if !d.stopped {
close(d.stopCh)
d.stopped = true
}
}
// ParseDatacenters creates a new datacenter dependency.
func ParseDatacenters(s ...string) (*Datacenters, error) {
switch len(s) {
case 0:
dcs := &Datacenters{
rawKey: "",
stopCh: make(chan struct{}),
}
return dcs, nil
default:
return nil, fmt.Errorf("expected 0 arguments, got %d", len(s))
}
}
@@ -0,0 +1,115 @@
package dependency
import (
"errors"
"fmt"
"sort"
"time"
consulapi "github.com/hashicorp/consul/api"
)
// ErrStopped is a special error that is returned when a dependency is
// prematurely stopped, usually due to a configuration reload or a process
// interrupt.
var ErrStopped = errors.New("dependency stopped")
// Dependency is an interface for a dependency that Consul Template is capable
// of watching.
type Dependency interface {
Fetch(*ClientSet, *QueryOptions) (interface{}, *ResponseMetadata, error)
CanShare() bool
HashCode() string
Display() string
Stop()
}
// FetchError is a special kind of error returned by the Fetch method that
// contains additional metadata which informs the caller how to respond. This
// error implements the standard Error interface, so it can be passed as a
// regular error down the stack.
type FetchError struct {
originalError error
shouldExit bool
}
func (e *FetchError) Error() string {
return e.originalError.Error()
}
func (e *FetchError) OriginalError() error {
return e.originalError
}
func (e *FetchError) ShouldExit() bool {
return e.shouldExit
}
func ErrWithExit(err error) *FetchError {
return &FetchError{
originalError: err,
shouldExit: true,
}
}
func ErrWithExitf(s string, i ...interface{}) *FetchError {
return ErrWithExit(fmt.Errorf(s, i...))
}
// ServiceTags is a slice of tags assigned to a Service
type ServiceTags []string
// Contains returns true if the given tag exists in the ServiceTags slice.
func (t ServiceTags) Contains(s string) bool {
for _, v := range t {
if v == s {
return true
}
}
return false
}
// QueryOptions is a list of options to send with the query. These options are
// client-agnostic, and the dependency determines which, if any, of the options
// to use.
type QueryOptions struct {
AllowStale bool
WaitIndex uint64
WaitTime time.Duration
}
// consulQueryOptions converts the query options to Consul API-ready query options.
func (r *QueryOptions) consulQueryOptions() *consulapi.QueryOptions {
return &consulapi.QueryOptions{
AllowStale: r.AllowStale,
WaitIndex: r.WaitIndex,
WaitTime: r.WaitTime,
}
}
// ResponseMetadata is a struct that contains metadata about the response. This
// is returned from a Fetch function call.
type ResponseMetadata struct {
LastIndex uint64
LastContact time.Duration
}
// deepCopyAndSortTags deep copies the tags in the given string slice and then
// sorts and returns the copied result.
func deepCopyAndSortTags(tags []string) []string {
newTags := make([]string, 0, len(tags))
for _, tag := range tags {
newTags = append(newTags, tag)
}
sort.Strings(newTags)
return newTags
}
// respWithMetadata is a short wrapper to return the given interface with fake
// response metadata for non-Consul dependencies.
func respWithMetadata(i interface{}) (interface{}, *ResponseMetadata, error) {
return i, &ResponseMetadata{
LastContact: 0,
LastIndex: uint64(time.Now().Unix()),
}, nil
}
@@ -0,0 +1,135 @@
package dependency
import (
"errors"
"fmt"
"io/ioutil"
"log"
"os"
"sync"
"time"
)
// File represents a local file dependency.
type File struct {
sync.Mutex
mutex sync.RWMutex
rawKey string
lastStat os.FileInfo
stopped bool
stopCh chan struct{}
}
// Fetch retrieves this dependency and returns the result or any errors that
// occur in the process.
func (d *File) Fetch(clients *ClientSet, opts *QueryOptions) (interface{}, *ResponseMetadata, error) {
d.Lock()
if d.stopped {
defer d.Unlock()
return nil, nil, ErrStopped
}
d.Unlock()
var err error
var newStat os.FileInfo
var data []byte
dataCh := make(chan struct{})
go func() {
log.Printf("[DEBUG] (%s) querying file", d.Display())
newStat, err = d.watch()
close(dataCh)
}()
select {
case <-d.stopCh:
return nil, nil, ErrStopped
case <-dataCh:
}
if err != nil {
return "", nil, fmt.Errorf("file: error watching: %s", err)
}
d.mutex.Lock()
defer d.mutex.Unlock()
d.lastStat = newStat
if data, err = ioutil.ReadFile(d.rawKey); err == nil {
return respWithMetadata(string(data))
}
return nil, nil, fmt.Errorf("file: error reading: %s", err)
}
// CanShare reports whether this dependency is shareable.
func (d *File) CanShare() bool {
return false
}
// HashCode returns a unique identifier.
func (d *File) HashCode() string {
return fmt.Sprintf("File|%s", d.rawKey)
}
// Display prints the human-friendly output.
func (d *File) Display() string {
return fmt.Sprintf(`"file(%s)"`, d.rawKey)
}
// Stop halts the dependency's fetch function.
func (d *File) Stop() {
d.Lock()
defer d.Unlock()
if !d.stopped {
close(d.stopCh)
d.stopped = true
}
}
// watch watches the file for changes.
func (d *File) watch() (os.FileInfo, error) {
for {
stat, err := os.Stat(d.rawKey)
if err != nil {
return nil, err
}
changed := func(d *File, stat os.FileInfo) bool {
d.mutex.RLock()
defer d.mutex.RUnlock()
if d.lastStat == nil {
return true
}
if d.lastStat.Size() != stat.Size() {
return true
}
if d.lastStat.ModTime() != stat.ModTime() {
return true
}
return false
}(d, stat)
if changed {
return stat, nil
}
time.Sleep(3 * time.Second)
}
}
// ParseFile creates a file dependency from the given path.
func ParseFile(s string) (*File, error) {
if len(s) == 0 {
return nil, errors.New("cannot specify empty file dependency")
}
kd := &File{
rawKey: s,
stopCh: make(chan struct{}),
}
return kd, nil
}
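File.watch polls os.Stat and treats a size or modification-time change as a new version of the file. A self-contained sketch of that heuristic (`changed` and `demo` are illustrative helpers; the real loop sleeps between polls):

```go
package main

import (
	"fmt"
	"os"
)

// changed reports whether a file differs from a previously recorded stat,
// using the same size/modification-time heuristic as File.watch above.
func changed(last, cur os.FileInfo) bool {
	if last == nil {
		return true
	}
	return last.Size() != cur.Size() || !last.ModTime().Equal(cur.ModTime())
}

// demo writes a temp file, grows it, and compares stats before and after.
func demo() (grew, same bool, err error) {
	f, err := os.CreateTemp("", "watch")
	if err != nil {
		return false, false, err
	}
	defer os.Remove(f.Name())
	defer f.Close()
	f.WriteString("one")
	before, err := os.Stat(f.Name())
	if err != nil {
		return false, false, err
	}
	f.WriteString(" two") // grow the file so Size() differs
	after, err := os.Stat(f.Name())
	if err != nil {
		return false, false, err
	}
	return changed(before, after), changed(after, after), nil
}

func main() {
	grew, same, err := demo()
	fmt.Println(grew, same, err) // true false <nil>
}
```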
@@ -0,0 +1,414 @@
package dependency
import (
"encoding/gob"
"errors"
"fmt"
"log"
"regexp"
"sort"
"strings"
"sync"
"github.com/hashicorp/consul/api"
"github.com/hashicorp/go-multierror"
)
func init() {
gob.Register([]*HealthService{})
}
const (
HealthAny = "any"
HealthPassing = "passing"
HealthWarning = "warning"
HealthUnknown = "unknown"
HealthCritical = "critical"
HealthMaint = "maintenance"
NodeMaint = "_node_maintenance"
ServiceMaint = "_service_maintenance:"
)
// HealthService is a service entry in Consul.
type HealthService struct {
Node string
NodeAddress string
Address string
ID string
Name string
Tags ServiceTags
Checks []*api.HealthCheck
Status string
Port uint64
}
// HealthServices is the representation of a requested health service
// dependency from inside a template.
type HealthServices struct {
sync.Mutex
rawKey string
Name string
Tag string
DataCenter string
StatusFilter ServiceStatusFilter
stopped bool
stopCh chan struct{}
}
// Fetch queries the Consul API defined by the given client and returns a slice
// of HealthService objects.
func (d *HealthServices) Fetch(clients *ClientSet, opts *QueryOptions) (interface{}, *ResponseMetadata, error) {
d.Lock()
if d.stopped {
defer d.Unlock()
return nil, nil, ErrStopped
}
d.Unlock()
if opts == nil {
opts = &QueryOptions{}
}
consulOpts := opts.consulQueryOptions()
if d.DataCenter != "" {
consulOpts.Datacenter = d.DataCenter
}
onlyHealthy := false
if d.StatusFilter == nil {
onlyHealthy = true
}
consul, err := clients.Consul()
if err != nil {
return nil, nil, fmt.Errorf("health services: error getting client: %s", err)
}
var entries []*api.ServiceEntry
var qm *api.QueryMeta
dataCh := make(chan struct{})
go func() {
log.Printf("[DEBUG] (%s) querying consul with %+v", d.Display(), consulOpts)
entries, qm, err = consul.Health().Service(d.Name, d.Tag, onlyHealthy, consulOpts)
close(dataCh)
}()
select {
case <-d.stopCh:
return nil, nil, ErrStopped
case <-dataCh:
}
if err != nil {
return nil, nil, fmt.Errorf("health services: error fetching: %s", err)
}
log.Printf("[DEBUG] (%s) Consul returned %d services", d.Display(), len(entries))
services := make([]*HealthService, 0, len(entries))
for _, entry := range entries {
// Get the status of this service from its checks.
status, err := statusFromChecks(entry.Checks)
if err != nil {
return nil, nil, fmt.Errorf("health services: "+
"error getting status from checks: %s", err)
}
// If we are not checking only healthy services, filter out services that do
// not match the given filter.
if d.StatusFilter != nil && !d.StatusFilter.Accept(status) {
continue
}
// Sort the tags.
tags := deepCopyAndSortTags(entry.Service.Tags)
// Get the address of the service, falling back to the address of the node.
var address string
if entry.Service.Address != "" {
address = entry.Service.Address
} else {
address = entry.Node.Address
}
services = append(services, &HealthService{
Node: entry.Node.Node,
NodeAddress: entry.Node.Address,
Address: address,
ID: entry.Service.ID,
Name: entry.Service.Service,
Tags: tags,
Status: status,
Checks: entry.Checks,
Port: uint64(entry.Service.Port),
})
}
log.Printf("[DEBUG] (%s) %d services after health check status filtering", d.Display(), len(services))
sort.Stable(HealthServiceList(services))
rm := &ResponseMetadata{
LastIndex: qm.LastIndex,
LastContact: qm.LastContact,
}
return services, rm, nil
}
// CanShare reports whether this dependency is shareable.
func (d *HealthServices) CanShare() bool {
return true
}
// HashCode returns a unique identifier.
func (d *HealthServices) HashCode() string {
return fmt.Sprintf("HealthServices|%s", d.rawKey)
}
// Display prints the human-friendly output.
func (d *HealthServices) Display() string {
return fmt.Sprintf(`"service(%s)"`, d.rawKey)
}
// Stop halts the dependency's fetch function.
func (d *HealthServices) Stop() {
d.Lock()
defer d.Unlock()
if !d.stopped {
close(d.stopCh)
d.stopped = true
}
}
// ParseHealthServices processes the incoming strings to build a service dependency.
//
// Supported arguments
// ParseHealthServices("service_id")
// ParseHealthServices("service_id", "health_check")
//
// Where service_id is in the format (tag.)service(@datacenter)
// and health_check is either "any" or "passing".
//
// If no health_check is provided then it's the same as "passing".
func ParseHealthServices(s ...string) (*HealthServices, error) {
var query string
var filter ServiceStatusFilter
var err error
switch len(s) {
case 1:
query = s[0]
filter, err = NewServiceStatusFilter("")
if err != nil {
return nil, err
}
case 2:
query = s[0]
filter, err = NewServiceStatusFilter(s[1])
if err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("expected 1 or 2 arguments, got %d", len(s))
}
if len(query) == 0 {
return nil, errors.New("cannot specify empty health service dependency")
}
re := regexp.MustCompile(`\A` +
`((?P<tag>[[:word:]\-.]+)\.)?` +
`((?P<name>[[:word:]\-/_]+))` +
`(@(?P<datacenter>[[:word:]\.\-]+))?(:(?P<port>[0-9]+))?` +
`\z`)
names := re.SubexpNames()
match := re.FindAllStringSubmatch(query, -1)
if len(match) == 0 {
return nil, errors.New("invalid health service dependency format")
}
r := match[0]
m := map[string]string{}
for i, n := range r {
if names[i] != "" {
m[names[i]] = n
}
}
tag, name, datacenter, port := m["tag"], m["name"], m["datacenter"], m["port"]
if name == "" {
return nil, errors.New("name part is required")
}
if port != "" {
log.Printf("[WARN] specifying a port in a 'service' query is not "+
"supported - please remove the port from the query %q", query)
}
var key string
if filter == nil {
key = query
} else {
key = fmt.Sprintf("%s %s", query, filter)
}
sd := &HealthServices{
rawKey: key,
Name: name,
Tag: tag,
DataCenter: datacenter,
StatusFilter: filter,
stopCh: make(chan struct{}),
}
return sd, nil
}
// statusFromChecks accepts a list of checks and returns the most likely status
// given those checks. Any "critical" statuses will automatically mark the
// service as critical. After that, any "unknown" statuses will mark as
// "unknown". If any warning checks exist, the status will be marked as
// "warning", and finally "passing". If there are no checks, the service will be
// marked as "passing".
func statusFromChecks(checks []*api.HealthCheck) (string, error) {
var passing, warning, unknown, critical, maintenance bool
for _, check := range checks {
if check.CheckID == NodeMaint || strings.HasPrefix(check.CheckID, ServiceMaint) {
maintenance = true
continue
}
switch check.Status {
case "passing":
passing = true
case "warning":
warning = true
case "unknown":
unknown = true
case "critical":
critical = true
default:
return "", fmt.Errorf("unknown status: %q", check.Status)
}
}
switch {
case maintenance:
return HealthMaint, nil
case critical:
return HealthCritical, nil
case unknown:
return HealthUnknown, nil
case warning:
return HealthWarning, nil
case passing:
return HealthPassing, nil
default:
// No checks?
return HealthPassing, nil
}
}
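statusFromChecks implements a strict precedence over check statuses. A condensed standalone sketch of the same ordering (`worstStatus` is an illustrative name; the real function also rejects unrecognized status strings and detects maintenance via check IDs):

```go
package main

import "fmt"

// worstStatus collapses check statuses into one service status using the
// precedence documented above: maintenance beats critical, which beats
// unknown, then warning; anything else (including no checks) is passing.
func worstStatus(statuses []string) string {
	seen := make(map[string]bool, len(statuses))
	for _, s := range statuses {
		seen[s] = true
	}
	for _, s := range []string{"maintenance", "critical", "unknown", "warning"} {
		if seen[s] {
			return s
		}
	}
	return "passing"
}

func main() {
	fmt.Println(worstStatus([]string{"passing", "warning", "critical"})) // critical
	fmt.Println(worstStatus(nil))                                        // passing
}
```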
// ServiceStatusFilter is used to specify a list of service statuses that you want to filter by.
type ServiceStatusFilter []string
// String returns the string representation of this status filter
func (f ServiceStatusFilter) String() string {
return fmt.Sprintf("[%s]", strings.Join(f, ","))
}
// NewServiceStatusFilter creates a status filter from the given string in the
// format `[key[,key[,key...]]]`. Each status is split on the comma character
// and must match one of the valid status names.
//
// If the empty string is given, it is assumed only "passing" statuses are to
// be returned.
//
// If the user specifies "any" with other keys, an error will be returned.
func NewServiceStatusFilter(s string) (ServiceStatusFilter, error) {
// If no statuses were given, use the default status of "all passing".
if len(s) == 0 || len(strings.TrimSpace(s)) == 0 {
return nil, nil
}
var errs *multierror.Error
var hasAny bool
raw := strings.Split(s, ",")
trimmed := make(ServiceStatusFilter, 0, len(raw))
for _, r := range raw {
trim := strings.TrimSpace(r)
// Ignore the empty string.
if len(trim) == 0 {
continue
}
// Record the case where we have the "any" status - it will be used later.
if trim == HealthAny {
hasAny = true
}
// Validate that the service is actually a valid name.
switch trim {
case HealthAny, HealthUnknown, HealthPassing, HealthWarning, HealthCritical, HealthMaint:
trimmed = append(trimmed, trim)
default:
errs = multierror.Append(errs, fmt.Errorf("service filter: invalid filter %q", trim))
}
}
// If the user specified "any" with additional keys, that is invalid.
if hasAny && len(trimmed) != 1 {
errs = multierror.Append(errs, fmt.Errorf("service filter: cannot specify extra keys when using %q", "any"))
}
return trimmed, errs.ErrorOrNil()
}
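The parsing rules above (split on commas, trim whitespace, drop empties, and reject "any" combined with other keys) can be sketched as a standalone function. The status names are assumptions standing in for the package's `Health*` constants, and the multierror accumulation is collapsed to a single error for brevity:

```go
package main

import (
	"fmt"
	"strings"
)

// parseStatusFilter is a minimal sketch of NewServiceStatusFilter's
// validation logic, returning on the first invalid entry.
func parseStatusFilter(s string) ([]string, error) {
	valid := map[string]bool{
		"any": true, "unknown": true, "passing": true,
		"warning": true, "critical": true, "maintenance": true,
	}
	if strings.TrimSpace(s) == "" {
		return nil, nil // default: "all passing"
	}
	var out []string
	hasAny := false
	for _, r := range strings.Split(s, ",") {
		trim := strings.TrimSpace(r)
		if trim == "" {
			continue // ignore empty entries like "passing,,warning"
		}
		if !valid[trim] {
			return nil, fmt.Errorf("service filter: invalid filter %q", trim)
		}
		if trim == "any" {
			hasAny = true
		}
		out = append(out, trim)
	}
	// "any" must stand alone.
	if hasAny && len(out) != 1 {
		return nil, fmt.Errorf("service filter: cannot specify extra keys when using %q", "any")
	}
	return out, nil
}

func main() {
	f, _ := parseStatusFilter("passing, warning")
	fmt.Println(f) // [passing warning]
}
```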
// Accept returns true if the given status passes this filter.
func (f ServiceStatusFilter) Accept(s string) bool {
// If the any filter is activated, pass everything.
if f.any() {
return true
}
// Iterate over each status and see if the given status is any of those
// statuses.
for _, status := range f {
if status == s {
return true
}
}
return false
}
// any is a helper method to determine if this is an "any" service status
// filter. If "any" was given, it must be the only item in the list.
func (f ServiceStatusFilter) any() bool {
return len(f) == 1 && f[0] == HealthAny
}
// HealthServiceList is a sortable slice of HealthService
type HealthServiceList []*HealthService
// Len, Swap, and Less are used to implement sort.Interface.
func (s HealthServiceList) Len() int { return len(s) }
func (s HealthServiceList) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s HealthServiceList) Less(i, j int) bool {
if s[i].Node < s[j].Node {
return true
} else if s[i].Node == s[j].Node {
return s[i].ID <= s[j].ID
}
return false
}


@@ -0,0 +1,171 @@
package dependency
import (
"errors"
"fmt"
"log"
"regexp"
"sync"
api "github.com/hashicorp/consul/api"
)
// StoreKey represents a single item in Consul's KV store.
type StoreKey struct {
sync.Mutex
rawKey string
Path string
DataCenter string
defaultValue string
defaultGiven bool
stopped bool
stopCh chan struct{}
}
// Fetch queries the Consul API defined by the given client and returns string
// of the value to Path.
func (d *StoreKey) Fetch(clients *ClientSet, opts *QueryOptions) (interface{}, *ResponseMetadata, error) {
d.Lock()
if d.stopped {
defer d.Unlock()
return nil, nil, ErrStopped
}
d.Unlock()
if opts == nil {
opts = &QueryOptions{}
}
consulOpts := opts.consulQueryOptions()
if d.DataCenter != "" {
consulOpts.Datacenter = d.DataCenter
}
consul, err := clients.Consul()
if err != nil {
return nil, nil, fmt.Errorf("store key: error getting client: %s", err)
}
var pair *api.KVPair
var qm *api.QueryMeta
dataCh := make(chan struct{})
go func() {
log.Printf("[DEBUG] (%s) querying consul with %+v", d.Display(), consulOpts)
pair, qm, err = consul.KV().Get(d.Path, consulOpts)
close(dataCh)
}()
select {
case <-d.stopCh:
return nil, nil, ErrStopped
case <-dataCh:
}
if err != nil {
return "", nil, fmt.Errorf("store key: error fetching: %s", err)
}
rm := &ResponseMetadata{
LastIndex: qm.LastIndex,
LastContact: qm.LastContact,
}
if pair == nil {
if d.defaultGiven {
log.Printf("[DEBUG] (%s) Consul returned no data (using default of %q)",
d.Display(), d.defaultValue)
return d.defaultValue, rm, nil
}
log.Printf("[WARN] (%s) Consul returned no data (does the path exist?)",
d.Display())
return "", rm, nil
}
log.Printf("[DEBUG] (%s) Consul returned %s", d.Display(), pair.Value)
return string(pair.Value), rm, nil
}
// SetDefault is used to set the default value.
func (d *StoreKey) SetDefault(s string) {
d.defaultGiven = true
d.defaultValue = s
}
// CanShare returns a boolean if this dependency is shareable.
func (d *StoreKey) CanShare() bool {
return true
}
// HashCode returns a unique identifier.
func (d *StoreKey) HashCode() string {
if d.defaultGiven {
return fmt.Sprintf("StoreKey|%s|%s", d.rawKey, d.defaultValue)
}
return fmt.Sprintf("StoreKey|%s", d.rawKey)
}
// Display prints the human-friendly output.
func (d *StoreKey) Display() string {
if d.defaultGiven {
return fmt.Sprintf(`"key_or_default(%s, %q)"`, d.rawKey, d.defaultValue)
}
return fmt.Sprintf(`"key(%s)"`, d.rawKey)
}
// Stop halts the dependency's fetch function.
func (d *StoreKey) Stop() {
d.Lock()
defer d.Unlock()
if !d.stopped {
close(d.stopCh)
d.stopped = true
}
}
// ParseStoreKey parses a string of the format a(/b(/c...))(@datacenter)
func ParseStoreKey(s string) (*StoreKey, error) {
if len(s) == 0 {
return nil, errors.New("cannot specify empty key dependency")
}
re := regexp.MustCompile(`\A` +
`(?P<key>[^@]+)` +
`(@(?P<datacenter>.+))?` +
`\z`)
names := re.SubexpNames()
match := re.FindAllStringSubmatch(s, -1)
if len(match) == 0 {
return nil, errors.New("invalid key dependency format")
}
r := match[0]
m := map[string]string{}
for i, n := range r {
if names[i] != "" {
m[names[i]] = n
}
}
key, datacenter := m["key"], m["datacenter"]
if key == "" {
return nil, errors.New("key part is required")
}
kd := &StoreKey{
rawKey: s,
Path: key,
DataCenter: datacenter,
stopCh: make(chan struct{}),
}
return kd, nil
}
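The named-capture technique used above (zipping the first match's groups with `SubexpNames` into a map) can be shown in isolation. A self-contained sketch of parsing the `key(@datacenter)` format:

```go
package main

import (
	"fmt"
	"regexp"
)

// parseKey extracts the key and optional datacenter from a string of
// the form "key" or "key@datacenter", using named capture groups.
func parseKey(s string) (key, datacenter string, ok bool) {
	re := regexp.MustCompile(`\A(?P<key>[^@]+)(@(?P<datacenter>.+))?\z`)
	match := re.FindStringSubmatch(s)
	if match == nil {
		return "", "", false
	}
	m := map[string]string{}
	// SubexpNames()[i] names the i-th capture group ("" for unnamed ones).
	for i, name := range re.SubexpNames() {
		if name != "" {
			m[name] = match[i]
		}
	}
	return m["key"], m["datacenter"], m["key"] != ""
}

func main() {
	k, dc, _ := parseKey("service/config/port@us-east-1")
	fmt.Println(k, dc) // service/config/port us-east-1
}
```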


@@ -0,0 +1,184 @@
package dependency
import (
"encoding/gob"
"errors"
"fmt"
"log"
"regexp"
"strings"
"sync"
"github.com/hashicorp/consul/api"
)
func init() {
gob.Register([]*KeyPair{})
}
// KeyPair is a simple Key-Value pair
type KeyPair struct {
Path string
Key string
Value string
// Lesser-used, but still valuable keys from api.KV
CreateIndex uint64
ModifyIndex uint64
LockIndex uint64
Flags uint64
Session string
}
// StoreKeyPrefix is the representation of a requested key prefix dependency
// from inside a template.
type StoreKeyPrefix struct {
sync.Mutex
rawKey string
Prefix string
DataCenter string
stopped bool
stopCh chan struct{}
}
// Fetch queries the Consul API defined by the given client and returns a slice
// of KeyPair objects
func (d *StoreKeyPrefix) Fetch(clients *ClientSet, opts *QueryOptions) (interface{}, *ResponseMetadata, error) {
d.Lock()
if d.stopped {
defer d.Unlock()
return nil, nil, ErrStopped
}
d.Unlock()
if opts == nil {
opts = &QueryOptions{}
}
consulOpts := opts.consulQueryOptions()
if d.DataCenter != "" {
consulOpts.Datacenter = d.DataCenter
}
consul, err := clients.Consul()
if err != nil {
return nil, nil, fmt.Errorf("store key prefix: error getting client: %s", err)
}
var prefixes api.KVPairs
var qm *api.QueryMeta
dataCh := make(chan struct{})
go func() {
log.Printf("[DEBUG] (%s) querying consul with %+v", d.Display(), consulOpts)
prefixes, qm, err = consul.KV().List(d.Prefix, consulOpts)
close(dataCh)
}()
select {
case <-d.stopCh:
return nil, nil, ErrStopped
case <-dataCh:
}
if err != nil {
return nil, nil, fmt.Errorf("store key prefix: error fetching: %s", err)
}
log.Printf("[DEBUG] (%s) Consul returned %d key pairs", d.Display(), len(prefixes))
keyPairs := make([]*KeyPair, 0, len(prefixes))
for _, pair := range prefixes {
key := strings.TrimPrefix(pair.Key, d.Prefix)
key = strings.TrimLeft(key, "/")
keyPairs = append(keyPairs, &KeyPair{
Path: pair.Key,
Key: key,
Value: string(pair.Value),
CreateIndex: pair.CreateIndex,
ModifyIndex: pair.ModifyIndex,
LockIndex: pair.LockIndex,
Flags: pair.Flags,
Session: pair.Session,
})
}
rm := &ResponseMetadata{
LastIndex: qm.LastIndex,
LastContact: qm.LastContact,
}
return keyPairs, rm, nil
}
// CanShare returns a boolean if this dependency is shareable.
func (d *StoreKeyPrefix) CanShare() bool {
return true
}
// HashCode returns a unique identifier.
func (d *StoreKeyPrefix) HashCode() string {
return fmt.Sprintf("StoreKeyPrefix|%s", d.rawKey)
}
// Display prints the human-friendly output.
func (d *StoreKeyPrefix) Display() string {
return fmt.Sprintf(`"storeKeyPrefix(%s)"`, d.rawKey)
}
// Stop halts the dependency's fetch function.
func (d *StoreKeyPrefix) Stop() {
d.Lock()
defer d.Unlock()
if !d.stopped {
close(d.stopCh)
d.stopped = true
}
}
// ParseStoreKeyPrefix parses a string of the format a(/b(/c...))
func ParseStoreKeyPrefix(s string) (*StoreKeyPrefix, error) {
// a(/b(/c))(@datacenter)
re := regexp.MustCompile(`\A` +
`(?P<prefix>[[:word:],\.\:\-\/]+)?` +
`(@(?P<datacenter>[[:word:]\.\-]+))?` +
`\z`)
names := re.SubexpNames()
match := re.FindAllStringSubmatch(s, -1)
if len(match) == 0 {
return nil, errors.New("invalid key prefix dependency format")
}
r := match[0]
m := map[string]string{}
for i, n := range r {
if names[i] != "" {
m[names[i]] = n
}
}
prefix, datacenter := m["prefix"], m["datacenter"]
// Empty prefix or nil prefix should default to "/"
if len(prefix) == 0 {
prefix = "/"
}
// Remove leading slash
if len(prefix) > 1 && prefix[0] == '/' {
prefix = prefix[1:]
}
kpd := &StoreKeyPrefix{
rawKey: s,
Prefix: prefix,
DataCenter: datacenter,
stopCh: make(chan struct{}),
}
return kpd, nil
}
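The prefix normalization above applies two rules: an empty prefix defaults to `"/"`, and a leading slash on any longer prefix is stripped. A minimal sketch:

```go
package main

import "fmt"

// normalizePrefix mirrors ParseStoreKeyPrefix's handling of the parsed
// prefix: default empty to "/", strip a leading slash otherwise.
func normalizePrefix(prefix string) string {
	if len(prefix) == 0 {
		return "/"
	}
	if len(prefix) > 1 && prefix[0] == '/' {
		return prefix[1:]
	}
	return prefix
}

func main() {
	fmt.Println(normalizePrefix(""))          // /
	fmt.Println(normalizePrefix("/services")) // services
	fmt.Println(normalizePrefix("services"))  // services
}
```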


@@ -0,0 +1,126 @@
package dependency
import (
"fmt"
"sync"
"time"
)
// Test is a special dependency that does not actually speak to a server.
type Test struct {
Name string
}
func (d *Test) Fetch(clients *ClientSet, opts *QueryOptions) (interface{}, *ResponseMetadata, error) {
time.Sleep(10 * time.Millisecond)
data := "this is some data"
rm := &ResponseMetadata{LastIndex: 1}
return data, rm, nil
}
func (d *Test) CanShare() bool {
return true
}
func (d *Test) HashCode() string {
return fmt.Sprintf("Test|%s", d.Name)
}
func (d *Test) Display() string { return "fakedep" }
func (d *Test) Stop() {}
// TestStale is a special dependency that can be used to test what happens when
// stale data is permitted.
type TestStale struct {
Name string
}
// Fetch is used to implement the dependency interface.
func (d *TestStale) Fetch(clients *ClientSet, opts *QueryOptions) (interface{}, *ResponseMetadata, error) {
time.Sleep(10 * time.Millisecond)
if opts == nil {
opts = &QueryOptions{}
}
if opts.AllowStale {
data := "this is some stale data"
rm := &ResponseMetadata{LastIndex: 1, LastContact: 50 * time.Millisecond}
return data, rm, nil
} else {
data := "this is some fresh data"
rm := &ResponseMetadata{LastIndex: 1}
return data, rm, nil
}
}
func (d *TestStale) CanShare() bool {
return true
}
func (d *TestStale) HashCode() string {
return fmt.Sprintf("TestStale|%s", d.Name)
}
func (d *TestStale) Display() string { return "fakedep" }
func (d *TestStale) Stop() {}
// TestFetchError is a special dependency that returns an error while fetching.
type TestFetchError struct {
Name string
}
func (d *TestFetchError) Fetch(clients *ClientSet, opts *QueryOptions) (interface{}, *ResponseMetadata, error) {
time.Sleep(10 * time.Millisecond)
return nil, nil, fmt.Errorf("failed to contact server")
}
func (d *TestFetchError) CanShare() bool {
return true
}
func (d *TestFetchError) HashCode() string {
return fmt.Sprintf("TestFetchError|%s", d.Name)
}
func (d *TestFetchError) Display() string { return "fakedep" }
func (d *TestFetchError) Stop() {}
// TestRetry is a special dependency that errors on the first fetch and
// succeeds on subsequent fetches.
type TestRetry struct {
sync.Mutex
Name string
retried bool
}
func (d *TestRetry) Fetch(clients *ClientSet, opts *QueryOptions) (interface{}, *ResponseMetadata, error) {
time.Sleep(10 * time.Millisecond)
d.Lock()
defer d.Unlock()
if d.retried {
data := "this is some data"
rm := &ResponseMetadata{LastIndex: 1}
return data, rm, nil
} else {
d.retried = true
return nil, nil, fmt.Errorf("failed to contact server (try again)")
}
}
func (d *TestRetry) CanShare() bool {
return true
}
func (d *TestRetry) HashCode() string {
return fmt.Sprintf("TestRetry|%s", d.Name)
}
func (d *TestRetry) Display() string { return "fakedep" }
func (d *TestRetry) Stop() {}


@@ -0,0 +1,197 @@
package dependency
import (
"fmt"
"log"
"strings"
"sync"
"time"
vaultapi "github.com/hashicorp/vault/api"
)
// Secret is a vault secret.
type Secret struct {
LeaseID string
LeaseDuration int
Renewable bool
// Data is the actual contents of the secret. The format of the data
// is arbitrary and up to the secret backend.
Data map[string]interface{}
}
// VaultSecret is the dependency to Vault for a secret
type VaultSecret struct {
sync.Mutex
Path string
data map[string]interface{}
secret *Secret
stopped bool
stopCh chan struct{}
}
// Fetch queries the Vault API
func (d *VaultSecret) Fetch(clients *ClientSet, opts *QueryOptions) (interface{}, *ResponseMetadata, error) {
d.Lock()
if d.stopped {
defer d.Unlock()
return nil, nil, ErrStopped
}
d.Unlock()
if opts == nil {
opts = &QueryOptions{}
}
log.Printf("[DEBUG] (%s) querying vault with %+v", d.Display(), opts)
// If this is not the first query and we have a lease duration, sleep until we
// try to renew.
if opts.WaitIndex != 0 && d.secret != nil && d.secret.LeaseDuration != 0 {
duration := time.Duration(d.secret.LeaseDuration/2.0) * time.Second
log.Printf("[DEBUG] (%s) pretending to long-poll for %q",
d.Display(), duration)
select {
case <-d.stopCh:
log.Printf("[DEBUG] (%s) received interrupt", d.Display())
return nil, nil, ErrStopped
case <-time.After(duration):
}
}
// Grab the vault client
vault, err := clients.Vault()
if err != nil {
return nil, nil, ErrWithExitf("vault secret: %s", err)
}
// Attempt to renew the secret. If we do not have a secret or if that secret
// is not renewable, we will attempt a (re-)read later.
if d.secret != nil && d.secret.LeaseID != "" && d.secret.Renewable {
renewal, err := vault.Sys().Renew(d.secret.LeaseID, 0)
if err == nil {
log.Printf("[DEBUG] (%s) successfully renewed", d.Display())
log.Printf("[DEBUG] (%s) %#v", d.Display(), renewal)
secret := &Secret{
LeaseID: renewal.LeaseID,
LeaseDuration: d.secret.LeaseDuration,
Renewable: renewal.Renewable,
Data: d.secret.Data,
}
d.Lock()
d.secret = secret
d.Unlock()
return respWithMetadata(secret)
}
// The renewal failed for some reason.
log.Printf("[WARN] (%s) failed to renew, re-obtaining: %s", d.Display(), err)
}
// If we got this far, we either didn't have a secret to renew, the secret was
// not renewable, or the renewal failed, so attempt a fresh read.
var vaultSecret *vaultapi.Secret
if len(d.data) == 0 {
vaultSecret, err = vault.Logical().Read(d.Path)
} else {
vaultSecret, err = vault.Logical().Write(d.Path, d.data)
}
if err != nil {
return nil, nil, ErrWithExitf("error obtaining from vault: %s", err)
}
// The secret could be nil (maybe it does not exist yet). This is not an error
// to Vault, but it is an error to Consul Template, so return an error
// instead.
if vaultSecret == nil {
return nil, nil, fmt.Errorf("no secret exists at path %q", d.Display())
}
// Create our cloned secret
secret := &Secret{
LeaseID: vaultSecret.LeaseID,
LeaseDuration: leaseDurationOrDefault(vaultSecret.LeaseDuration),
Renewable: vaultSecret.Renewable,
Data: vaultSecret.Data,
}
d.Lock()
d.secret = secret
d.Unlock()
log.Printf("[DEBUG] (%s) vault returned the secret", d.Display())
return respWithMetadata(secret)
}
// CanShare returns if this dependency is shareable.
func (d *VaultSecret) CanShare() bool {
return false
}
// HashCode returns the hash code for this dependency.
func (d *VaultSecret) HashCode() string {
return fmt.Sprintf("VaultSecret|%s", d.Path)
}
// Display returns a string that should be displayed to the user in output (for
// example).
func (d *VaultSecret) Display() string {
return fmt.Sprintf(`"secret(%s)"`, d.Path)
}
// Stop halts the given dependency's fetch.
func (d *VaultSecret) Stop() {
d.Lock()
defer d.Unlock()
if !d.stopped {
close(d.stopCh)
d.stopped = true
}
}
// ParseVaultSecret creates a new VaultSecret dependency.
func ParseVaultSecret(s ...string) (*VaultSecret, error) {
if len(s) == 0 {
return nil, fmt.Errorf("expected 1 or more arguments, got %d", len(s))
}
path, rest := s[0], s[1:]
if len(path) == 0 {
return nil, fmt.Errorf("vault path must be at least one character")
}
data := make(map[string]interface{})
for _, str := range rest {
parts := strings.SplitN(str, "=", 2)
if len(parts) != 2 {
return nil, fmt.Errorf("invalid value %q - must be key=value", str)
}
k, v := strings.TrimSpace(parts[0]), strings.TrimSpace(parts[1])
data[k] = v
}
vs := &VaultSecret{
Path: path,
data: data,
stopCh: make(chan struct{}),
}
return vs, nil
}
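The argument handling above takes the first string as the path and requires each remaining string to be `key=value`, splitting on the first `=` only so that values may themselves contain `=`. A self-contained sketch of that parsing:

```go
package main

import (
	"fmt"
	"strings"
)

// parseArgs sketches ParseVaultSecret's argument handling without the
// surrounding struct: path first, then key=value pairs.
func parseArgs(args ...string) (string, map[string]interface{}, error) {
	if len(args) == 0 {
		return "", nil, fmt.Errorf("expected 1 or more arguments, got %d", len(args))
	}
	path, rest := args[0], args[1:]
	data := make(map[string]interface{})
	for _, str := range rest {
		// SplitN with n=2 splits only on the first "=".
		parts := strings.SplitN(str, "=", 2)
		if len(parts) != 2 {
			return "", nil, fmt.Errorf("invalid value %q - must be key=value", str)
		}
		data[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
	}
	return path, data, nil
}

func main() {
	path, data, _ := parseArgs("secret/foo", "ttl=1h", "token=a=b")
	fmt.Println(path, data["ttl"], data["token"]) // secret/foo 1h a=b
}
```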
func leaseDurationOrDefault(d int) int {
if d == 0 {
return 5 * 60
}
return d
}


@@ -0,0 +1,134 @@
package dependency
import (
"fmt"
"log"
"sort"
"sync"
"time"
)
// VaultSecrets is the dependency to list secrets in Vault.
type VaultSecrets struct {
sync.Mutex
Path string
stopped bool
stopCh chan struct{}
}
// Fetch queries the Vault API
func (d *VaultSecrets) Fetch(clients *ClientSet, opts *QueryOptions) (interface{}, *ResponseMetadata, error) {
d.Lock()
if d.stopped {
defer d.Unlock()
return nil, nil, ErrStopped
}
d.Unlock()
if opts == nil {
opts = &QueryOptions{}
}
log.Printf("[DEBUG] (%s) querying vault with %+v", d.Display(), opts)
// If this is not the first query and we have a lease duration, sleep until we
// try to renew.
if opts.WaitIndex != 0 {
log.Printf("[DEBUG] (%s) pretending to long-poll", d.Display())
select {
case <-d.stopCh:
return nil, nil, ErrStopped
case <-time.After(sleepTime):
}
}
// Grab the vault client
vault, err := clients.Vault()
if err != nil {
return nil, nil, ErrWithExitf("vault secrets: %s", err)
}
// Get the list as a secret
vaultSecret, err := vault.Logical().List(d.Path)
if err != nil {
return nil, nil, ErrWithExitf("error listing secrets from vault: %s", err)
}
// If the secret or its data is nil, return an empty list of strings.
if vaultSecret == nil || vaultSecret.Data == nil {
return respWithMetadata(make([]string, 0))
}
// If there are no keys at that path, return the empty list.
keys, ok := vaultSecret.Data["keys"]
if !ok {
return respWithMetadata(make([]string, 0))
}
// Convert the interface into a list of interfaces.
list, ok := keys.([]interface{})
if !ok {
return nil, nil, ErrWithExitf("vault returned an unexpected payload for %q", d.Display())
}
// Pull each item out of the list and safely cast to a string.
result := make([]string, len(list))
for i, v := range list {
typed, ok := v.(string)
if !ok {
return nil, nil, ErrWithExitf("vault returned a non-string when listing secrets for %q", d.Display())
}
result[i] = typed
}
sort.Strings(result)
log.Printf("[DEBUG] (%s) vault listed %d secret(s)", d.Display(), len(result))
return respWithMetadata(result)
}
// CanShare returns if this dependency is shareable.
func (d *VaultSecrets) CanShare() bool {
return false
}
// HashCode returns the hash code for this dependency.
func (d *VaultSecrets) HashCode() string {
return fmt.Sprintf("VaultSecrets|%s", d.Path)
}
// Display returns a string that should be displayed to the user in output (for
// example).
func (d *VaultSecrets) Display() string {
return fmt.Sprintf(`"secrets(%s)"`, d.Path)
}
// Stop halts the dependency's fetch function.
func (d *VaultSecrets) Stop() {
d.Lock()
defer d.Unlock()
if !d.stopped {
close(d.stopCh)
d.stopped = true
}
}
// ParseVaultSecrets creates a new VaultSecrets dependency.
func ParseVaultSecrets(s string) (*VaultSecrets, error) {
// Ensure a trailing slash, always.
if len(s) == 0 {
s = "/"
}
if s[len(s)-1] != '/' {
s = fmt.Sprintf("%s/", s)
}
vs := &VaultSecrets{
Path: s,
stopCh: make(chan struct{}),
}
return vs, nil
}
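The normalization above guarantees a trailing slash on the list path, defaulting an empty input to `"/"`. A minimal sketch:

```go
package main

import "fmt"

// ensureTrailingSlash mirrors ParseVaultSecrets' path normalization.
func ensureTrailingSlash(s string) string {
	if len(s) == 0 {
		return "/"
	}
	if s[len(s)-1] != '/' {
		return s + "/"
	}
	return s
}

func main() {
	fmt.Println(ensureTrailingSlash("secret"))  // secret/
	fmt.Println(ensureTrailingSlash("secret/")) // secret/
	fmt.Println(ensureTrailingSlash(""))        // /
}
```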


@@ -0,0 +1,119 @@
package dependency
import (
"log"
"sync"
"time"
)
// VaultToken is the dependency to Vault for a secret
type VaultToken struct {
sync.Mutex
leaseID string
leaseDuration int
stopped bool
stopCh chan struct{}
}
// Fetch queries the Vault API
func (d *VaultToken) Fetch(clients *ClientSet, opts *QueryOptions) (interface{}, *ResponseMetadata, error) {
d.Lock()
if d.stopped {
defer d.Unlock()
return nil, nil, ErrStopped
}
d.Unlock()
if opts == nil {
opts = &QueryOptions{}
}
log.Printf("[DEBUG] (%s) renewing vault token", d.Display())
// If this is not the first query and we have a lease duration, sleep until we
// try to renew.
if opts.WaitIndex != 0 && d.leaseDuration != 0 {
duration := time.Duration(d.leaseDuration/2.0) * time.Second
if duration < 1*time.Second {
log.Printf("[DEBUG] (%s) increasing sleep to 1s (was %q)",
d.Display(), duration)
duration = 1 * time.Second
}
log.Printf("[DEBUG] (%s) sleeping for %q", d.Display(), duration)
select {
case <-d.stopCh:
return nil, nil, ErrStopped
case <-time.After(duration):
}
}
// Grab the vault client
vault, err := clients.Vault()
if err != nil {
return nil, nil, ErrWithExitf("vault_token: %s", err)
}
token, err := vault.Auth().Token().RenewSelf(0)
if err != nil {
return nil, nil, ErrWithExitf("error renewing vault token: %s", err)
}
// Create our cloned secret
secret := &Secret{
LeaseID: token.LeaseID,
LeaseDuration: token.Auth.LeaseDuration,
Renewable: token.Auth.Renewable,
Data: token.Data,
}
leaseDuration := token.Auth.LeaseDuration
if leaseDuration == 0 {
log.Printf("[WARN] (%s) lease duration is 0, setting to 5s", d.Display())
leaseDuration = 5
}
d.Lock()
d.leaseID = secret.LeaseID
d.leaseDuration = leaseDuration
d.Unlock()
log.Printf("[DEBUG] (%s) successfully renewed token", d.Display())
return respWithMetadata(secret)
}
// CanShare returns if this dependency is shareable.
func (d *VaultToken) CanShare() bool {
return false
}
// HashCode returns the hash code for this dependency.
func (d *VaultToken) HashCode() string {
return "VaultToken"
}
// Display returns a string that should be displayed to the user in output (for
// example).
func (d *VaultToken) Display() string {
return "vault_token"
}
// Stop halts the dependency's fetch function.
func (d *VaultToken) Stop() {
d.Lock()
defer d.Unlock()
if !d.stopped {
close(d.stopCh)
d.stopped = true
}
}
// ParseVaultToken creates a new VaultToken dependency.
func ParseVaultToken() (*VaultToken, error) {
return &VaultToken{stopCh: make(chan struct{})}, nil
}


@@ -0,0 +1,453 @@
package manager
import (
"bytes"
"compress/lzw"
"crypto/md5"
"encoding/gob"
"fmt"
"log"
"path"
"sync"
"time"
"github.com/hashicorp/consul-template/config"
dep "github.com/hashicorp/consul-template/dependency"
"github.com/hashicorp/consul-template/template"
consulapi "github.com/hashicorp/consul/api"
)
const (
// sessionCreateRetry is the amount of time we wait
// to recreate a session when lost.
sessionCreateRetry = 15 * time.Second
// lockRetry is the interval on which we try to re-acquire locks
lockRetry = 10 * time.Second
// listRetry is the interval on which we retry listing a data path
listRetry = 10 * time.Second
// templateDataFlag is added as a flag to the shared data values
// so that we can use it as a sanity check
templateDataFlag = 0x22b9a127a2c03520
)
// templateData is GOB-encoded to share the dependency values
type templateData struct {
Data map[string]interface{}
}
// DedupManager is used to de-duplicate which instance of Consul-Template
// is handling each template. For each template, a lock path is determined
// using the MD5 of the template. This path is used to elect a "leader"
// instance.
//
// The leader instance operates as usual, but any time a template is
// rendered, any of the data required for rendering is stored in the
// Consul KV store under the lock path.
//
// The follower instances depend on the leader to do the primary watching
// and rendering, and instead only watch the aggregated data in the KV.
// Followers wait for updates and re-render the template.
//
// If a template depends on 50 views, and is running on 50 machines, that
// would normally require 2500 blocking queries. Using deduplication, one
// instance has 50 view queries, plus 50 additional queries on the lock
// path for a total of 100.
//
type DedupManager struct {
// config is the consul-template configuration
config *config.Config
// clients is used to access the underlying clients
clients *dep.ClientSet
// brain is where we inject updates
brain *template.Brain
// templates is the set of templates we are trying to dedup
templates []*template.Template
// leader tracks if we are currently the leader
leader map[*template.Template]<-chan struct{}
leaderLock sync.RWMutex
// lastWrite tracks the hash of the data paths
lastWrite map[*template.Template][]byte
lastWriteLock sync.RWMutex
// updateCh is used to indicate an update to watched data
updateCh chan struct{}
// wg is used to wait for a clean shutdown
wg sync.WaitGroup
stop bool
stopCh chan struct{}
stopLock sync.Mutex
}
// NewDedupManager creates a new Dedup manager
func NewDedupManager(config *config.Config, clients *dep.ClientSet, brain *template.Brain, templates []*template.Template) (*DedupManager, error) {
d := &DedupManager{
config: config,
clients: clients,
brain: brain,
templates: templates,
leader: make(map[*template.Template]<-chan struct{}),
lastWrite: make(map[*template.Template][]byte),
updateCh: make(chan struct{}, 1),
stopCh: make(chan struct{}),
}
return d, nil
}
// Start is used to start the de-duplication manager
func (d *DedupManager) Start() error {
log.Printf("[INFO] (dedup) starting de-duplication manager")
client, err := d.clients.Consul()
if err != nil {
return err
}
go d.createSession(client)
// Start to watch each template
for _, t := range d.templates {
go d.watchTemplate(client, t)
}
return nil
}
// Stop is used to stop the de-duplication manager
func (d *DedupManager) Stop() error {
d.stopLock.Lock()
defer d.stopLock.Unlock()
if d.stop {
return nil
}
log.Printf("[INFO] (dedup) stopping de-duplication manager")
d.stop = true
close(d.stopCh)
d.wg.Wait()
return nil
}
// createSession is used to create and maintain a session to Consul
func (d *DedupManager) createSession(client *consulapi.Client) {
START:
log.Printf("[INFO] (dedup) attempting to create session")
session := client.Session()
sessionCh := make(chan struct{})
ttl := fmt.Sprintf("%ds", d.config.Deduplicate.TTL/time.Second)
se := &consulapi.SessionEntry{
Name: "Consul-Template de-duplication",
Behavior: "delete",
TTL: ttl,
}
id, _, err := session.Create(se, nil)
if err != nil {
log.Printf("[ERR] (dedup) failed to create session: %v", err)
goto WAIT
}
log.Printf("[INFO] (dedup) created session %s", id)
// Attempt to lock each template
for _, t := range d.templates {
d.wg.Add(1)
go d.attemptLock(client, id, sessionCh, t)
}
// Renew our session periodically
if err := session.RenewPeriodic("15s", id, nil, d.stopCh); err != nil {
log.Printf("[ERR] (dedup) failed to renew session: %v", err)
d.wg.Wait()
}
close(sessionCh)
WAIT:
select {
case <-time.After(sessionCreateRetry):
goto START
case <-d.stopCh:
return
}
}
// IsLeader checks if we are currently the leader instance
func (d *DedupManager) IsLeader(tmpl *template.Template) bool {
d.leaderLock.RLock()
defer d.leaderLock.RUnlock()
lockCh, ok := d.leader[tmpl]
if !ok {
return false
}
select {
case <-lockCh:
return false
default:
return true
}
}
// UpdateDeps is used to update the values of the dependencies for a template
func (d *DedupManager) UpdateDeps(t *template.Template, deps []dep.Dependency) error {
// Calculate the path to write updates to
dataPath := path.Join(d.config.Deduplicate.Prefix, t.HexMD5, "data")
// Package up the dependency data
td := templateData{
Data: make(map[string]interface{}),
}
for _, dp := range deps {
// Skip any dependencies that can't be shared
if !dp.CanShare() {
continue
}
// Pull the current value from the brain
val, ok := d.brain.Recall(dp)
if ok {
td.Data[dp.HashCode()] = val
}
}
// Encode via GOB and LZW compress
var buf bytes.Buffer
compress := lzw.NewWriter(&buf, lzw.LSB, 8)
enc := gob.NewEncoder(compress)
if err := enc.Encode(&td); err != nil {
return fmt.Errorf("encode failed: %v", err)
}
compress.Close()
// Compute MD5 of the buffer
hash := md5.Sum(buf.Bytes())
d.lastWriteLock.RLock()
existing, ok := d.lastWrite[t]
d.lastWriteLock.RUnlock()
if ok && bytes.Equal(existing, hash[:]) {
log.Printf("[INFO] (dedup) de-duplicate data '%s' already current",
dataPath)
return nil
}
// Write the KV update
kvPair := consulapi.KVPair{
Key: dataPath,
Value: buf.Bytes(),
Flags: templateDataFlag,
}
client, err := d.clients.Consul()
if err != nil {
return fmt.Errorf("failed to get consul client: %v", err)
}
if _, err := client.KV().Put(&kvPair, nil); err != nil {
return fmt.Errorf("failed to write '%s': %v", dataPath, err)
}
log.Printf("[INFO] (dedup) updated de-duplicate data '%s'", dataPath)
d.lastWriteLock.Lock()
d.lastWrite[t] = hash[:]
d.lastWriteLock.Unlock()
return nil
}
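The payload pipeline above (gob-encode the dependency map, LZW-compress it, then hash the bytes) round-trips with the matching reader used later by `parseData`. A self-contained sketch of the encode and decode halves:

```go
package main

import (
	"bytes"
	"compress/lzw"
	"encoding/gob"
	"fmt"
)

// templateData matches the shape encoded by UpdateDeps.
type templateData struct {
	Data map[string]interface{}
}

// encode gob-encodes td through an LZW compressor, as above.
func encode(td *templateData) ([]byte, error) {
	var buf bytes.Buffer
	compress := lzw.NewWriter(&buf, lzw.LSB, 8)
	if err := gob.NewEncoder(compress).Encode(td); err != nil {
		return nil, err
	}
	compress.Close() // flush the compressor before reading the buffer
	return buf.Bytes(), nil
}

// decode reverses encode: decompress with LZW, then gob-decode.
func decode(raw []byte) (*templateData, error) {
	decompress := lzw.NewReader(bytes.NewReader(raw), lzw.LSB, 8)
	defer decompress.Close()
	var td templateData
	if err := gob.NewDecoder(decompress).Decode(&td); err != nil {
		return nil, err
	}
	return &td, nil
}

func main() {
	td := &templateData{Data: map[string]interface{}{"key": "value"}}
	raw, _ := encode(td)
	out, _ := decode(raw)
	fmt.Println(out.Data["key"]) // value
}
```

Note that gob encodes `interface{}` values by concrete type; basic types such as strings are pre-registered, but custom types stored in the map would need `gob.Register`, which is why the real package registers `[]*KeyPair{}` in an `init` function.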
// UpdateCh returns a channel to watch for dependency updates
func (d *DedupManager) UpdateCh() <-chan struct{} {
return d.updateCh
}
// setLeader sets if we are currently the leader instance
func (d *DedupManager) setLeader(tmpl *template.Template, lockCh <-chan struct{}) {
// Update the lock state
d.leaderLock.Lock()
if lockCh != nil {
d.leader[tmpl] = lockCh
} else {
delete(d.leader, tmpl)
}
d.leaderLock.Unlock()
// Clear the lastWrite hash if we've lost leadership
if lockCh == nil {
d.lastWriteLock.Lock()
delete(d.lastWrite, tmpl)
d.lastWriteLock.Unlock()
}
// Do an async notify of an update
select {
case d.updateCh <- struct{}{}:
default:
}
}
func (d *DedupManager) watchTemplate(client *consulapi.Client, t *template.Template) {
log.Printf("[INFO] (dedup) starting watch for template hash %s", t.HexMD5)
path := path.Join(d.config.Deduplicate.Prefix, t.HexMD5, "data")
// Determine if stale queries are allowed
var allowStale bool
if d.config.MaxStale != 0 {
allowStale = true
}
// Setup our query options
opts := &consulapi.QueryOptions{
AllowStale: allowStale,
WaitTime: 60 * time.Second,
}
START:
// Stop listening if we're stopped
select {
case <-d.stopCh:
return
default:
}
// If we are currently the leader, wait for leadership to be lost
d.leaderLock.RLock()
lockCh, ok := d.leader[t]
d.leaderLock.RUnlock()
if ok {
select {
case <-lockCh:
goto START
case <-d.stopCh:
return
}
}
// Block for updates on the data key
log.Printf("[INFO] (dedup) listing data for template hash %s", t.HexMD5)
pair, meta, err := client.KV().Get(path, opts)
if err != nil {
log.Printf("[ERR] (dedup) failed to get '%s': %v", path, err)
select {
case <-time.After(listRetry):
goto START
case <-d.stopCh:
return
}
}
opts.WaitIndex = meta.LastIndex
// If we've exceeded the maximum staleness, retry without stale
if allowStale && meta.LastContact > d.config.MaxStale {
allowStale = false
log.Printf("[DEBUG] (dedup) %s stale data (last contact exceeded max_stale)", path)
goto START
}
// Re-enable stale queries if allowed
if d.config.MaxStale != 0 {
allowStale = true
}
// Stop listening if we're stopped
select {
case <-d.stopCh:
return
default:
}
// If we are currently the leader, wait for leadership to be lost
d.leaderLock.RLock()
lockCh, ok = d.leader[t]
d.leaderLock.RUnlock()
if ok {
select {
case <-lockCh:
goto START
case <-d.stopCh:
return
}
}
// Parse the data file
if pair != nil && pair.Flags == templateDataFlag {
d.parseData(pair.Key, pair.Value)
}
goto START
}
// parseData is used to update brain from a KV data pair
func (d *DedupManager) parseData(path string, raw []byte) {
// Setup the decompression and decoders
r := bytes.NewReader(raw)
decompress := lzw.NewReader(r, lzw.LSB, 8)
defer decompress.Close()
dec := gob.NewDecoder(decompress)
// Decode the data
var td templateData
if err := dec.Decode(&td); err != nil {
log.Printf("[ERR] (dedup) failed to decode '%s': %v",
path, err)
return
}
log.Printf("[INFO] (dedup) loading %d dependencies from '%s'",
len(td.Data), path)
// Update the data in the brain
for hashCode, value := range td.Data {
d.brain.ForceSet(hashCode, value)
}
// Trigger the updateCh
select {
case d.updateCh <- struct{}{}:
default:
}
}
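The decode path above (an LZW stream wrapping a gob payload) pairs with a matching encode step elsewhere in the manager. A minimal, self-contained round-trip sketch using only the standard library, with a plain `map[string]string` standing in for `templateData`:

```go
package main

import (
	"bytes"
	"compress/lzw"
	"encoding/gob"
	"fmt"
)

// encode gob-encodes the map and compresses it with LSB LZW using 8-bit
// literals, mirroring the format written under the dedup data key.
func encode(m map[string]string) []byte {
	var buf bytes.Buffer
	w := lzw.NewWriter(&buf, lzw.LSB, 8)
	if err := gob.NewEncoder(w).Encode(m); err != nil {
		panic(err)
	}
	w.Close() // flush the LZW stream before the bytes are read
	return buf.Bytes()
}

// decode reverses encode: decompress first, then gob-decode.
func decode(b []byte) map[string]string {
	r := lzw.NewReader(bytes.NewReader(b), lzw.LSB, 8)
	defer r.Close()
	var m map[string]string
	if err := gob.NewDecoder(r).Decode(&m); err != nil {
		panic(err)
	}
	return m
}

func main() {
	out := decode(encode(map[string]string{"dep": "value"}))
	fmt.Println(out["dep"]) // value
}
```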
func (d *DedupManager) attemptLock(client *consulapi.Client, session string, sessionCh chan struct{}, t *template.Template) {
defer d.wg.Done()
START:
log.Printf("[INFO] (dedup) attempting lock for template hash %s", t.HexMD5)
basePath := path.Join(d.config.Deduplicate.Prefix, t.HexMD5)
lopts := &consulapi.LockOptions{
Key: path.Join(basePath, "lock"),
Session: session,
MonitorRetries: 3,
MonitorRetryTime: 3 * time.Second,
}
lock, err := client.LockOpts(lopts)
if err != nil {
log.Printf("[ERR] (dedup) failed to create lock '%s': %v",
lopts.Key, err)
return
}
var retryCh <-chan time.Time
leaderCh, err := lock.Lock(sessionCh)
if err != nil {
log.Printf("[ERR] (dedup) failed to acquire lock '%s': %v",
lopts.Key, err)
retryCh = time.After(lockRetry)
} else {
log.Printf("[INFO] (dedup) acquired lock '%s'", lopts.Key)
d.setLeader(t, leaderCh)
}
select {
case <-retryCh:
retryCh = nil
goto START
case <-leaderCh:
log.Printf("[WARN] (dedup) lost lock ownership '%s'", lopts.Key)
d.setLeader(t, nil)
goto START
case <-sessionCh:
log.Printf("[INFO] (dedup) releasing lock '%s'", lopts.Key)
d.setLeader(t, nil)
lock.Unlock()
case <-d.stopCh:
log.Printf("[INFO] (dedup) releasing lock '%s'", lopts.Key)
lock.Unlock()
}
}


@@ -0,0 +1,31 @@
package manager
import "fmt"
// ErrExitable is an interface that defines an integer ExitStatus() function.
type ErrExitable interface {
ExitStatus() int
}
var _ error = new(ErrChildDied)
var _ ErrExitable = new(ErrChildDied)
// ErrChildDied is the error returned when the child process prematurely dies.
type ErrChildDied struct {
code int
}
// NewErrChildDied creates a new error with the given exit code.
func NewErrChildDied(c int) *ErrChildDied {
return &ErrChildDied{code: c}
}
// Error implements the error interface.
func (e *ErrChildDied) Error() string {
return fmt.Sprintf("child process died with exit code %d", e.code)
}
// ExitStatus implements the ErrExitable interface.
func (e *ErrChildDied) ExitStatus() int {
return e.code
}

File diff suppressed because it is too large


@@ -0,0 +1,32 @@
package signals
import (
"reflect"
"github.com/mitchellh/mapstructure"
)
// StringToSignalFunc parses a string as a signal based on the signal lookup
// table. If the user supplied an empty string or nil, a special "nil signal"
// is returned. Clients should check for this value and set the response back
// to nil after mapstructure finishes parsing.
func StringToSignalFunc() mapstructure.DecodeHookFunc {
return func(
f reflect.Type,
t reflect.Type,
data interface{}) (interface{}, error) {
if f.Kind() != reflect.String {
return data, nil
}
if t.String() != "os.Signal" {
return data, nil
}
if data == nil || data.(string) == "" {
return SIGNIL, nil
}
return Parse(data.(string))
}
}


@@ -0,0 +1,7 @@
package signals
// NilSignal is a special signal that is blank or "nil"
type NilSignal int
func (s *NilSignal) String() string { return "SIGNIL" }
func (s *NilSignal) Signal() {}


@@ -0,0 +1,32 @@
package signals
import (
"fmt"
"os"
"sort"
"strings"
)
var SIGNIL os.Signal = new(NilSignal)
var ValidSignals []string
func init() {
valid := make([]string, 0, len(SignalLookup))
for k := range SignalLookup {
valid = append(valid, k)
}
sort.Strings(valid)
ValidSignals = valid
}
// Parse parses the given string as a signal. If the signal is not found,
// an error is returned.
func Parse(s string) (os.Signal, error) {
sig, ok := SignalLookup[strings.ToUpper(s)]
if !ok {
return nil, fmt.Errorf("invalid signal %q - valid signals are %q",
s, ValidSignals)
}
return sig, nil
}
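The lookup-plus-error pattern in Parse can be sketched in isolation. A minimal version, with a hypothetical three-entry table standing in for SignalLookup, showing the case-insensitive lookup and an error that lists the valid names:

```go
package main

import (
	"fmt"
	"os"
	"sort"
	"strings"
	"syscall"
)

// signalLookup is a hypothetical, trimmed-down stand-in for SignalLookup.
var signalLookup = map[string]os.Signal{
	"SIGHUP":  syscall.SIGHUP,
	"SIGINT":  syscall.SIGINT,
	"SIGTERM": syscall.SIGTERM,
}

// parse resolves a name case-insensitively, returning an error that lists
// the valid names when the lookup fails.
func parse(s string) (os.Signal, error) {
	sig, ok := signalLookup[strings.ToUpper(s)]
	if !ok {
		valid := make([]string, 0, len(signalLookup))
		for k := range signalLookup {
			valid = append(valid, k)
		}
		sort.Strings(valid)
		return nil, fmt.Errorf("invalid signal %q - valid signals are %q", s, valid)
	}
	return sig, nil
}

func main() {
	sig, err := parse("sigterm")
	fmt.Println(sig, err)
}
```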


@@ -0,0 +1,39 @@
// +build linux darwin freebsd openbsd solaris netbsd
package signals
import (
"os"
"syscall"
)
var SignalLookup = map[string]os.Signal{
"SIGABRT": syscall.SIGABRT,
"SIGALRM": syscall.SIGALRM,
"SIGBUS": syscall.SIGBUS,
"SIGCHLD": syscall.SIGCHLD,
"SIGCONT": syscall.SIGCONT,
"SIGFPE": syscall.SIGFPE,
"SIGHUP": syscall.SIGHUP,
"SIGILL": syscall.SIGILL,
"SIGINT": syscall.SIGINT,
"SIGIO": syscall.SIGIO,
"SIGIOT": syscall.SIGIOT,
"SIGKILL": syscall.SIGKILL,
"SIGPIPE": syscall.SIGPIPE,
"SIGPROF": syscall.SIGPROF,
"SIGQUIT": syscall.SIGQUIT,
"SIGSEGV": syscall.SIGSEGV,
"SIGSTOP": syscall.SIGSTOP,
"SIGSYS": syscall.SIGSYS,
"SIGTERM": syscall.SIGTERM,
"SIGTRAP": syscall.SIGTRAP,
"SIGTSTP": syscall.SIGTSTP,
"SIGTTIN": syscall.SIGTTIN,
"SIGTTOU": syscall.SIGTTOU,
"SIGURG": syscall.SIGURG,
"SIGUSR1": syscall.SIGUSR1,
"SIGUSR2": syscall.SIGUSR2,
"SIGXCPU": syscall.SIGXCPU,
"SIGXFSZ": syscall.SIGXFSZ,
}


@@ -0,0 +1,24 @@
// +build windows
package signals
import (
"os"
"syscall"
)
var SignalLookup = map[string]os.Signal{
"SIGABRT": syscall.SIGABRT,
"SIGALRM": syscall.SIGALRM,
"SIGBUS": syscall.SIGBUS,
"SIGFPE": syscall.SIGFPE,
"SIGHUP": syscall.SIGHUP,
"SIGILL": syscall.SIGILL,
"SIGINT": syscall.SIGINT,
"SIGKILL": syscall.SIGKILL,
"SIGPIPE": syscall.SIGPIPE,
"SIGQUIT": syscall.SIGQUIT,
"SIGSEGV": syscall.SIGSEGV,
"SIGTERM": syscall.SIGTERM,
"SIGTRAP": syscall.SIGTRAP,
}


@@ -0,0 +1,74 @@
package template
import (
"sync"
dep "github.com/hashicorp/consul-template/dependency"
)
// Brain is what Template uses to determine the values that are
// available for template parsing.
type Brain struct {
sync.RWMutex
// data is the map of individual dependencies (by HashCode()) and the most
// recent data for that dependency.
data map[string]interface{}
// receivedData is an internal tracker of which dependencies have stored data
// in the brain.
receivedData map[string]struct{}
}
// NewBrain creates a new Brain with empty values for each
// of the key structs.
func NewBrain() *Brain {
return &Brain{
data: make(map[string]interface{}),
receivedData: make(map[string]struct{}),
}
}
// Remember accepts a dependency and the data to store associated with that
// dep. This function converts the given data to a proper type and stores
// it internally.
func (b *Brain) Remember(d dep.Dependency, data interface{}) {
b.Lock()
defer b.Unlock()
b.data[d.HashCode()] = data
b.receivedData[d.HashCode()] = struct{}{}
}
// Recall gets the current value for the given dependency in the Brain.
func (b *Brain) Recall(d dep.Dependency) (interface{}, bool) {
b.RLock()
defer b.RUnlock()
// If we have not received data for this dependency, return now.
if _, ok := b.receivedData[d.HashCode()]; !ok {
return nil, false
}
return b.data[d.HashCode()], true
}
// ForceSet is used to force set the value of a dependency
// for a given hash code
func (b *Brain) ForceSet(hashCode string, data interface{}) {
b.Lock()
defer b.Unlock()
b.data[hashCode] = data
b.receivedData[hashCode] = struct{}{}
}
// Forget accepts a dependency and removes all associated data with this
// dependency. It also resets the "receivedData" internal map.
func (b *Brain) Forget(d dep.Dependency) {
b.Lock()
defer b.Unlock()
delete(b.data, d.HashCode())
delete(b.receivedData, d.HashCode())
}
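The Remember/Recall pattern above boils down to a mutex-guarded data map plus a second map tracking receipt, so a stored nil value is distinguishable from data that never arrived. A minimal sketch, with plain string keys standing in for dep.Dependency hash codes:

```go
package main

import (
	"fmt"
	"sync"
)

// brain is a minimal stand-in for the Brain type: one map for data, one
// recording which keys have ever held data.
type brain struct {
	sync.RWMutex
	data     map[string]interface{}
	received map[string]struct{}
}

func newBrain() *brain {
	return &brain{
		data:     make(map[string]interface{}),
		received: make(map[string]struct{}),
	}
}

// remember stores data and marks the key as received.
func (b *brain) remember(key string, v interface{}) {
	b.Lock()
	defer b.Unlock()
	b.data[key] = v
	b.received[key] = struct{}{}
}

// recall reports false for keys that never received data, even though a
// lookup in the data map alone could not tell nil apart from "missing".
func (b *brain) recall(key string) (interface{}, bool) {
	b.RLock()
	defer b.RUnlock()
	if _, ok := b.received[key]; !ok {
		return nil, false
	}
	return b.data[key], true
}

func main() {
	b := newBrain()
	b.remember("svc", nil) // nil data still counts as received
	v, ok := b.recall("svc")
	fmt.Println(v, ok) // <nil> true
}
```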


@@ -0,0 +1,173 @@
package template
import (
"bytes"
"crypto/md5"
"encoding/hex"
"errors"
"fmt"
"io/ioutil"
"path/filepath"
"text/template"
dep "github.com/hashicorp/consul-template/dependency"
)
type Template struct {
// Path is the path to this template on disk.
Path string
// LeftDelim and RightDelim are the left and right delimiters to use.
LeftDelim, RightDelim string
// Contents is the string contents for the template. It is either given
// during template creation or read from disk when initialized.
Contents string
// HexMD5 stores the hex version of the MD5
HexMD5 string
}
// NewTemplate creates and parses a new Consul Template template at the given
// path. If the template does not exist, an error is returned. During
// initialization, the template is read and is parsed for dependencies. Any
// errors that occur are returned.
func NewTemplate(path, contents, leftDelim, rightDelim string) (*Template, error) {
// Validate that we are either given the path or the explicit contents
pathEmpty, contentsEmpty := path == "", contents == ""
if !pathEmpty && !contentsEmpty {
return nil, errors.New("Either specify template path or content, not both")
} else if pathEmpty && contentsEmpty {
return nil, errors.New("Must specify template path or content")
}
template := &Template{
Path: path,
Contents: contents,
LeftDelim: leftDelim,
RightDelim: rightDelim,
}
if err := template.init(); err != nil {
return nil, err
}
return template, nil
}
// ID returns an identifier for the template
func (t *Template) ID() string {
return t.HexMD5
}
// Execute evaluates this template in the context of the given brain.
//
// The first return value is the list of used dependencies.
// The second return value is the list of missing dependencies.
// The third return value is the rendered text.
// The fourth return value is any error that occurs.
func (t *Template) Execute(brain *Brain) ([]dep.Dependency, []dep.Dependency, []byte, error) {
usedMap := make(map[string]dep.Dependency)
missingMap := make(map[string]dep.Dependency)
name := filepath.Base(t.Path)
funcs := funcMap(brain, usedMap, missingMap)
tmpl, err := template.New(name).
Delims(t.LeftDelim, t.RightDelim).
Funcs(funcs).
Parse(t.Contents)
if err != nil {
return nil, nil, nil, fmt.Errorf("template: %s", err)
}
// TODO: accept an io.Writer instead
buff := new(bytes.Buffer)
if err := tmpl.Execute(buff, nil); err != nil {
return nil, nil, nil, fmt.Errorf("template: %s", err)
}
// Update this list of this template's dependencies
var used []dep.Dependency
for _, dep := range usedMap {
used = append(used, dep)
}
// Compile the list of missing dependencies
var missing []dep.Dependency
for _, dep := range missingMap {
missing = append(missing, dep)
}
return used, missing, buff.Bytes(), nil
}
// init reads the template file and initializes required variables.
func (t *Template) init() error {
// Render the template
if t.Path != "" {
contents, err := ioutil.ReadFile(t.Path)
if err != nil {
return err
}
t.Contents = string(contents)
}
// Compute the MD5, encode as hex
hash := md5.Sum([]byte(t.Contents))
t.HexMD5 = hex.EncodeToString(hash[:])
return nil
}
// funcMap is the map of template functions to their respective functions.
func funcMap(brain *Brain, used, missing map[string]dep.Dependency) template.FuncMap {
return template.FuncMap{
// API functions
"datacenters": datacentersFunc(brain, used, missing),
"file": fileFunc(brain, used, missing),
"key": keyFunc(brain, used, missing),
"key_or_default": keyWithDefaultFunc(brain, used, missing),
"ls": lsFunc(brain, used, missing),
"node": nodeFunc(brain, used, missing),
"nodes": nodesFunc(brain, used, missing),
"secret": secretFunc(brain, used, missing),
"secrets": secretsFunc(brain, used, missing),
"service": serviceFunc(brain, used, missing),
"services": servicesFunc(brain, used, missing),
"tree": treeFunc(brain, used, missing),
"vault": vaultFunc(brain, used, missing),
// Helper functions
"byKey": byKey,
"byTag": byTag,
"contains": contains,
"env": env,
"explode": explode,
"in": in,
"loop": loop,
"join": join,
"trimSpace": trimSpace,
"parseBool": parseBool,
"parseFloat": parseFloat,
"parseInt": parseInt,
"parseJSON": parseJSON,
"parseUint": parseUint,
"plugin": plugin,
"regexReplaceAll": regexReplaceAll,
"regexMatch": regexMatch,
"replaceAll": replaceAll,
"timestamp": timestamp,
"toLower": toLower,
"toJSON": toJSON,
"toJSONPretty": toJSONPretty,
"toTitle": toTitle,
"toUpper": toUpper,
"toYAML": toYAML,
"split": split,
// Math functions
"add": add,
"subtract": subtract,
"multiply": multiply,
"divide": divide,
}
}


@@ -0,0 +1,989 @@
package template
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"log"
"os"
"os/exec"
"reflect"
"regexp"
"strconv"
"strings"
"time"
"github.com/burntsushi/toml"
dep "github.com/hashicorp/consul-template/dependency"
yaml "gopkg.in/yaml.v2"
)
// now is a function that returns the current time in UTC. This is here
// primarily for the tests to override times.
var now = func() time.Time { return time.Now().UTC() }
// datacentersFunc returns or accumulates datacenter dependencies.
func datacentersFunc(brain *Brain,
used, missing map[string]dep.Dependency) func(...string) ([]string, error) {
return func(s ...string) ([]string, error) {
result := []string{}
d, err := dep.ParseDatacenters(s...)
if err != nil {
return result, err
}
addDependency(used, d)
if value, ok := brain.Recall(d); ok {
return value.([]string), nil
}
addDependency(missing, d)
return result, nil
}
}
// fileFunc returns or accumulates file dependencies.
func fileFunc(brain *Brain,
used, missing map[string]dep.Dependency) func(string) (string, error) {
return func(s string) (string, error) {
if len(s) == 0 {
return "", nil
}
d, err := dep.ParseFile(s)
if err != nil {
return "", err
}
addDependency(used, d)
if value, ok := brain.Recall(d); ok {
if value == nil {
return "", nil
}
return value.(string), nil
}
addDependency(missing, d)
return "", nil
}
}
// keyFunc returns or accumulates key dependencies.
func keyFunc(brain *Brain,
used, missing map[string]dep.Dependency) func(string) (string, error) {
return func(s string) (string, error) {
if len(s) == 0 {
return "", nil
}
d, err := dep.ParseStoreKey(s)
if err != nil {
return "", err
}
addDependency(used, d)
if value, ok := brain.Recall(d); ok {
if value == nil {
return "", nil
}
return value.(string), nil
}
addDependency(missing, d)
return "", nil
}
}
// keyWithDefaultFunc returns or accumulates key dependencies that have a
// default value.
func keyWithDefaultFunc(brain *Brain,
used, missing map[string]dep.Dependency) func(string, string) (string, error) {
return func(s, def string) (string, error) {
if len(s) == 0 {
return def, nil
}
d, err := dep.ParseStoreKey(s)
if err != nil {
return "", err
}
d.SetDefault(def)
addDependency(used, d)
if value, ok := brain.Recall(d); ok {
if value == nil {
return def, nil
}
return value.(string), nil
}
addDependency(missing, d)
return def, nil
}
}
// lsFunc returns or accumulates keyPrefix dependencies.
func lsFunc(brain *Brain,
used, missing map[string]dep.Dependency) func(string) ([]*dep.KeyPair, error) {
return func(s string) ([]*dep.KeyPair, error) {
result := []*dep.KeyPair{}
if len(s) == 0 {
return result, nil
}
d, err := dep.ParseStoreKeyPrefix(s)
if err != nil {
return result, err
}
addDependency(used, d)
// Only return non-empty top-level keys
if value, ok := brain.Recall(d); ok {
for _, pair := range value.([]*dep.KeyPair) {
if pair.Key != "" && !strings.Contains(pair.Key, "/") {
result = append(result, pair)
}
}
return result, nil
}
addDependency(missing, d)
return result, nil
}
}
// nodeFunc returns or accumulates catalog node dependency.
func nodeFunc(brain *Brain,
used, missing map[string]dep.Dependency) func(...string) (*dep.NodeDetail, error) {
return func(s ...string) (*dep.NodeDetail, error) {
d, err := dep.ParseCatalogNode(s...)
if err != nil {
return nil, err
}
addDependency(used, d)
if value, ok := brain.Recall(d); ok {
return value.(*dep.NodeDetail), nil
}
addDependency(missing, d)
return nil, nil
}
}
// nodesFunc returns or accumulates catalog node dependencies.
func nodesFunc(brain *Brain,
used, missing map[string]dep.Dependency) func(...string) ([]*dep.Node, error) {
return func(s ...string) ([]*dep.Node, error) {
result := []*dep.Node{}
d, err := dep.ParseCatalogNodes(s...)
if err != nil {
return nil, err
}
addDependency(used, d)
if value, ok := brain.Recall(d); ok {
return value.([]*dep.Node), nil
}
addDependency(missing, d)
return result, nil
}
}
// secretFunc returns or accumulates secret dependencies from Vault.
func secretFunc(brain *Brain,
used, missing map[string]dep.Dependency) func(...string) (*dep.Secret, error) {
return func(s ...string) (*dep.Secret, error) {
result := &dep.Secret{}
if len(s) == 0 {
return result, nil
}
d, err := dep.ParseVaultSecret(s...)
if err != nil {
return result, err
}
addDependency(used, d)
if value, ok := brain.Recall(d); ok {
result = value.(*dep.Secret)
return result, nil
}
addDependency(missing, d)
return result, nil
}
}
// secretsFunc returns or accumulates a list of secret dependencies from Vault.
func secretsFunc(brain *Brain,
used, missing map[string]dep.Dependency) func(string) ([]string, error) {
return func(s string) ([]string, error) {
result := []string{}
if len(s) == 0 {
return result, nil
}
d, err := dep.ParseVaultSecrets(s)
if err != nil {
return result, err
}
addDependency(used, d)
if value, ok := brain.Recall(d); ok {
result = value.([]string)
return result, nil
}
addDependency(missing, d)
return result, nil
}
}
// serviceFunc returns or accumulates health service dependencies.
func serviceFunc(brain *Brain,
used, missing map[string]dep.Dependency) func(...string) ([]*dep.HealthService, error) {
return func(s ...string) ([]*dep.HealthService, error) {
result := []*dep.HealthService{}
if len(s) == 0 || s[0] == "" {
return result, nil
}
d, err := dep.ParseHealthServices(s...)
if err != nil {
return nil, err
}
addDependency(used, d)
if value, ok := brain.Recall(d); ok {
return value.([]*dep.HealthService), nil
}
addDependency(missing, d)
return result, nil
}
}
// servicesFunc returns or accumulates catalog services dependencies.
func servicesFunc(brain *Brain,
used, missing map[string]dep.Dependency) func(...string) ([]*dep.CatalogService, error) {
return func(s ...string) ([]*dep.CatalogService, error) {
result := []*dep.CatalogService{}
d, err := dep.ParseCatalogServices(s...)
if err != nil {
return nil, err
}
addDependency(used, d)
if value, ok := brain.Recall(d); ok {
return value.([]*dep.CatalogService), nil
}
addDependency(missing, d)
return result, nil
}
}
// treeFunc returns or accumulates keyPrefix dependencies.
func treeFunc(brain *Brain,
used, missing map[string]dep.Dependency) func(string) ([]*dep.KeyPair, error) {
return func(s string) ([]*dep.KeyPair, error) {
result := []*dep.KeyPair{}
if len(s) == 0 {
return result, nil
}
d, err := dep.ParseStoreKeyPrefix(s)
if err != nil {
return result, err
}
addDependency(used, d)
// Skip any directory entries (keys with a trailing slash)
if value, ok := brain.Recall(d); ok {
for _, pair := range value.([]*dep.KeyPair) {
parts := strings.Split(pair.Key, "/")
if parts[len(parts)-1] != "" {
result = append(result, pair)
}
}
return result, nil
}
addDependency(missing, d)
return result, nil
}
}
// vaultFunc is deprecated. Use secretFunc instead.
func vaultFunc(brain *Brain,
used, missing map[string]dep.Dependency) func(string) (*dep.Secret, error) {
return func(s string) (*dep.Secret, error) {
log.Printf("[WARN] the `vault` template function has been deprecated. " +
"Please use `secret` instead!")
return secretFunc(brain, used, missing)(s)
}
}
// byKey accepts a slice of KV pairs and returns a map of the top-level
// key to all its subkeys. For example:
//
// elasticsearch/a //=> "1"
// elasticsearch/b //=> "2"
// redis/a/b //=> "3"
//
// Passing the result from Consul through byKey would yield:
//
//   map[string]map[string]*dep.KeyPair{
//     "elasticsearch": {"a": <pair "1">, "b": <pair "2">},
//     "redis":         {"a/b": <pair "3">},
//   }
//
// Note that the top-most key is stripped from the Key value. Keys that have no
// prefix after stripping are removed from the list.
func byKey(pairs []*dep.KeyPair) (map[string]map[string]*dep.KeyPair, error) {
m := make(map[string]map[string]*dep.KeyPair)
for _, pair := range pairs {
parts := strings.Split(pair.Key, "/")
top := parts[0]
key := strings.Join(parts[1:], "/")
if key == "" {
// Do not add a key if it has no prefix after stripping.
continue
}
if _, ok := m[top]; !ok {
m[top] = make(map[string]*dep.KeyPair)
}
newPair := *pair
newPair.Key = key
m[top][key] = &newPair
}
return m, nil
}
// byTag is a template func that takes the provided services and
// produces a map based on Service tags.
//
// The map key is a string representing the service tag. The map value is a
// slice of Services which have the tag assigned.
func byTag(in interface{}) (map[string][]interface{}, error) {
m := make(map[string][]interface{})
switch typed := in.(type) {
case nil:
case []*dep.CatalogService:
for _, s := range typed {
for _, t := range s.Tags {
m[t] = append(m[t], s)
}
}
case []*dep.HealthService:
for _, s := range typed {
for _, t := range s.Tags {
m[t] = append(m[t], s)
}
}
default:
return nil, fmt.Errorf("byTag: wrong argument type %T", in)
}
return m, nil
}
// contains is a function that has the reverse argument order of "in" and is designed to
// be used as a pipe instead of a function:
//
// {{ l | contains "thing" }}
//
func contains(v, l interface{}) (bool, error) {
return in(l, v)
}
// env returns the value of the named environment variable
func env(s string) (string, error) {
return os.Getenv(s), nil
}
// explode is used to expand a list of keypairs into a deeply-nested map.
func explode(pairs []*dep.KeyPair) (map[string]interface{}, error) {
m := make(map[string]interface{})
for _, pair := range pairs {
if err := explodeHelper(m, pair.Key, pair.Value, pair.Key); err != nil {
return nil, err
}
}
return m, nil
}
// explodeHelper is a recursive helper for explode.
func explodeHelper(m map[string]interface{}, k, v, p string) error {
if strings.Contains(k, "/") {
parts := strings.Split(k, "/")
top := parts[0]
key := strings.Join(parts[1:], "/")
if _, ok := m[top]; !ok {
m[top] = make(map[string]interface{})
}
nest, ok := m[top].(map[string]interface{})
if !ok {
return fmt.Errorf("not a map: %q: %q already has value %q", p, top, m[top])
}
return explodeHelper(nest, key, v, k)
} else {
if k != "" {
m[k] = v
}
}
return nil
}
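The recursion in explodeHelper can also be written iteratively by walking down the nested maps one path segment at a time. A minimal sketch, using a plain map[string]string in place of []*dep.KeyPair; a key that already holds a leaf value cannot be descended into and is reported as an error:

```go
package main

import (
	"fmt"
	"strings"
)

// explode expands slash-delimited keys into nested maps, so "a/b/c" with
// value "1" becomes m["a"].(map)["b"].(map)["c"] == "1".
func explode(pairs map[string]string) (map[string]interface{}, error) {
	m := make(map[string]interface{})
	for k, v := range pairs {
		cur := m
		parts := strings.Split(k, "/")
		for i, p := range parts {
			if i == len(parts)-1 {
				// Final segment: store the value (skip empty trailing keys).
				if p != "" {
					cur[p] = v
				}
				break
			}
			// Intermediate segment: descend, creating maps as needed.
			if _, ok := cur[p]; !ok {
				cur[p] = make(map[string]interface{})
			}
			next, ok := cur[p].(map[string]interface{})
			if !ok {
				return nil, fmt.Errorf("not a map: %q already has value %q", p, cur[p])
			}
			cur = next
		}
	}
	return m, nil
}

func main() {
	m, err := explode(map[string]string{"redis/a/b": "3", "redis/a/c": "4"})
	fmt.Println(m, err)
}
```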
// in seaches for a given value in a given interface.
func in(l, v interface{}) (bool, error) {
lv := reflect.ValueOf(l)
vv := reflect.ValueOf(v)
switch lv.Kind() {
case reflect.Array, reflect.Slice:
// if the slice contains 'interface' elements, then the element needs to be extracted directly to examine its type,
// otherwise it will just resolve to 'interface'.
var interfaceSlice []interface{}
if reflect.TypeOf(l).Elem().Kind() == reflect.Interface {
interfaceSlice = l.([]interface{})
}
for i := 0; i < lv.Len(); i++ {
var lvv reflect.Value
if interfaceSlice != nil {
lvv = reflect.ValueOf(interfaceSlice[i])
} else {
lvv = lv.Index(i)
}
switch lvv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
switch vv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
if vv.Int() == lvv.Int() {
return true, nil
}
}
case reflect.Float32, reflect.Float64:
switch vv.Kind() {
case reflect.Float32, reflect.Float64:
if vv.Float() == lvv.Float() {
return true, nil
}
}
case reflect.String:
if vv.Type() == lvv.Type() && vv.String() == lvv.String() {
return true, nil
}
}
}
case reflect.String:
if vv.Type() == lv.Type() && strings.Contains(lv.String(), vv.String()) {
return true, nil
}
}
return false, nil
}
// loop accepts varying parameters and varies its behavior accordingly. If
// given one parameter, loop returns a channel that begins at 0 and counts up
// to (but not including) the given int, increasing by 1 each iteration. If
// given two parameters, loop returns a channel that begins at the first
// parameter and counts up to but not including the second parameter.
//
//   // Prints 0 1 2 3 4
//   for i := range loop(5) {
//     print(i)
//   }
//
//   // Prints 5 6 7
//   for i := range loop(5, 8) {
//     print(i)
//   }
//
func loop(ints ...int64) (<-chan int64, error) {
var start, stop int64
switch len(ints) {
case 1:
start, stop = 0, ints[0]
case 2:
start, stop = ints[0], ints[1]
default:
return nil, fmt.Errorf("loop: wrong number of arguments, expected 1 or 2"+
", but got %d", len(ints))
}
ch := make(chan int64)
go func() {
for i := start; i < stop; i++ {
ch <- i
}
close(ch)
}()
return ch, nil
}
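Because loop returns a channel rather than a slice, templates (and Go callers) range over it without an index. A runnable sketch of the same channel-producer pattern:

```go
package main

import "fmt"

// loopRange yields the integers in [start, stop) over a channel, closing
// the channel when the range is exhausted, as loop does above.
func loopRange(start, stop int64) <-chan int64 {
	ch := make(chan int64)
	go func() {
		for i := start; i < stop; i++ {
			ch <- i
		}
		close(ch) // ranging over ch ends once the channel is closed
	}()
	return ch
}

func main() {
	var got []int64
	for i := range loopRange(5, 8) { // range over a channel yields values, no index
		got = append(got, i)
	}
	fmt.Println(got) // [5 6 7]
}
```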
// join is a version of strings.Join that can be piped
func join(sep string, a []string) (string, error) {
return strings.Join(a, sep), nil
}
// trimSpace is a version of strings.TrimSpace that can be piped
func trimSpace(s string) (string, error) {
return strings.TrimSpace(s), nil
}
// parseBool parses a string into a boolean
func parseBool(s string) (bool, error) {
if s == "" {
return false, nil
}
result, err := strconv.ParseBool(s)
if err != nil {
return false, fmt.Errorf("parseBool: %s", err)
}
return result, nil
}
// parseFloat parses a string into a 64-bit float
func parseFloat(s string) (float64, error) {
if s == "" {
return 0.0, nil
}
result, err := strconv.ParseFloat(s, 64)
if err != nil {
return 0, fmt.Errorf("parseFloat: %s", err)
}
return result, nil
}
// parseInt parses a string into a base 10 int
func parseInt(s string) (int64, error) {
if s == "" {
return 0, nil
}
result, err := strconv.ParseInt(s, 10, 64)
if err != nil {
return 0, fmt.Errorf("parseInt: %s", err)
}
return result, nil
}
// parseJSON returns a structure for valid JSON
func parseJSON(s string) (interface{}, error) {
if s == "" {
return map[string]interface{}{}, nil
}
var data interface{}
if err := json.Unmarshal([]byte(s), &data); err != nil {
return nil, err
}
return data, nil
}
// parseUint parses a string into a base 10 unsigned int
func parseUint(s string) (uint64, error) {
if s == "" {
return 0, nil
}
result, err := strconv.ParseUint(s, 10, 64)
if err != nil {
return 0, fmt.Errorf("parseUint: %s", err)
}
return result, nil
}
// plugin executes a subprocess as the given command string with the given
// arguments. The command's trimmed stdout is returned as the value for use
// in the template.
func plugin(name string, args ...string) (string, error) {
if name == "" {
return "", nil
}
stdout, stderr := new(bytes.Buffer), new(bytes.Buffer)
// Strip and trim each arg or else some plugins get confused with the newline
// characters
jsons := make([]string, 0, len(args))
for _, arg := range args {
if v := strings.TrimSpace(arg); v != "" {
jsons = append(jsons, v)
}
}
cmd := exec.Command(name, jsons...)
cmd.Stdout = stdout
cmd.Stderr = stderr
if err := cmd.Start(); err != nil {
return "", fmt.Errorf("exec %q: %s\n\nstdout:\n\n%s\n\nstderr:\n\n%s",
name, err, stdout.Bytes(), stderr.Bytes())
}
done := make(chan error, 1)
go func() {
done <- cmd.Wait()
}()
select {
case <-time.After(5 * time.Second):
if cmd.Process != nil {
if err := cmd.Process.Kill(); err != nil {
return "", fmt.Errorf("exec %q: failed to kill", name)
}
}
<-done // Allow the goroutine to exit
return "", fmt.Errorf("exec %q: did not finish", name)
case err := <-done:
if err != nil {
return "", fmt.Errorf("exec %q: %s\n\nstdout:\n\n%s\n\nstderr:\n\n%s",
name, err, stdout.Bytes(), stderr.Bytes())
}
}
return strings.TrimSpace(stdout.String()), nil
}
// replaceAll replaces all occurrences of a value in a string with the given
// replacement value.
func replaceAll(f, t, s string) (string, error) {
return strings.Replace(s, f, t, -1), nil
}
// regexReplaceAll replaces all occurrences of a regular expression with
// the given replacement value.
func regexReplaceAll(re, pl, s string) (string, error) {
compiled, err := regexp.Compile(re)
if err != nil {
return "", err
}
return compiled.ReplaceAllString(s, pl), nil
}
// regexMatch returns true or false if the string matches
// the given regular expression
func regexMatch(re, s string) (bool, error) {
compiled, err := regexp.Compile(re)
if err != nil {
return false, err
}
return compiled.MatchString(s), nil
}
// split is a version of strings.Split that can be piped
func split(sep, s string) ([]string, error) {
s = strings.TrimSpace(s)
if s == "" {
return []string{}, nil
}
return strings.Split(s, sep), nil
}
// timestamp returns the current timestamp in UTC, formatted as RFC3339 by
// default. If an argument is given, it is used as the format; the special
// value "unix" returns the UNIX timestamp in seconds.
func timestamp(s ...string) (string, error) {
switch len(s) {
case 0:
return now().Format(time.RFC3339), nil
case 1:
if s[0] == "unix" {
return strconv.FormatInt(now().Unix(), 10), nil
}
return now().Format(s[0]), nil
default:
return "", fmt.Errorf("timestamp: wrong number of arguments, expected 0 or 1"+
", but got %d", len(s))
}
}
// toLower converts the given string (usually by a pipe) to lowercase.
func toLower(s string) (string, error) {
return strings.ToLower(s), nil
}
// toJSON converts the given structure into a deeply nested JSON string.
func toJSON(i interface{}) (string, error) {
result, err := json.Marshal(i)
if err != nil {
return "", fmt.Errorf("toJSON: %s", err)
}
return string(bytes.TrimSpace(result)), err
}
// toJSONPretty converts the given structure into a deeply nested pretty JSON
// string.
func toJSONPretty(m map[string]interface{}) (string, error) {
result, err := json.MarshalIndent(m, "", " ")
if err != nil {
return "", fmt.Errorf("toJSONPretty: %s", err)
}
return string(bytes.TrimSpace(result)), err
}
// toTitle converts the given string (usually by a pipe) to titlecase.
func toTitle(s string) (string, error) {
return strings.Title(s), nil
}
// toUpper converts the given string (usually by a pipe) to uppercase.
func toUpper(s string) (string, error) {
return strings.ToUpper(s), nil
}
// toYAML converts the given structure into a deeply nested YAML string.
func toYAML(m map[string]interface{}) (string, error) {
result, err := yaml.Marshal(m)
if err != nil {
return "", fmt.Errorf("toYAML: %s", err)
}
return string(bytes.TrimSpace(result)), nil
}
// toTOML converts the given structure into a deeply nested TOML string.
func toTOML(m map[string]interface{}) (string, error) {
buf := bytes.NewBuffer([]byte{})
enc := toml.NewEncoder(buf)
if err := enc.Encode(m); err != nil {
return "", fmt.Errorf("toTOML: %s", err)
}
result, err := ioutil.ReadAll(buf)
if err != nil {
return "", fmt.Errorf("toTOML: %s", err)
}
return string(bytes.TrimSpace(result)), nil
}
// add returns the sum of a and b.
func add(b, a interface{}) (interface{}, error) {
av := reflect.ValueOf(a)
bv := reflect.ValueOf(b)
switch av.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Int() + bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Int() + int64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return float64(av.Int()) + bv.Float(), nil
default:
return nil, fmt.Errorf("add: unknown type for %q (%T)", bv, b)
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return int64(av.Uint()) + bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Uint() + bv.Uint(), nil
case reflect.Float32, reflect.Float64:
return float64(av.Uint()) + bv.Float(), nil
default:
return nil, fmt.Errorf("add: unknown type for %q (%T)", bv, b)
}
case reflect.Float32, reflect.Float64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Float() + float64(bv.Int()), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Float() + float64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return av.Float() + bv.Float(), nil
default:
return nil, fmt.Errorf("add: unknown type for %q (%T)", bv, b)
}
default:
return nil, fmt.Errorf("add: unknown type for %q (%T)", av, a)
}
}
// subtract returns the difference of b from a.
func subtract(b, a interface{}) (interface{}, error) {
av := reflect.ValueOf(a)
bv := reflect.ValueOf(b)
switch av.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Int() - bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Int() - int64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return float64(av.Int()) - bv.Float(), nil
default:
return nil, fmt.Errorf("subtract: unknown type for %q (%T)", bv, b)
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return int64(av.Uint()) - bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Uint() - bv.Uint(), nil
case reflect.Float32, reflect.Float64:
return float64(av.Uint()) - bv.Float(), nil
default:
return nil, fmt.Errorf("subtract: unknown type for %q (%T)", bv, b)
}
case reflect.Float32, reflect.Float64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Float() - float64(bv.Int()), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Float() - float64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return av.Float() - bv.Float(), nil
default:
return nil, fmt.Errorf("subtract: unknown type for %q (%T)", bv, b)
}
default:
return nil, fmt.Errorf("subtract: unknown type for %q (%T)", av, a)
}
}
// multiply returns the product of a and b.
func multiply(b, a interface{}) (interface{}, error) {
av := reflect.ValueOf(a)
bv := reflect.ValueOf(b)
switch av.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Int() * bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Int() * int64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return float64(av.Int()) * bv.Float(), nil
default:
return nil, fmt.Errorf("multiply: unknown type for %q (%T)", bv, b)
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return int64(av.Uint()) * bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Uint() * bv.Uint(), nil
case reflect.Float32, reflect.Float64:
return float64(av.Uint()) * bv.Float(), nil
default:
return nil, fmt.Errorf("multiply: unknown type for %q (%T)", bv, b)
}
case reflect.Float32, reflect.Float64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Float() * float64(bv.Int()), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Float() * float64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return av.Float() * bv.Float(), nil
default:
return nil, fmt.Errorf("multiply: unknown type for %q (%T)", bv, b)
}
default:
return nil, fmt.Errorf("multiply: unknown type for %q (%T)", av, a)
}
}
// divide returns the result of dividing a by b. Integer division by zero
// will panic.
func divide(b, a interface{}) (interface{}, error) {
av := reflect.ValueOf(a)
bv := reflect.ValueOf(b)
switch av.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Int() / bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Int() / int64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return float64(av.Int()) / bv.Float(), nil
default:
return nil, fmt.Errorf("divide: unknown type for %q (%T)", bv, b)
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return int64(av.Uint()) / bv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Uint() / bv.Uint(), nil
case reflect.Float32, reflect.Float64:
return float64(av.Uint()) / bv.Float(), nil
default:
return nil, fmt.Errorf("divide: unknown type for %q (%T)", bv, b)
}
case reflect.Float32, reflect.Float64:
switch bv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return av.Float() / float64(bv.Int()), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return av.Float() / float64(bv.Uint()), nil
case reflect.Float32, reflect.Float64:
return av.Float() / bv.Float(), nil
default:
return nil, fmt.Errorf("divide: unknown type for %q (%T)", bv, b)
}
default:
return nil, fmt.Errorf("divide: unknown type for %q (%T)", av, a)
}
}
// addDependency adds the given Dependency to the map.
func addDependency(m map[string]dep.Dependency, d dep.Dependency) {
if _, ok := m[d.HashCode()]; !ok {
m[d.HashCode()] = d
}
}
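The arithmetic helpers above all follow the same pattern: inspect each operand's reflect.Kind, then widen both operands to a common type before applying the operator. A trimmed, self-contained sketch of that Kind-switch dispatch (only a few kinds handled here; this is an illustration, not the full helper):

```go
package main

import (
	"fmt"
	"reflect"
)

// add mirrors the reflection-based dispatch used by the template helpers:
// switch on each operand's Kind and widen to a common numeric type.
func add(a, b interface{}) (interface{}, error) {
	av, bv := reflect.ValueOf(a), reflect.ValueOf(b)
	switch av.Kind() {
	case reflect.Int, reflect.Int64:
		switch bv.Kind() {
		case reflect.Int, reflect.Int64:
			return av.Int() + bv.Int(), nil
		case reflect.Float64:
			return float64(av.Int()) + bv.Float(), nil
		}
	case reflect.Float64:
		switch bv.Kind() {
		case reflect.Int, reflect.Int64:
			return av.Float() + float64(bv.Int()), nil
		case reflect.Float64:
			return av.Float() + bv.Float(), nil
		}
	}
	// Any kind not handled above falls through to an error, just like the
	// default branches in the helpers.
	return nil, fmt.Errorf("add: unsupported kinds %s, %s", av.Kind(), bv.Kind())
}

func main() {
	sum, err := add(2, 3.5)
	if err != nil {
		panic(err)
	}
	fmt.Println(sum) // 5.5
}
```

Note that mixed int/float operations promote to float64, while int/uint pairs in the real helpers stay integral.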

View File

@@ -0,0 +1,27 @@
package watch
import (
"reflect"
"github.com/mitchellh/mapstructure"
)
// StringToWaitDurationHookFunc returns a function that converts strings to a
// Wait value. This is designed to be used with mapstructure for parsing out a
// wait value.
func StringToWaitDurationHookFunc() mapstructure.DecodeHookFunc {
return func(
f reflect.Type,
t reflect.Type,
data interface{}) (interface{}, error) {
if f.Kind() != reflect.String {
return data, nil
}
if t != reflect.TypeOf(new(Wait)) {
return data, nil
}
// Convert it by parsing
return ParseWait(data.(string))
}
}

View File

@@ -0,0 +1,218 @@
package watch
import (
"fmt"
"log"
"reflect"
"sync"
"time"
dep "github.com/hashicorp/consul-template/dependency"
)
const (
// The amount of time to do a blocking query for
defaultWaitTime = 60 * time.Second
)
// View is a representation of a Dependency and the most recent data it has
// received from Consul.
type View struct {
// Dependency is the dependency that is associated with this View
Dependency dep.Dependency
// config is the configuration for the watcher that created this view and
// contains important information about how this view should behave when
// polling including retry functions and handling stale queries.
config *WatcherConfig
// Data is the most-recently-received data from Consul for this View
dataLock sync.RWMutex
data interface{}
receivedData bool
lastIndex uint64
// stopCh is used to stop polling on this View
stopCh chan struct{}
}
// NewView creates a new view object from the given Consul API client and
// Dependency. If an error occurs, it will be returned.
func NewView(config *WatcherConfig, d dep.Dependency) (*View, error) {
if config == nil {
return nil, fmt.Errorf("view: missing config")
}
if d == nil {
return nil, fmt.Errorf("view: missing dependency")
}
return &View{
Dependency: d,
config: config,
stopCh: make(chan struct{}),
}, nil
}
// Data returns the most-recently-received data from Consul for this View.
func (v *View) Data() interface{} {
v.dataLock.RLock()
defer v.dataLock.RUnlock()
return v.data
}
// DataAndLastIndex returns the most-recently-received data from Consul for
// this view, along with the last index. This is atomic so you will get the
// index that goes with the data you are fetching.
func (v *View) DataAndLastIndex() (interface{}, uint64) {
v.dataLock.RLock()
defer v.dataLock.RUnlock()
return v.data, v.lastIndex
}
// poll queries the Consul instance for data using the fetch function, but also
// accounts for interrupts on the interrupt channel. This allows the poll
// function to be fired in a goroutine, but then halted even if the fetch
// function is in the middle of a blocking query.
func (v *View) poll(viewCh chan<- *View, errCh chan<- error) {
defaultRetry := v.config.RetryFunc(1 * time.Second)
currentRetry := defaultRetry
for {
doneCh, fetchErrCh := make(chan struct{}, 1), make(chan error, 1)
go v.fetch(doneCh, fetchErrCh)
select {
case <-doneCh:
// Reset the retry to avoid exponentially incrementing retries when we
// have some successful requests
currentRetry = defaultRetry
log.Printf("[INFO] (view) %s received data", v.display())
select {
case <-v.stopCh:
case viewCh <- v:
}
// If we are operating in once mode, do not loop - we received data at
// least once which is the API promise here.
if v.config.Once {
return
}
case err := <-fetchErrCh:
log.Printf("[ERR] (view) %s %s", v.display(), err)
// Push the error back up to the watcher
select {
case <-v.stopCh:
case errCh <- err:
}
// Sleep and retry
if v.config.RetryFunc != nil {
currentRetry = v.config.RetryFunc(currentRetry)
}
log.Printf("[INFO] (view) %s errored, retrying in %s", v.display(), currentRetry)
time.Sleep(currentRetry)
continue
case <-v.stopCh:
log.Printf("[DEBUG] (view) %s stopping poll (received on view stopCh)", v.display())
return
}
}
}
// fetch queries the Consul instance for the attached dependency. This API
// promises that either data will be written to doneCh or an error will be
// written to errCh. It is designed to be run in a goroutine that selects the
// result of doneCh and errCh. It is assumed that only one instance of fetch
// is running per View and therefore no locking or mutexes are used.
func (v *View) fetch(doneCh chan<- struct{}, errCh chan<- error) {
log.Printf("[DEBUG] (view) %s starting fetch", v.display())
var allowStale bool
if v.config.MaxStale != 0 {
allowStale = true
}
for {
// If the view was stopped, short-circuit this loop. This prevents a bug
// where a view can get "lost" in the event Consul Template is reloaded.
select {
case <-v.stopCh:
return
default:
}
opts := &dep.QueryOptions{
AllowStale: allowStale,
WaitTime: defaultWaitTime,
WaitIndex: v.lastIndex,
}
data, rm, err := v.Dependency.Fetch(v.config.Clients, opts)
if err != nil {
// ErrStopped is returned by a dependency when it prematurely stopped
// because the upstream process asked for a reload or termination. The
// most likely cause is that the view was stopped due to a configuration
// reload or process interrupt, so we do not want to propagate this error
// to the runner, but we want to stop the fetch routine for this view.
if err != dep.ErrStopped {
errCh <- err
}
return
}
if rm == nil {
errCh <- fmt.Errorf("consul returned nil response metadata; this " +
"should never happen and is probably a bug in consul-template")
return
}
if allowStale && rm.LastContact > v.config.MaxStale {
allowStale = false
log.Printf("[DEBUG] (view) %s stale data (last contact exceeded max_stale)", v.display())
continue
}
if v.config.MaxStale != 0 {
allowStale = true
}
if rm.LastIndex == v.lastIndex {
log.Printf("[DEBUG] (view) %s no new data (index was the same)", v.display())
continue
}
v.dataLock.Lock()
if rm.LastIndex < v.lastIndex {
log.Printf("[DEBUG] (view) %s had a lower index, resetting", v.display())
v.lastIndex = 0
v.dataLock.Unlock()
continue
}
v.lastIndex = rm.LastIndex
if v.receivedData && reflect.DeepEqual(data, v.data) {
log.Printf("[DEBUG] (view) %s no new data (contents were the same)", v.display())
v.dataLock.Unlock()
continue
}
v.data = data
v.receivedData = true
v.dataLock.Unlock()
close(doneCh)
return
}
}
// display returns a string that represents this view.
func (v *View) display() string {
return v.Dependency.Display()
}
// stop halts polling of this view.
func (v *View) stop() {
v.Dependency.Stop()
close(v.stopCh)
}
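The core of View.poll is the select over three channels: fetch completion, fetch error, and the stop signal. That is what lets a blocking Consul query be abandoned mid-flight. A minimal, self-contained sketch of one poll iteration under that pattern (the `fetch` here simulates the query; names are illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// fetch simulates a blocking query: it writes to exactly one of doneCh
// (success, by closing it) or errCh (failure).
func fetch(doneCh chan<- struct{}, errCh chan<- error, fail bool) {
	time.Sleep(10 * time.Millisecond)
	if fail {
		errCh <- errors.New("simulated fetch error")
		return
	}
	close(doneCh)
}

// pollOnce mirrors one iteration of View.poll: run fetch in a goroutine and
// select on its result or on the stop signal, whichever fires first.
func pollOnce(stopCh <-chan struct{}, fail bool) string {
	doneCh, errCh := make(chan struct{}, 1), make(chan error, 1)
	go fetch(doneCh, errCh, fail)
	select {
	case <-doneCh:
		return "data"
	case err := <-errCh:
		return "error: " + err.Error()
	case <-stopCh:
		return "stopped"
	}
}

func main() {
	stopCh := make(chan struct{})
	fmt.Println(pollOnce(stopCh, false)) // data
	fmt.Println(pollOnce(stopCh, true))  // error: simulated fetch error
	close(stopCh)
	fmt.Println(pollOnce(stopCh, false)) // stopped
}
```

Because stopCh is closed, the third call returns immediately even though its fetch goroutine is still sleeping; that is precisely how a reload interrupts an in-flight blocking query.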

View File

@@ -0,0 +1,87 @@
package watch
import (
"errors"
"fmt"
"strings"
"time"
)
// Wait is the Min/Max duration used by the Watcher
type Wait struct {
// Min and Max are the minimum and maximum time, respectively, to wait for
// data changes before rendering a new template to disk.
Min time.Duration `json:"min" mapstructure:"min"`
Max time.Duration `json:"max" mapstructure:"max"`
}
// ParseWait parses a string of the format `minimum(:maximum)` into a Wait
// struct.
func ParseWait(s string) (*Wait, error) {
if len(strings.TrimSpace(s)) < 1 {
return nil, errors.New("cannot specify empty wait interval")
}
parts := strings.Split(s, ":")
var min, max time.Duration
var err error
if len(parts) == 1 {
min, err = time.ParseDuration(strings.TrimSpace(parts[0]))
if err != nil {
return nil, err
}
max = 4 * min
} else if len(parts) == 2 {
min, err = time.ParseDuration(strings.TrimSpace(parts[0]))
if err != nil {
return nil, err
}
max, err = time.ParseDuration(strings.TrimSpace(parts[1]))
if err != nil {
return nil, err
}
} else {
return nil, errors.New("invalid wait interval format")
}
if min < 0 || max < 0 {
return nil, errors.New("cannot specify a negative wait interval")
}
if max < min {
return nil, errors.New("wait interval max must be larger than min")
}
return &Wait{min, max}, nil
}
// IsActive returns true if this wait is active (non-zero).
func (w *Wait) IsActive() bool {
return w.Min != 0 && w.Max != 0
}
// WaitVar implements the Flag.Value interface and allows the user to specify
// a watch interval using Go's flag parsing library.
type WaitVar Wait
// Set sets the value in the format min[:max] for a wait timer.
func (w *WaitVar) Set(value string) error {
wait, err := ParseWait(value)
if err != nil {
return err
}
w.Min = wait.Min
w.Max = wait.Max
return nil
}
// String returns the string format for this wait variable
func (w *WaitVar) String() string {
return fmt.Sprintf("%s:%s", w.Min, w.Max)
}

View File

@@ -0,0 +1,211 @@
package watch
import (
"fmt"
"log"
"sync"
"time"
dep "github.com/hashicorp/consul-template/dependency"
)
// RetryFunc is a function that defines the retry for a given watcher. The
// function parameter is the current retry (which might be zero), and the
// return value is the new retry. In this way, you can build complex retry
// functions that are based off the previous values.
type RetryFunc func(time.Duration) time.Duration
// DefaultRetryFunc is the default retry function, which simply echoes
// whatever duration it was given.
var DefaultRetryFunc RetryFunc = func(t time.Duration) time.Duration {
return t
}
// dataBufferSize is the default number of views to process in a batch.
const dataBufferSize = 2048
// Watcher is a top-level manager for views that poll Consul for data.
type Watcher struct {
sync.Mutex
// DataCh is the chan where Views will be published.
DataCh chan *View
// ErrCh is the chan where any errors will be published.
ErrCh chan error
// config is the internal configuration of this watcher.
config *WatcherConfig
// depViewMap is a map of Templates to Views. Templates are keyed by
// HashCode().
depViewMap map[string]*View
}
// WatcherConfig is the configuration for a particular Watcher.
type WatcherConfig struct {
// Clients is the mechanism for communicating with the Consul API.
Clients *dep.ClientSet
// Once is used to determine if the views should poll for data exactly once.
Once bool
// MaxStale is the maximum staleness of a query. If specified, Consul will
// distribute work among all servers instead of just the leader. Specifying
// this option assumes the use of AllowStale.
MaxStale time.Duration
// RetryFunc is a RetryFunc that represents the way retries and backoffs
// should occur.
RetryFunc RetryFunc
// RenewVault determines if the watcher should renew the Vault token as a
// background job.
RenewVault bool
}
// NewWatcher creates a new watcher using the given API client.
func NewWatcher(config *WatcherConfig) (*Watcher, error) {
watcher := &Watcher{config: config}
if err := watcher.init(); err != nil {
return nil, err
}
return watcher, nil
}
// Add adds the given dependency to the list of monitored dependencies
// and starts the associated view. If the dependency already exists, no action is
// taken.
//
// If the Dependency already existed, this function will return false. If the
// view was successfully created, it will return true. If an error occurs while
// creating the view, it will be returned here (but future errors returned by
// the view will happen on the channel).
func (w *Watcher) Add(d dep.Dependency) (bool, error) {
w.Lock()
defer w.Unlock()
log.Printf("[INFO] (watcher) adding %s", d.Display())
if _, ok := w.depViewMap[d.HashCode()]; ok {
log.Printf("[DEBUG] (watcher) %s already exists, skipping", d.Display())
return false, nil
}
v, err := NewView(w.config, d)
if err != nil {
return false, err
}
log.Printf("[DEBUG] (watcher) %s starting", d.Display())
w.depViewMap[d.HashCode()] = v
go v.poll(w.DataCh, w.ErrCh)
return true, nil
}
// Watching determines if the given dependency is being watched.
func (w *Watcher) Watching(d dep.Dependency) bool {
w.Lock()
defer w.Unlock()
_, ok := w.depViewMap[d.HashCode()]
return ok
}
// ForceWatching is used to force setting the internal state of watching
// a dependency. This is only used for unit testing purposes.
func (w *Watcher) ForceWatching(d dep.Dependency, enabled bool) {
w.Lock()
defer w.Unlock()
if enabled {
w.depViewMap[d.HashCode()] = nil
} else {
delete(w.depViewMap, d.HashCode())
}
}
// Remove removes the given dependency from the list and stops the
// associated View. If a View for the given dependency does not exist, this
// function will return false. If the View does exist, this function will return
// true upon successful deletion.
func (w *Watcher) Remove(d dep.Dependency) bool {
w.Lock()
defer w.Unlock()
log.Printf("[INFO] (watcher) removing %s", d.Display())
if view, ok := w.depViewMap[d.HashCode()]; ok {
log.Printf("[DEBUG] (watcher) actually removing %s", d.Display())
view.stop()
delete(w.depViewMap, d.HashCode())
return true
}
log.Printf("[DEBUG] (watcher) %s did not exist, skipping", d.Display())
return false
}
// Size returns the number of views this watcher is watching.
func (w *Watcher) Size() int {
w.Lock()
defer w.Unlock()
return len(w.depViewMap)
}
// Stop halts this watcher and any currently polling views immediately. If a
// view was in the middle of a poll, no data will be returned.
func (w *Watcher) Stop() {
w.Lock()
defer w.Unlock()
log.Printf("[INFO] (watcher) stopping all views")
for _, view := range w.depViewMap {
if view == nil {
continue
}
log.Printf("[DEBUG] (watcher) stopping %s", view.Dependency.Display())
view.stop()
}
// Reset the map to have no views
w.depViewMap = make(map[string]*View)
// Close any idle TCP connections
w.config.Clients.Stop()
}
// init sets up the initial values for the watcher.
func (w *Watcher) init() error {
if w.config == nil {
return fmt.Errorf("watcher: missing config")
}
if w.config.RetryFunc == nil {
w.config.RetryFunc = DefaultRetryFunc
}
// Setup the channels
w.DataCh = make(chan *View, dataBufferSize)
w.ErrCh = make(chan error)
// Setup our map of dependencies to views
w.depViewMap = make(map[string]*View)
// Start a watcher for the Vault renew if that config was specified
if w.config.RenewVault {
vt, err := dep.ParseVaultToken()
if err != nil {
return fmt.Errorf("watcher: %s", err)
}
if _, err := w.Add(vt); err != nil {
return fmt.Errorf("watcher: %s", err)
}
}
return nil
}
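The deduplication in Watcher.Add hinges on dependencies being keyed by a stable HashCode. A stripped-down, self-contained sketch of that map-keyed dedup (the `dependency` and `watcher` types here are stand-ins, not the package's types):

```go
package main

import "fmt"

// dependency is a stand-in for dep.Dependency: watched items are keyed by a
// stable HashCode so duplicates are only added once.
type dependency struct{ name string }

func (d dependency) HashCode() string { return "dep(" + d.name + ")" }

type watcher struct{ depViewMap map[string]dependency }

// Add mirrors Watcher.Add's contract: it reports true only when the
// dependency was not already being watched.
func (w *watcher) Add(d dependency) bool {
	if _, ok := w.depViewMap[d.HashCode()]; ok {
		return false
	}
	w.depViewMap[d.HashCode()] = d
	return true
}

func main() {
	w := &watcher{depViewMap: make(map[string]dependency)}
	fmt.Println(w.Add(dependency{"service.web"})) // true
	fmt.Println(w.Add(dependency{"service.web"})) // false
	fmt.Println(len(w.depViewMap))                // 1
}
```

In the real Watcher the map value is the *View started for the dependency, and Remove stops that view before deleting the entry.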

47
vendor/github.com/mattn/go-shellwords/README.md generated vendored Normal file
View File

@@ -0,0 +1,47 @@
# go-shellwords
[![Coverage Status](https://coveralls.io/repos/mattn/go-shellwords/badge.png?branch=master)](https://coveralls.io/r/mattn/go-shellwords?branch=master)
[![Build Status](https://travis-ci.org/mattn/go-shellwords.svg?branch=master)](https://travis-ci.org/mattn/go-shellwords)
Parse a line as shell words.
## Usage
```go
args, err := shellwords.Parse("./foo --bar=baz")
// args should be ["./foo", "--bar=baz"]
```
```go
os.Setenv("FOO", "bar")
p := shellwords.NewParser()
p.ParseEnv = true
args, err := p.Parse("./foo $FOO")
// args should be ["./foo", "bar"]
```
```go
p := shellwords.NewParser()
p.ParseBacktick = true
args, err := p.Parse("./foo `echo $SHELL`")
// args should be ["./foo", "/bin/bash"]
```
```go
shellwords.ParseBacktick = true
p := shellwords.NewParser()
args, err := p.Parse("./foo `echo $SHELL`")
// args should be ["./foo", "/bin/bash"]
```
# Thanks
This is based on the CPAN module [Parse::CommandLine](https://metacpan.org/pod/Parse::CommandLine).
# License
under the MIT License: http://mattn.mit-license.org/2014
# Author
Yasuhiro Matsumoto (a.k.a mattn)

134
vendor/github.com/mattn/go-shellwords/shellwords.go generated vendored Normal file
View File

@@ -0,0 +1,134 @@
package shellwords
import (
"errors"
"os"
"regexp"
"strings"
)
var (
ParseEnv bool = false
ParseBacktick bool = false
)
var envRe = regexp.MustCompile(`\$({[a-zA-Z0-9_]+}|[a-zA-Z0-9_]+)`)
func isSpace(r rune) bool {
switch r {
case ' ', '\t', '\r', '\n':
return true
}
return false
}
func replaceEnv(s string) string {
return envRe.ReplaceAllStringFunc(s, func(s string) string {
s = s[1:]
if s[0] == '{' {
s = s[1 : len(s)-1]
}
return os.Getenv(s)
})
}
type Parser struct {
ParseEnv bool
ParseBacktick bool
}
func NewParser() *Parser {
return &Parser{ParseEnv, ParseBacktick}
}
func (p *Parser) Parse(line string) ([]string, error) {
line = strings.TrimSpace(line)
args := []string{}
buf := ""
var escaped, doubleQuoted, singleQuoted, backQuote bool
backtick := ""
for _, r := range line {
if escaped {
buf += string(r)
escaped = false
continue
}
if r == '\\' {
if singleQuoted {
buf += string(r)
} else {
escaped = true
}
continue
}
if isSpace(r) {
if singleQuoted || doubleQuoted || backQuote {
buf += string(r)
backtick += string(r)
} else if buf != "" {
if p.ParseEnv {
buf = replaceEnv(buf)
}
args = append(args, buf)
buf = ""
}
continue
}
switch r {
case '`':
if !singleQuoted && !doubleQuoted {
if p.ParseBacktick {
if backQuote {
out, err := shellRun(backtick)
if err != nil {
return nil, err
}
buf = out
}
backtick = ""
backQuote = !backQuote
continue
}
backtick = ""
backQuote = !backQuote
}
case '"':
if !singleQuoted {
doubleQuoted = !doubleQuoted
continue
}
case '\'':
if !doubleQuoted {
singleQuoted = !singleQuoted
continue
}
}
buf += string(r)
if backQuote {
backtick += string(r)
}
}
if buf != "" {
if p.ParseEnv {
buf = replaceEnv(buf)
}
args = append(args, buf)
}
if escaped || singleQuoted || doubleQuoted || backQuote {
return nil, errors.New("invalid command line string")
}
return args, nil
}
func Parse(line string) ([]string, error) {
return NewParser().Parse(line)
}

19
vendor/github.com/mattn/go-shellwords/util_posix.go generated vendored Normal file
View File

@@ -0,0 +1,19 @@
// +build !windows
package shellwords
import (
"errors"
"os"
"os/exec"
"strings"
)
func shellRun(line string) (string, error) {
shell := os.Getenv("SHELL")
b, err := exec.Command(shell, "-c", line).Output()
if err != nil {
return "", errors.New(err.Error() + ":" + string(b))
}
return strings.TrimSpace(string(b)), nil
}

17
vendor/github.com/mattn/go-shellwords/util_windows.go generated vendored Normal file
View File

@@ -0,0 +1,17 @@
package shellwords
import (
"errors"
"os"
"os/exec"
"strings"
)
func shellRun(line string) (string, error) {
shell := os.Getenv("COMSPEC")
b, err := exec.Command(shell, "/c", line).Output()
if err != nil {
return "", errors.New(err.Error() + ":" + string(b))
}
return strings.TrimSpace(string(b)), nil
}

62
vendor/vendor.json vendored
View File

@@ -181,6 +181,12 @@
"path": "github.com/boltdb/bolt",
"revision": "c6ba97b89e0454fec9aa92e1d33a4e2c5fc1f631"
},
{
"checksumSHA1": "InIrfOI7Ys1QqZpCgTB4yW1G32M=",
"path": "github.com/burntsushi/toml",
"revision": "99064174e013895bbd9b025c31100bd1d9b590ca",
"revisionTime": "2016-07-17T15:07:09Z"
},
{
"checksumSHA1": "qNHg4l2+bg50XQUR094857jPOoQ=",
"path": "github.com/circonus-labs/circonus-gometrics",
@@ -309,7 +315,7 @@
"revision": "02caa73df411debed164f520a6a1304778f8b88c",
"revisionTime": "2016-05-28T10:48:36Z"
},
{
{
"checksumSHA1": "BwY1agr5vzusgMT4bM7UeD3YVpw=",
"path": "github.com/docker/engine-api/types/mount",
"revision": "4290f40c056686fcaa5c9caf02eac1dde9315adf",
@@ -476,6 +482,48 @@
"path": "github.com/gorhill/cronexpr/cronexpr",
"revision": "a557574d6c024ed6e36acc8b610f5f211c91568a"
},
{
"checksumSHA1": "+JUQvWp1JUVeRT5weWL9hi6Fu4Y=",
"path": "github.com/hashicorp/consul-template/child",
"revision": "a8f654d612969519c9fde20bc8eb21418d763f73",
"revisionTime": "2016-10-03T19:46:06Z"
},
{
"checksumSHA1": "tSuVPDoqSzoWmo2oEF5NGkIJHxQ=",
"path": "github.com/hashicorp/consul-template/config",
"revision": "a8f654d612969519c9fde20bc8eb21418d763f73",
"revisionTime": "2016-10-03T19:46:06Z"
},
{
"checksumSHA1": "3xeTTZejqagwfwaYT8M3rq1Ixko=",
"path": "github.com/hashicorp/consul-template/dependency",
"revision": "a8f654d612969519c9fde20bc8eb21418d763f73",
"revisionTime": "2016-10-03T19:46:06Z"
},
{
"checksumSHA1": "Z1QKRRJ/6/FjRIw1LJXTtOUxxX8=",
"path": "github.com/hashicorp/consul-template/manager",
"revision": "a8f654d612969519c9fde20bc8eb21418d763f73",
"revisionTime": "2016-10-03T19:46:06Z"
},
{
"checksumSHA1": "ByMIKPf7bXpyhhy80IjKLKYrjpo=",
"path": "github.com/hashicorp/consul-template/signals",
"revision": "a8f654d612969519c9fde20bc8eb21418d763f73",
"revisionTime": "2016-10-03T19:46:06Z"
},
{
"checksumSHA1": "pvmWk53vtXCCL6p+c3XU2aQfkx4=",
"path": "github.com/hashicorp/consul-template/template",
"revision": "a8f654d612969519c9fde20bc8eb21418d763f73",
"revisionTime": "2016-10-03T19:46:06Z"
},
{
"checksumSHA1": "sR2e74n4zQEUDR6ssjvMr9lP7sI=",
"path": "github.com/hashicorp/consul-template/watch",
"revision": "a8f654d612969519c9fde20bc8eb21418d763f73",
"revisionTime": "2016-10-03T19:46:06Z"
},
{
"comment": "v0.6.3-363-gae32a3c",
"path": "github.com/hashicorp/consul/api",
@@ -695,6 +743,12 @@
"path": "github.com/mattn/go-isatty",
"revision": "56b76bdf51f7708750eac80fa38b952bb9f32639"
},
{
"checksumSHA1": "ajImwVZHzsI+aNwsgzCSFSbmJqs=",
"path": "github.com/mattn/go-shellwords",
"revision": "f4e566c536cf69158e808ec28ef4182a37fdc981",
"revisionTime": "2015-03-21T17:42:21Z"
},
{
"path": "github.com/miekg/dns",
"revision": "7e024ce8ce18b21b475ac6baf8fa3c42536bf2fa"
@@ -896,6 +950,12 @@
"path": "gopkg.in/tomb.v2",
"revision": "14b3d72120e8d10ea6e6b7f87f7175734b1faab8",
"revisionTime": "2014-06-26T14:46:23Z"
},
{
"checksumSHA1": "12GqsW8PiRPnezDDy0v4brZrndM=",
"path": "gopkg.in/yaml.v2",
"revision": "a5b47d31c556af34a302ce5d659e6fea44d90de0",
"revisionTime": "2016-09-28T15:37:09Z"
}
],
"rootPath": "github.com/hashicorp/nomad"