---
layout: docs
page_title: group block in the job specification
description: |-
  Define a series of co-located tasks in the `group` block of the Nomad job
  specification. Specify constraints, affinities, spreads, number of instances,
  Consul settings, ephemeral disks, disconnect strategy, metadata, migration
  strategy, network requirements, rescheduling strategy, restart policy, service
  discovery, shutdown delay, task update strategy, Vault policies, and required
  volumes. Review count, constraints, metadata, network, and service discovery
  examples.
---

# `group` block in the job specification

The `group` block defines a series of tasks that should be co-located on the
same Nomad client. Any [task][] within a group will be placed on the same
client.

```hcl
job "docs" {
  group "example" {
    # ...
  }
}
```

## Parameters

- `constraint` ([Constraint][]: nil) - This can be provided multiple times to
  define additional constraints.

- `affinity` ([Affinity][]: nil) - This can be provided multiple times to
  define preferred placement criteria.

- `spread` ([Spread][spread]: nil) - This can be provided multiple times to
  define criteria for spreading allocations across a node attribute or
  metadata. See the [Nomad spread reference](/nomad/docs/job-specification/spread)
  for more details.

- `count` `(int)` - Specifies the number of instances that should be running
  under this group. This value must be non-negative. This defaults to the
  `min` value specified in the [`scaling`](/nomad/docs/job-specification/scaling)
  block, if present; otherwise, this defaults to `1`.

- `consul` ([Consul][consul]: nil) - Specifies Consul configuration options
  specific to the group. These options will be applied to all tasks and
  services in the group unless a task has its own `consul` block.

- `ephemeral_disk` ([EphemeralDisk][]: nil) - Specifies the ephemeral disk
  requirements of the group. Ephemeral disks can be marked as sticky and
  support live data migrations.

- `disconnect` ([disconnect][]: nil) - Specifies the disconnect strategy for
  the server and client for all tasks in this group in case of a network
  partition. The tasks can be left unconnected, stopped, or replaced when the
  client disconnects. The policy for reconciliation once the client regains
  connectivity is also specified here.

- `meta` ([Meta][]: nil) - Specifies a key-value map that annotates the group
  with user-defined metadata.

- `migrate` ([Migrate][]: nil) - Specifies the group strategy for migrating
  off of draining nodes. Only service jobs with a count greater than 1 support
  migrate blocks.

- `network` ([Network][]: <optional>) - Specifies the network requirements and
  configuration, including static and dynamic port allocations, for the group.

- `reschedule` ([Reschedule][]: nil) - Specifies a rescheduling strategy.
  Nomad will attempt to schedule the task on another node if any of the group
  allocation statuses become "failed".

- `restart` ([Restart][]: nil) - Specifies the restart policy for all tasks in
  this group. If omitted, a default policy exists for each job type, which can
  be found in the [restart block documentation][restart].

- `service` ([Service][]: nil) - Specifies integrations with Nomad or
  [Consul](/nomad/docs/configuration/consul) for service discovery. Nomad
  automatically registers each service when an allocation is started and
  de-registers them when the allocation is destroyed.

- `shutdown_delay` `(string: "0s")` - Specifies the duration to wait when
  stopping a group's tasks. The delay occurs between Consul or Nomad service
  deregistration and sending each task a shutdown signal. Ideally, services
  would fail health checks once they receive a shutdown signal. Alternatively,
  `shutdown_delay` may be set to give in-flight requests time to complete
  before shutting down. A group-level `shutdown_delay` runs whether or not the
  group defines any [services](/nomad/docs/job-specification/group#service),
  but it only applies to those group services. In addition, tasks may have
  their own [`shutdown_delay`](/nomad/docs/job-specification/task#shutdown_delay),
  which waits between deregistering task services and stopping the task.

- `task` ([Task][]: <required>) - Specifies one or more tasks to run within
  this group. This can be specified multiple times to add a task as part of
  the group.

- `update` ([Update][update]: nil) - Specifies the task's update strategy.
  When omitted, a default update strategy is applied.

- `vault` ([Vault][]: nil) - Specifies the set of Vault policies required by
  all tasks in this group. Overrides a `vault` block set at the `job` level.

- `volume` ([Volume][]: nil) - Specifies the volumes that are required by
  tasks within the group.
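Many of these parameters are blocks that nest directly under `group`. The
following minimal sketch combines a few of them in one group; the task driver,
image, and values are illustrative assumptions rather than recommendations:

```hcl
group "example" {
  count = 3

  # Request a sticky ephemeral disk that migrates with the allocation.
  ephemeral_disk {
    size    = 300
    sticky  = true
    migrate = true
  }

  # Restart policy shared by all tasks in this group.
  restart {
    attempts = 2
    interval = "30m"
    delay    = "15s"
    mode     = "fail"
  }

  # Wait 10 seconds between service deregistration and task shutdown.
  shutdown_delay = "10s"

  task "server" {
    driver = "docker"

    config {
      # Placeholder image; any long-running service works here.
      image = "hashicorp/http-echo"
    }
  }
}
```

Because group-level blocks such as `restart` apply to every task in the group,
they only need to be declared once.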
## Examples

The following examples only show the `group` blocks. Remember that the
`group` block is only valid in the context of a [job][].

### Specifying count

This example specifies that 5 instances of the tasks within this group should
be running:

```hcl
group "example" {
  count = 5
}
```

### Tasks with constraint

This example shows two abbreviated tasks with a constraint on the group. This
restricts the tasks to nodes with a 64-bit (amd64) CPU architecture.

```hcl
group "example" {
  constraint {
    attribute = "${attr.cpu.arch}"
    value     = "amd64"
  }

  task "cache" {
    # ...
  }

  task "server" {
    # ...
  }
}
```

### Metadata

This example shows arbitrary user-defined metadata on the group:

```hcl
group "example" {
  meta {
    my-key = "my-value"
  }
}
```

### Network

This example shows network requirements as specified in the [network][]
block. It uses the `bridge` networking mode, dynamically allocates two ports,
and statically allocates one port:

```hcl
group "example" {
  network {
    mode = "bridge"

    port "http" {}
    port "https" {}

    port "lb" {
      static = 8889
    }
  }
}
```

### Service discovery

This example creates a service in Consul. To read more about service discovery
in Nomad, please see the [Nomad service discovery
documentation][service_discovery].

```hcl
group "example" {
  network {
    port "api" {}
  }

  service {
    name = "example"
    port = "api"
    tags = ["default"]

    check {
      type     = "tcp"
      interval = "10s"
      timeout  = "2s"
    }
  }

  task "api" {
    # ...
  }
}
```
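The example above uses Consul, the default service discovery provider. As a
minimal variation on that example, setting `provider = "nomad"` on the
`service` block registers the service with Nomad's built-in service discovery
instead; the names and values are otherwise placeholders:

```hcl
group "example" {
  network {
    port "api" {}
  }

  service {
    # Register with Nomad's built-in service catalog instead of Consul.
    provider = "nomad"
    name     = "example"
    port     = "api"
    tags     = ["default"]

    check {
      type     = "tcp"
      interval = "10s"
      timeout  = "2s"
    }
  }

  task "api" {
    # ...
  }
}
```

See the [Service][] documentation for provider-specific options.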
[task]: /nomad/docs/job-specification/task 'Nomad task Job Specification'
[job]: /nomad/docs/job-specification/job 'Nomad job Job Specification'
[constraint]: /nomad/docs/job-specification/constraint 'Nomad constraint Job Specification'
[consul]: /nomad/docs/job-specification/consul
[consul_namespace]: /nomad/docs/commands/job/run#consul-namespace
[spread]: /nomad/docs/job-specification/spread 'Nomad spread Job Specification'
[affinity]: /nomad/docs/job-specification/affinity 'Nomad affinity Job Specification'
[ephemeraldisk]: /nomad/docs/job-specification/ephemeral_disk 'Nomad ephemeral_disk Job Specification'
[`heartbeat_grace`]: /nomad/docs/configuration/server#heartbeat_grace
[`disable_rescheduling`]: /nomad/docs/job-specification/reschedule#disabling-rescheduling
[meta]: /nomad/docs/job-specification/meta 'Nomad meta Job Specification'
[migrate]: /nomad/docs/job-specification/migrate 'Nomad migrate Job Specification'
[network]: /nomad/docs/job-specification/network 'Nomad network Job Specification'
[reschedule]: /nomad/docs/job-specification/reschedule 'Nomad reschedule Job Specification'
[disconnect]: /nomad/docs/job-specification/disconnect 'Nomad disconnect Job Specification'
[restart]: /nomad/docs/job-specification/restart 'Nomad restart Job Specification'
[service]: /nomad/docs/job-specification/service 'Nomad service Job Specification'
[service_discovery]: /nomad/docs/integrations/consul-integration#service-discovery 'Nomad Service Discovery'
[update]: /nomad/docs/job-specification/update 'Nomad update Job Specification'
[vault]: /nomad/docs/job-specification/vault 'Nomad vault Job Specification'
[volume]: /nomad/docs/job-specification/volume 'Nomad volume Job Specification'
[`consul.name`]: /nomad/docs/configuration/consul#name
[disconect_migration]: /nomad/docs/job-specification/group#migration_to_disconnect_block