nomad/enos/modules/upgrade_client/scripts/get_expected_allocs.sh
Tim Gross 6c9f2fdd29 reduce upgrade testing flakes (#25839)
This changeset includes several adjustments to the upgrade testing scripts to
reduce flakes and make problems more understandable:

* When a node is drained prior to the 3rd client upgrade, it's entirely
  possible that the 3rd client to be upgraded is the drained node. This results
  in miscounting the expected number of allocations, because many of them will
  be "complete" (service/batch) or "pending" (system). Leave the system jobs
  running during drains and count only the allocations running at that point as
  the expected set. Move the inline script that gets this count into a script
  file for legibility.

* When the last initial workload is deployed, it's possible for it to still be
  briefly in "pending" when we move to the next step. Poll for a short window
  until we see the expected count of jobs (see the polling sketch after this
  list).

* Make sure that any scripts run right after a server or client comes back up
  can handle temporary unavailability gracefully (see the retry sketch after
  this list).

* Change the debugging output of several scripts to avoid having the debug
  output run into the error message. For example, "some allocs are not running"
  followed immediately by the debug listing made the first running allocation
  look like the missing one (see the stderr illustration after this list).

* Add some notes to the README about running locally with `-dev` builds and
  tagging a cluster with your own name.
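
Below is a minimal sketch of the polling pattern described above. The expected
count, retry budget, and SERVER_IP variable are illustrative assumptions, not
the actual Enos step: it polls the jobs API until the expected number of jobs
report "running".

# Sketch only: EXPECTED_JOBS, SERVER_IP, and the retry budget are
# hypothetical; the real step lives in the Enos upgrade scenario.
EXPECTED_JOBS=3
running=0
for _ in $(seq 1 10); do
    # a freshly-deployed job can briefly report "pending" before its
    # allocations are placed, so keep checking for a short window
    running=$(curl -fsS "https://${SERVER_IP}:4646/v1/jobs" \
        | jq '[.[] | select(.Status == "running")] | length')
    if [[ "$running" -ge "$EXPECTED_JOBS" ]]; then
        break
    fi
    sleep 2
done
if [[ "$running" -lt "$EXPECTED_JOBS" ]]; then
    echo "expected ${EXPECTED_JOBS} running jobs, found ${running}" 1>&2
    exit 1
fi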
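
Next, a sketch of the graceful-unavailability handling: a small retry wrapper
around any nomad CLI call that runs while an agent is still restarting. The
function name and attempt budget are hypothetical, not the helper the
changeset actually adds.

# Sketch only: retry() and its budget are hypothetical.
retry() {
    local attempts=10
    until "$@"; do
        attempts=$((attempts - 1))
        if [[ "$attempts" -le 0 ]]; then
            echo "failed after retries: $*" 1>&2
            return 1
        fi
        sleep 2 # the agent may still be coming back up after the upgrade
    done
}

# example: the client API can briefly refuse connections after a restart
retry nomad node status -address="https://${CLIENT_IP}:4646" -self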
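
Finally, an illustration of the debug-output fix: terminate the error message
with a newline and send the diagnostic listing to stderr on its own lines, so
the first running allocation can no longer be mistaken for part of the error
message. This is illustrative, not the changeset's exact output code.

# Illustration only, not the changeset's exact output code
printf 'Error: some allocs are not running\n' 1>&2
# debug listing goes to stderr too, on its own lines, so it cannot be
# read as a continuation of the error message
nomad alloc status -json \
    | jq -r '.[] | select(.ClientStatus == "running") | .ID' 1>&2
exit 1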

Ref: https://hashicorp.atlassian.net/browse/NMD-162
2025-05-13 08:40:22 -04:00

25 lines
981 B
Bash

#!/usr/bin/env bash
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: BUSL-1.1

set -euo pipefail

# note: the stdout from this script gets read in as JSON to a later step, so
# it's critical we only emit other text if we're failing anyways

error_exit() {
    printf 'Error: %s' "${1}"
    exit 1
}

# we have a client IP and not a node ID, so query that node via 'node status
# -self' to get its ID
NODE_ID=$(nomad node status \
    -allocs -address="https://${CLIENT_IP}:4646" -self -json | jq -r '.ID')

# dump the allocs for this node only, keeping only client-relevant data and not
# the full jobspec. We only want the running allocations because we might have
# previously drained this node, which will mess up our expected counts.
nomad alloc status -json | \
    jq -r --arg NODE_ID "$NODE_ID" \
    '[ .[] | select(.NodeID == $NODE_ID and .ClientStatus == "running") | {ID: .ID, Name: .Name, ClientStatus: .ClientStatus, TaskStates: .TaskStates}]'
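
For context, this script's stdout is read as JSON by a later step. A
hypothetical consumer might derive the expected allocation count like this
(the variable names are assumptions, not the actual Enos step):

# Hypothetical consumer of the script above: capture the running allocs
# and their count for comparison after the upgrade completes.
expected_allocs=$(./get_expected_allocs.sh)
expected_count=$(echo "$expected_allocs" | jq 'length')
echo "expecting ${expected_count} running allocs after the upgrade" 1>&2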