mirror of https://github.com/kemko/nomad.git — synced 2026-01-01 16:05:42 +03:00
* [ui] Service job status panel (#16134): it begins; Hacky demo enabled; Still very hacky but seems deece; Floor of at least 3 must be shown; Width from on-high; Other statuses considered; More sensible allocTypes listing; Beginnings of a legend; Total number of allocs running now maps over job.groups; Lintfix; base the number of slots to hold open on actual tallies, which should never exceed totalAllocs; Versions get yer versions here; Versions lookin like versions; Mirage fixup; Adds Remaining as an alloc chart status and adds historical status option; Get tests passing again by making job status static for a sec; Historical status panel click actions moved into their own component class; job detail tests plz chill; Testing if percy is fickle; Hyper-specfic on summary distribution bar identifier; Perhaps the 2nd allocSummary item no longer exists with the more accurate afterCreate data; UI Test eschewing the page pattern; Bones of a new acceptance test; Track width changes explicitly with window-resize; testlintfix; Alloc counting tests; Alloc grouping test; Alloc grouping with complex resizing; Refined the list of showable statuses; PR feedback addressed; renamed allocation-row to allocation-status-row
* [ui, job status] Make panel status mode a queryParam (#16345): queryParam changing; Test for QP in panel; Adding @tracked to legacy controller; Move the job of switching to Historical out to larger context; integration test mock passed func
* [ui] Service job deployment status panel (#16383): A very fast and loose deployment panel; Removing Unknown status from the panel; Set up oldAllocs list in constructor, rather than as a getter/tracked var; Small amount of template cleanup; Refactored latest-deployment new logic back into panel.js; Revert now-unused latest-deployment component; margin bottom when ungrouped also; Basic integration tests for job deployment status panel
* Updates complete alloc colour to green for new visualizations only (#16618): Pale green instead of dark green for viz in general
* [ui] Job Deployment Status: History and Update Props (#16518): Deployment history wooooooo; Styled deployment history; Update Params; lintfix; Types and groups for updateParams; Live-updating history; Harden with types, error states, and pending states; Refactor updateParams to use trigger component
* [ui] Deployment History search (#16608): Functioning searchbox; Some nice animations for history items; History search test; Fixing up some old mirage conventions; some a11y rule override to account for scss keyframes; Split panel into deploying and steady components; HandleError passed from job index; gridified panel elements; TotalAllocs added to deploying.js; Width perc to px
* [ui] Splitting deployment allocs by status, health, and canary status (#16766): Initial attempt with lots of scratchpad work; Style mods per UI discussion; Fix canary overflow bug; Dont show canary or health for steady/prev-alloc blocks; Steady state; Thanks Julie; Fixes steady-state versions; Legen, wait for it...; Test fixes now that we have a minimum block size; PR prep
* Shimmer effect on pending and unplaced allocs (#16801): Shimmer effect on pending and unplaced; Dont show animation in the legend
* [ui, deployments] Linking allocblocks and legends to allocation / allocations index routes (#16821): Conditional link-to component and basic linking to allocations and allocation routes; Job versions filter added to allocations index page; Steady state legends link; Legend links; Badge count links for versions; Fix: faded class on steady-state legend items; version link now wont show completed ones; Fix a11y violations with link labels; Combining some template conditional logic
* [ui, deployments] Conversions on long nanosecond update params (#16882): Conversions on long nanosecond nums; Early return in updateParamGroups comp prop
* [ui, deployments] Mirage Actively Deploying Job and Deployment Integration Tests (#16888): Start of deployment alloc test scaffolding; Bit of test cleanup and canary for ungrouped allocs; Flakey but more robust integrations for deployment panel; De-flake acceptance tests and add an actively deploying job to mirage; Jitter-less alloc status distribution removes my bad math; bugfix caused by summary.desiredTotal non-null; More interesting mirage active deployment alloc breakdown; Further tests for previous-allocs row; Previous alloc legend tests; Percy snapshots added to integration test; changelog
186 lines
5.5 KiB
JavaScript
// @ts-check
import Component from '@glimmer/component';
import { task } from 'ember-concurrency';
import { tracked } from '@glimmer/tracking';
import { alias } from '@ember/object/computed';
import messageFromAdapterError from 'nomad-ui/utils/message-from-adapter-error';

export default class JobStatusPanelDeployingComponent extends Component {
  @alias('args.job') job;
  @alias('args.handleError') handleError = () => {};

  // Client statuses rendered by the panel; commented-out statuses are
  // intentionally hidden from the deploying view.
  allocTypes = [
    'running',
    'pending',
    'failed',
    // 'unknown',
    // 'lost',
    // 'queued',
    // 'complete',
    'unplaced',
  ].map((type) => {
    return {
      label: type,
    };
  });

  // Note: despite the name, this holds allocation records (not IDs);
  // oldVersionAllocBlocks matches against them by identity.
  @tracked oldVersionAllocBlockIDs = [];

  // Called via did-insert; sets a static array of "outgoing"
  // allocations we can track throughout a deployment
  establishOldAllocBlockIDs() {
    this.oldVersionAllocBlockIDs = this.job.allocations.filter(
      (a) =>
        a.clientStatus === 'running' &&
        a.jobVersion !== this.deployment.get('versionNumber')
    );
  }

  @task(function* () {
    try {
      yield this.job.latestDeployment.content.promote();
    } catch (err) {
      this.handleError({
        title: 'Could Not Promote Deployment',
        description: messageFromAdapterError(err, 'promote deployments'),
      });
    }
  })
  promote;

  @task(function* () {
    try {
      yield this.job.latestDeployment.content.fail();
    } catch (err) {
      this.handleError({
        title: 'Could Not Fail Deployment',
        description: messageFromAdapterError(err, 'fail deployments'),
      });
    }
  })
  fail;

  @alias('job.latestDeployment') deployment;
  @alias('deployment.desiredTotal') desiredTotal;

  // Groups the outgoing allocations captured at insert time by their
  // current client status; previous-version blocks are always treated
  // as healthy non-canaries.
  get oldVersionAllocBlocks() {
    return this.job.allocations
      .filter((allocation) => this.oldVersionAllocBlockIDs.includes(allocation))
      .reduce((allocGroups, currentAlloc) => {
        const status = currentAlloc.clientStatus;

        if (!allocGroups[status]) {
          allocGroups[status] = {
            healthy: { nonCanary: [] },
            unhealthy: { nonCanary: [] },
          };
        }
        allocGroups[status].healthy.nonCanary.push(currentAlloc);

        return allocGroups;
      }, {});
  }

  // Buckets the current deployment version's allocations by status,
  // health, and canary state, capped at desiredTotal; any leftover
  // slots become placeholder "unplaced" blocks.
  get newVersionAllocBlocks() {
    let availableSlotsToFill = this.desiredTotal;
    let allocationsOfDeploymentVersion = this.job.allocations.filter(
      (a) => a.jobVersion === this.deployment.get('versionNumber')
    );

    let allocationCategories = this.allocTypes.reduce((categories, type) => {
      categories[type.label] = {
        healthy: { canary: [], nonCanary: [] },
        unhealthy: { canary: [], nonCanary: [] },
      };
      return categories;
    }, {});

    for (let alloc of allocationsOfDeploymentVersion) {
      if (availableSlotsToFill <= 0) {
        break;
      }
      let status = alloc.clientStatus;
      let health = alloc.isHealthy ? 'healthy' : 'unhealthy';
      let canary = alloc.isCanary ? 'canary' : 'nonCanary';

      if (allocationCategories[status]) {
        allocationCategories[status][health][canary].push(alloc);
        availableSlotsToFill--;
      }
    }

    // Fill unplaced slots if availableSlotsToFill > 0
    if (availableSlotsToFill > 0) {
      allocationCategories['unplaced'] = {
        healthy: { canary: [], nonCanary: [] },
        unhealthy: { canary: [], nonCanary: [] },
      };
      allocationCategories['unplaced']['healthy']['nonCanary'] = Array(
        availableSlotsToFill
      )
        .fill()
        .map(() => {
          return { clientStatus: 'unplaced' };
        });
    }

    return allocationCategories;
  }

  get newRunningHealthyAllocBlocks() {
    return [
      ...this.newVersionAllocBlocks['running']['healthy']['canary'],
      ...this.newVersionAllocBlocks['running']['healthy']['nonCanary'],
    ];
  }

  // #region legend

  get newAllocsByStatus() {
    return Object.entries(this.newVersionAllocBlocks).reduce(
      (counts, [status, healthStatusObj]) => {
        counts[status] = Object.values(healthStatusObj)
          .flatMap((canaryStatusObj) => Object.values(canaryStatusObj))
          .flatMap((canaryStatusArray) => canaryStatusArray).length;
        return counts;
      },
      {}
    );
  }

  get newAllocsByCanary() {
    return Object.values(this.newVersionAllocBlocks)
      .flatMap((healthStatusObj) => Object.values(healthStatusObj))
      .flatMap((canaryStatusObj) => Object.entries(canaryStatusObj))
      .reduce((counts, [canaryStatus, items]) => {
        counts[canaryStatus] = (counts[canaryStatus] || 0) + items.length;
        return counts;
      }, {});
  }

  get newAllocsByHealth() {
    return {
      healthy: this.newRunningHealthyAllocBlocks.length,
      'health unknown':
        this.totalAllocs - this.newRunningHealthyAllocBlocks.length,
    };
  }

  // #endregion legend

  get oldRunningHealthyAllocBlocks() {
    return this.oldVersionAllocBlocks.running?.healthy?.nonCanary || [];
  }

  get oldCompleteHealthyAllocBlocks() {
    return this.oldVersionAllocBlocks.complete?.healthy?.nonCanary || [];
  }

  // TODO: eventually we will want this from a new property on a job.
  // TODO: consolidate w/ the one in steady.js
  get totalAllocs() {
    // v----- Experimental method: Count all allocs. Good for testing but not a realistic representation of "Desired"
    // return this.allocTypes.reduce((sum, type) => sum + this.args.job[type.property], 0);

    // v----- Realistic method: Tally a job's task groups' "count" property
    return this.args.job.taskGroups.reduce((sum, tg) => sum + tg.count, 0);
  }
}
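For context, here is a minimal standalone sketch of the bucketing that `newVersionAllocBlocks` performs, using hypothetical plain-object allocations in place of Ember Data models (the function name and fixture data below are illustrative, not part of the component):

```javascript
// Standalone sketch of the status/health/canary bucketing in
// newVersionAllocBlocks, using plain objects instead of Ember models.
const allocTypes = ['running', 'pending', 'failed', 'unplaced'];

function groupNewVersionAllocs(allocations, desiredTotal, versionNumber) {
  const emptyBucket = () => ({
    healthy: { canary: [], nonCanary: [] },
    unhealthy: { canary: [], nonCanary: [] },
  });

  // One bucket per showable client status.
  const categories = Object.fromEntries(
    allocTypes.map((t) => [t, emptyBucket()])
  );

  let slots = desiredTotal;
  for (const alloc of allocations.filter(
    (a) => a.jobVersion === versionNumber
  )) {
    if (slots <= 0) break;
    const bucket = categories[alloc.clientStatus];
    if (bucket) {
      bucket[alloc.isHealthy ? 'healthy' : 'unhealthy'][
        alloc.isCanary ? 'canary' : 'nonCanary'
      ].push(alloc);
      slots--;
    }
  }

  // Pad any remaining desired slots with placeholder "unplaced" allocs.
  if (slots > 0) {
    categories.unplaced.healthy.nonCanary = Array.from(
      { length: slots },
      () => ({ clientStatus: 'unplaced' })
    );
  }
  return categories;
}

const groups = groupNewVersionAllocs(
  [
    { jobVersion: 2, clientStatus: 'running', isHealthy: true, isCanary: true },
    { jobVersion: 2, clientStatus: 'pending', isHealthy: false, isCanary: false },
    { jobVersion: 1, clientStatus: 'running', isHealthy: true, isCanary: false },
  ],
  4,
  2
);
// With a desired total of 4: one running healthy canary, one pending
// unhealthy non-canary, and two unplaced placeholders.
```

This mirrors the component's approach: allocations outside the deploying version are ignored, at most `desiredTotal` allocations are counted, and the remainder is rendered as shimmering "unplaced" blocks.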