When accessing a region running a version of Nomad without node pools, an error was thrown because the request is handled by the nodes endpoint, which fails because it assumes `pools` is a node ID.
When a request is made to an RPC service that doesn't exist (for example, a cross-region request from a newer version of Nomad to an older version that doesn't implement the endpoint), the application should return an empty list as well.
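A minimal sketch of the intended fallback behavior, assuming a plain `fetch` wrapper rather than the app's actual adapter layer; the `fetchNodePools` name and the `/v1/node/pools` path are illustrative assumptions:

```typescript
// Hedged sketch (not Nomad's actual UI code): fetch node pools while
// tolerating servers that don't support the endpoint.
async function fetchNodePools(regionBaseUrl: string): Promise<unknown[]> {
  try {
    const response = await fetch(`${regionBaseUrl}/v1/node/pools`);
    if (!response.ok) {
      // An older server routes this request to the nodes endpoint, treats
      // "pools" as a node ID, and responds with an error; show no pools.
      return [];
    }
    return (await response.json()) as unknown[];
  } catch {
    // A missing RPC service (e.g. a cross-region request from a newer Nomad
    // to an older one) should also surface as an empty list, not an error.
    return [];
  }
}
```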
* Boot the user off the job if it gets deleted
* de-yoink
* watching the job watcher
* Unload record so history.back has to refire a (failing) request
* Acceptance tests for boot-out and notification
* Text and code wrapping as a localStorage var
* task-log uses wrapping and kb shortcut
* Word wrap keyboard labels
* Wrapper as a toggle not a button
* Changelog, and fixed an extra trailing space on log lines
* Moves toggle to inside
* Acceptance tests for ww and toggle click
* CSS alignment and spacing for job status panel
* Only fade the count, not the legend icon, when count is 0
* Unrounded version corners
* changelog
* css has to only remove border radius when count is present
* Seed stabilization for services test
* Try consolidating the testfixes from before
* Total test isolation and bonus logs
* Drop the isolation but keep the logs
* Remove bonus logging
* Versions added to deploying status panel
* Wrap the running and healthy title in a span
* Versions in the deployment UI next to titles
* Version count and label styles updated
* Degraded vs Healthy etc. status
* Standardize the look of a deploying status panel
* badge styles
* remove job.status from title component in favour of in-panel status
* Remove a redundant check
* re-attrd fail-deployment button considered
* Tooltip on individual allocs in the panel
* Isolate allocation cells to their own component
* Tipsy trigger
* Aria label for failed-or-lost tooltips
* Buildfix
* Try adding percy exec back to exam run
* Failed or lost cell condensed
* Latest Deployment cell
* Stylistic changes and deploying state fixup
* Rewritten tooltip message and updated lost/failed tests
* failed-or-lost cell updates to job status panel acceptance tests
* Treated same-route as sub-route and didn't cancel watchers
* Adds panel to child jobs and sub-sorts
* removed the safety check in module-for-job tests
* [ui] Adds status panel to Sysbatch jobs (#17243)
* In working out periodic/param child jobs, realized the intersection with sysbatch is high enough that it ought to be worked on now
* Further removal of jobclientstatussummary
* Explicitly making mocked jobs in no-deployment mode
* remove last remnants of job-client-status-summary component
* Screwed up my sorting order a few commits ago; this corrects it
* noActiveDeployment gonna be the death of me
* Batch jobs, aside from child jobs, get the new status panel
* Clean up the imported jobAllocStatuses
* Note for mirage that batch jobs now have a historical status panel
* Batch job test for complete status
* Parameterized and periodic child jobs get the panel treatment
* Undo parameterized and periodic child test situations
* System jobs get a panel and lost status reinstated
* Leveraging nodes and not worrying about rescheds for system jobs
* Consistency w restarted as well
* Text shadow removed and early return where possible
* System jobs added to the Historical Click list
* System alloc and client summary panels removed
* Bones of some new system jobs tests
* [ui, deployments] handle node read permissions for system job panel (#17073)
* Do the next-best thing when we can't read nodes for system jobs
* Whitespace control handlebars expr
* Simplifies system jobs to not attempt to show a desired count, since it is a particularly complex number depending on constraints, number of nodes, etc.
* [ui, deployments] Fix order in which allocations are ascribed to the status chart (#17063)
* Discovery of alloc.isOld
* Correct sorting and better types
* A more honest walk-back that prioritizes running and pending allocs first
* Test scenario for descending-order allocs to show
* isOld mandates that we set a job version for our created job. Could also do this in the factory but maybe side-effecty
* Type simplification
* Fixed up a test that needed system job summary to be updated
* Tests for modifications to the job summary
* Explicitly mark the service jobs in test as not-deploying
* Status panel shows failed and lost, but probably don't have the condition quite right
* Rescheduled and Replaced cells instead of a general failed/lost one
* Tests moving to acceptance
* Fixed desiredTotal and added acceptance test for restarted
* moved integration test into acceptance test generally
* Now that we represent Lost in the graph, have to mark our unplaced testcase as Unknown
* No need to declare new vars for immediately returned getters
* Literal restart and resched add to the tallies, rather than 'would have but ran out of attempts' like before
* Testfixes now that we've redefined what restarts and reschedules are indicated by