Releases: jippi/hashi-ui
Basic Task Group count management
hashi-ui can now manage task group counts for all jobs directly in the UI.
You can scale up, scale down, or stop a task group (by setting count = 0). This is pretty basic stuff, but it is also the first step into actually modifying your infrastructure, rather than just observing it. Exciting and scary stuff for me.
If you want hashi-ui to remain a read-only view into Nomad, you can use the NOMAD_READ_ONLY=1 environment variable or the --nomad.read-only CLI flag; the default mode is for hashi-ui to treat Nomad as writable.
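To make the toggle concrete, here is a minimal, hypothetical Go sketch of how a read-only flag can gate write operations; the flag and environment variable names match the ones above, but everything else (including `scaleTaskGroup`) is illustrative only and not the actual hashi-ui implementation:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// scaleTaskGroup is a placeholder for a write operation against Nomad,
// e.g. changing a task group count. It refuses to do anything in read-only mode.
func scaleTaskGroup(readOnly bool, job, group string, count int) error {
	if readOnly {
		return fmt.Errorf("read-only mode: refusing to scale %s/%s to %d", job, group, count)
	}
	// ... call the Nomad API here to update the task group count ...
	return nil
}

func main() {
	// --nomad.read-only CLI flag, with NOMAD_READ_ONLY=1 as an environment fallback.
	readOnly := flag.Bool("nomad.read-only", false, "treat Nomad as a read-only view")
	flag.Parse()

	if os.Getenv("NOMAD_READ_ONLY") == "1" {
		*readOnly = true
	}

	if err := scaleTaskGroup(*readOnly, "example-job", "example-group", 3); err != nil {
		fmt.Println("error:", err)
	}
}
```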
I would love feedback from everyone on what other modifications hashi-ui should be able to make, beyond a full JSON edit (probably a task for tomorrow). I'm more interested in subtle UI features in general that could help with day-to-day management of your Nomad cluster.
A quick screenshot of how it currently looks.
I also updated the Nomad SDK to 0.5.2-rc1, so older Nomad clusters might only be partly functional if any breaking changes were introduced in the SDK.
Faster allocation & evaluation tables
The primary change is #159, which greatly increases the loading speed of large allocation and evaluation lists.
The speedup comes from using https://github.com/schrodinger/fixed-data-table-2, which only renders the rows in the viewable area, instead of rendering thousands of rows all at once.
The upside is very fast rendering performance; the downside is that normal in-browser search does not work for rows outside the viewable area.
I've added a new Allocation ID filter, which wildcard-matches on the allocation ID.
Optional NewRelic instrumentation has also been added, in case you want to monitor the internals of hashi-ui or to assist in debugging errors.
Better error handling (and a bugfix)
Error handling
If a websocket connection dies, a loud error will be shown to the user, like this:
If any uncaught exception happens (e.g. the app can't connect to the server, a JS error, or otherwise), you will see an error page like this:
This is not 100% perfect, but it's way better than the previous state, where nothing was shown to the user and no attempt to recover was made.
Bug fix
Previously, broadcast updates to jobs, allocations, evaluations, and other lists in the system were only sent to one websocket; now they are correctly sent to all connections subscribed to the same resource :)
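For the curious, the sketch below shows the general shape of such a publish/subscribe hub in Go: one outbound channel per websocket connection, grouped by the resource it subscribed to, with updates fanned out to every member of the group. The types and names are illustrative, not the actual hashi-ui code.

```go
package broadcast

import "sync"

// hub tracks which connections are subscribed to which resource
// ("jobs", "allocations", "evaluations", ...). Each connection is represented
// by an outbound channel that a per-connection writer drains onto its websocket.
type hub struct {
	mu          sync.RWMutex
	subscribers map[string]map[chan []byte]struct{}
}

func newHub() *hub {
	return &hub{subscribers: make(map[string]map[chan []byte]struct{})}
}

func (h *hub) subscribe(topic string, out chan []byte) {
	h.mu.Lock()
	defer h.mu.Unlock()
	if h.subscribers[topic] == nil {
		h.subscribers[topic] = make(map[chan []byte]struct{})
	}
	h.subscribers[topic][out] = struct{}{}
}

func (h *hub) unsubscribe(topic string, out chan []byte) {
	h.mu.Lock()
	defer h.mu.Unlock()
	delete(h.subscribers[topic], out)
}

// broadcast delivers an update to every connection subscribed to the topic,
// not just the first one found, which is the behaviour this release fixes.
func (h *hub) broadcast(topic string, payload []byte) {
	h.mu.RLock()
	defer h.mu.RUnlock()
	for out := range h.subscribers[topic] {
		select {
		case out <- payload:
		default: // drop the update rather than block on a slow or dead connection
		}
	}
}
```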
0.5.4 - client resource stats is here!
v0.5.3 is released
The changes in this release are mostly behind the scenes:
- Network I/O has been greatly reduced by only shipping changes to the websocket, rather than the full data every ~10s (the internal wait timeout for the REST APIs). The members API, which doesn't support a Wait Index, has been wrapped in a sha1 check so it only ships updates on the websocket when the data actually changes (see the sketch after this list).
- Fixed cleanup of dead websocket connections in Go: their subscriptions are now cleared immediately on disconnect, instead of the next time we try to transmit data on the socket.
- Allocations should render faster now, as we chop off the task states (~60% of the data size) on the Go side by using the new WATCH_ALLOCS_SHALLOW subscription model (also shown in the sketch below).
- Made a dedicated Go channel for all lists, to ease understanding of the code.
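As a rough illustration of two of those changes, here is a minimal Go sketch: a sha1-based change detector for endpoints that have no Wait Index (such as the members API), and a helper that drops task states before allocations are shipped to the browser. The names are mine rather than hashi-ui's, and the `TaskStates` field is assumed from the Nomad Go SDK's `api.Allocation` type.

```go
package watch

import (
	"crypto/sha1"
	"encoding/hex"
	"encoding/json"

	"github.com/hashicorp/nomad/api"
)

// changeDetector remembers the checksum of the last payload it saw, so that
// polling an endpoint without a Wait Index only results in a websocket push
// when the data actually changed.
type changeDetector struct {
	lastSum string
}

// changed reports whether v differs from the previously observed payload,
// recording the new checksum when it does.
func (d *changeDetector) changed(v interface{}) (bool, error) {
	raw, err := json.Marshal(v)
	if err != nil {
		return false, err
	}
	sum := sha1.Sum(raw)
	hexSum := hex.EncodeToString(sum[:])
	if hexSum == d.lastSum {
		return false, nil
	}
	d.lastSum = hexSum
	return true, nil
}

// stripTaskStates drops the per-task state maps (the bulk of the payload)
// before allocations are serialized and sent over the websocket.
func stripTaskStates(allocs []*api.Allocation) {
	for _, alloc := range allocs {
		alloc.TaskStates = nil
	}
}
```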
nomad-ui is now hashi-ui
Hi!
You might have noticed a few things have changed around here lately! Nomad UI is now Hashi UI, and it is now maintained by me (and lives under my account), since @iverberk no longer had the resources available to work on it going forward.
So the project got a new home: previously iverberk/nomad-ui, now jippi/hashi-ui.
The same goes for DockerHub, by the way; the old tags no longer exist, so please use 0.5.2 (this release) or latest.
So why the name change? My long-term wish is to tie Nomad, Consul, and Vault closer together in one glorious UI that makes it super easy to get an overview of your infrastructure across the different HashiCorp products.
It's a huge task, and I can't do it alone, but I hope for help from everyone else in the HashiCorp community!
I've spent a week cleaning up the code: putting things into a more consistent structure, removing dead code, naming things more consistently, improving the Makefile workflow, and trying to document the development workflow.
My near term focus is to further improve the performance of the allocation list, and figure out a way to surface resource utilization for the cluster as a whole, as well as for each allocation.
I would like to give a huge thank you to @iverberk, who kickstarted the project a few months ago, which enabled me, and so many others, to even consider Nomad a viable option as a scheduler.
Not having a (nice) UI to assist me in day-to-day overview and maintenance of my scheduler cluster was, and is, a complete deal breaker. So thank you @iverberk ! I hope time permits an awesome contribution now and then, if nothing else to help fix some of my half-buggy Go code :)
0.5.1 - small bugfixes edition
Based on community feedback, 4 bugs (#155) have been fixed.
0.5.0 - the full-rewrite-and-change-all-the-things release
So, everything is new. The refactor PR (#153) ended up with +10,311 −17,665 changes.
New interface, tons of performance improvements and bug fixes, probably a few new bugs too.
Please see the new screenshots list for all the new exciting interface changes.
This has been a busy full week of refactoring, but it will hopefully allow me to add new features more rapidly now that the codebase is in a robust state.
Known issues: #155
v0.4.0
- Add support for Nomad 0.5
- Force monospace font for log tail
- Rework file tailing
- Add more details to Client view
- Reduce precision (milliseconds) from "duration" output
This will actually still work with Nomad 0.4; the only known breakage is the "servers" endpoint, which changed its signature in 0.5 and thus broke previous versions of nomad-ui :)