Graceful Shutdown in Go: Practical Patterns
76 points by mkl95 | 5/4/2025, 9:09:16 PM | victoriametrics.com | 18 comments
> The exact wait time depends on your readiness probe configuration
A terminating pod is not ready by definition. The service will also mark the endpoint as terminating (and as not ready). This occurs on the transition into Terminating; you don't have to fail a readiness check to cause it.
(I don't know about the ordering of the SIGTERM & the various updates to the objects such as Pod.status or the endpoint slice; there might be a small window after SIGTERM where you could still get a connection, but it isn't the large "until we fail a readiness check" TFA implies.)
(And as someone who manages clusters, honestly that infinitesimal window probably doesn't matter. Just stop accepting new connections, gracefully close existing ones, and terminate reasonably fast. But I feel like half of the apps I work with fall into one of two buckets: "handle SIGTERM but take forever to terminate" or "fail to handle SIGTERM, and take forever to terminate".)
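For the common case, a minimal sketch of that "stop accepting new connections, drain, exit" shape using only the Go standard library (the port, timeout, and signal set are illustrative, not taken from the article):

    package main

    import (
        "context"
        "errors"
        "log"
        "net/http"
        "os/signal"
        "syscall"
        "time"
    )

    func main() {
        srv := &http.Server{Addr: ":8080"}

        // signal.NotifyContext cancels ctx on the first SIGTERM/SIGINT; a second
        // signal falls through to the default handler and kills the process.
        ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
        defer stop()

        go func() {
            if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
                log.Fatalf("listen: %v", err)
            }
        }()

        <-ctx.Done() // SIGTERM from the kubelet (or Ctrl-C locally)

        // Give in-flight requests a bounded window to finish.
        shutdownCtx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
        defer cancel()
        if err := srv.Shutdown(shutdownCtx); err != nil {
            log.Printf("graceful shutdown failed: %v", err)
        }
    }

Keeping that timeout comfortably under the pod's terminationGracePeriodSeconds means the kubelet never has to follow up with SIGKILL.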
it's also possible, if you're not careful, to lose the last few seconds of logs from when your service is shutting down. for example, if you write to a log file that is watched by a sidecar process such as Promtail or Vector, and on startup the service truncates and starts writing to that same path, you've got a race condition that can cause you to lose logs from the shutdown.
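one way to avoid that race (a sketch, not from the article; the path is made up): append to the shared file instead of truncating it on startup, and flush/sync it as the very last step of shutdown:

    package main

    import (
        "bufio"
        "log"
        "os"
    )

    func main() {
        // Append to the shared log file rather than truncating it, so a restart
        // doesn't clobber lines the sidecar hasn't shipped yet.
        f, err := os.OpenFile("/var/log/app/app.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
        if err != nil {
            log.Fatal(err)
        }
        buf := bufio.NewWriter(f)
        log.SetOutput(buf)

        // ... run the service, handle SIGTERM, drain connections ...

        // Last step of shutdown: make sure the final log lines actually reach
        // the file the sidecar is tailing before the process exits.
        buf.Flush()
        f.Sync()
        f.Close()
    }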
It (will) cover things like "sync the logs", "wait for ingresses to catch up with the liveness handler", etc.
https://github.com/utrack/caisson-go/blob/main/caiapp/caiapp...
https://github.com/utrack/caisson-go/tree/main/closer
The docs are sparse and some things aren't covered yet; however I'm planning to do the first release once I'm back from a holiday.
In the end, this will be a meta-platform (carefully crafted building blocks), and a reference platform library, covering a typical k8s/otel/grpc+http infrastructure.
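The closer part boils down to registering shutdown hooks and running them in reverse order once a termination signal arrives. A minimal sketch of that pattern (hypothetical types and names, not caisson-go's actual API):

    package main

    import (
        "context"
        "log"
        "os/signal"
        "sync"
        "syscall"
    )

    // Closer collects shutdown hooks and runs them in reverse registration
    // order, so dependencies are closed after the things that use them.
    type Closer struct {
        mu  sync.Mutex
        fns []func(context.Context) error
    }

    func (c *Closer) Add(fn func(context.Context) error) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.fns = append(c.fns, fn)
    }

    func (c *Closer) Close(ctx context.Context) {
        c.mu.Lock()
        defer c.mu.Unlock()
        for i := len(c.fns) - 1; i >= 0; i-- {
            if err := c.fns[i](ctx); err != nil {
                log.Printf("shutdown hook %d: %v", i, err)
            }
        }
    }

    func main() {
        var closer Closer
        // closer.Add(func(ctx context.Context) error { return srv.Shutdown(ctx) })
        // closer.Add(func(ctx context.Context) error { return db.Close() })

        ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
        defer stop()
        <-ctx.Done()

        closer.Close(context.Background())
    }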
Then after you go through all that effort, most of the data is utterly ignored, and rarely are the business insights much better than the trailer-park version: SSHing into a box and grepping a log file to find the error output.
Like we put so much effort into this ecosystem but I don't think it has paid us back with any significant increase in uptime, performance, or ergonomics.
This dilutes the good and helpful stuff with marketing bullshit.
Grafana seemingly inventing new time series databases and engines every few months is absolutely painful to try to keep up to date with in order to make informed decisions.
So much so I've started using rrdtool/smokeping again.
but many systems are more complicated than that. the observability ecosystem exists for a reason, there is a real problem that it's solving.
for example, your app might outgrow running on a single box. now you need to SSH into N different hosts and grep the log file from all of them. or you invent your own version of log-shipping with a shell script that does SCP in a loop.
going a step further, you might put those boxes into an auto-scaling group so that they would scale up and down automatically based on demand. now you really want some form of automatic log-shipping, or every time a host in the ASG gets terminated, you're throwing away the logs of whatever traffic it served during its lifetime.
or, maybe you notice a performance regression and narrow it down to one particular API endpoint being slow. often it's helpful to be able to graph the response duration of that endpoint over time. has it been slowing down gradually, or did the response time increase suddenly? if it was a sudden increase, what else happened around the same time? maybe a code deployment, maybe a database configuration change, etc.
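for the per-endpoint timing part, this is roughly what it looks like with the Prometheus Go client (a sketch; the metric name, endpoint, and port are made up):

    package main

    import (
        "net/http"
        "time"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promauto"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // A histogram labelled by endpoint; anything that speaks PromQL can graph
    // p50/p99 over time and line it up with deploys or config changes.
    var reqDuration = promauto.NewHistogramVec(prometheus.HistogramOpts{
        Name:    "http_request_duration_seconds",
        Help:    "Response time per endpoint.",
        Buckets: prometheus.DefBuckets,
    }, []string{"endpoint"})

    func timed(endpoint string, h http.HandlerFunc) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            start := time.Now()
            h(w, r)
            reqDuration.WithLabelValues(endpoint).Observe(time.Since(start).Seconds())
        }
    }

    func main() {
        http.HandleFunc("/orders", timed("/orders", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok"))
        }))
        http.Handle("/metrics", promhttp.Handler()) // scraped by Prometheus
        http.ListenAndServe(":8080", nil)
    }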
perhaps the service you operate isn't standalone, but instead interacts with services written by other teams at your company. when something goes wrong with the system as a whole, how do you go about root-causing the problem? how do you trace the lifecycle of a request or operation through all those different systems?
when something goes wrong, you SSH to the box and look at the log file...but how do you know something went wrong to begin with? do you rely solely on user complaints hitting your support@ email? or do you have monitoring rules that will proactively notify you if a "huh, that should never happen" thing is happening?
The only way to make sure a system can handle components hard-crashing is if hard crashes are a normal thing that happens all the time.
All glory to the chaos monkey!
Whether my application went down from a SIGINT or from a hard kill makes a big difference.
Blue-green migrations, for example, require graceful exit behavior.
it may not always be necessary. e.g. if you are deploying a new version of a stateless backend service behind a load balancer that forwards traffic to both current-version and new-version backends, the load balancer could be responsible for cutting over: allowing in-flight requests to finish on the current-version backends while only forwarding new requests to the new ones. then the old backends could be ungracefully terminated once the LB says they are no longer processing any requests.
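if the LB can't tell on its own when a backend has drained, the backend can expose that itself. a rough sketch (handler names and port are made up): count in-flight requests, and fail the health check once draining starts so the LB stops routing new traffic and the deploy tooling knows when termination is safe:

    package main

    import (
        "fmt"
        "net/http"
        "sync/atomic"
    )

    var (
        draining atomic.Bool  // set to true when the cutover begins (e.g. on SIGTERM)
        inFlight atomic.Int64 // requests currently being processed
    )

    // track counts how many requests this backend is still processing.
    func track(h http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            inFlight.Add(1)
            defer inFlight.Add(-1)
            h.ServeHTTP(w, r)
        })
    }

    func main() {
        mux := http.NewServeMux()
        mux.Handle("/work", track(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok"))
        })))

        // The LB health check: fail it once draining starts so no new traffic
        // is routed here, and report the in-flight count so the deploy tooling
        // can tell when it is safe to terminate this backend.
        mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            if draining.Load() {
                w.WriteHeader(http.StatusServiceUnavailable)
            }
            fmt.Fprintf(w, "in_flight=%d\n", inFlight.Load())
        })

        http.ListenAndServe(":8080", mux)
    }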