When SIGTERM Does Nothing: A Postgres Mystery

81 saisrirampur 24 7/15/2025, 4:43:08 AM clickhouse.com ↗

Comments (24)

gsliepen · 8h ago

  pg_usleep(1000L);
Virtually any time you put a fixed sleep in your program, it's going to be wrong. It is either too long or too short, and even if it is perfect, there is no guarantee your program will actually sleep for exactly the requested amount of time. A condition variable or something similar would be the right thing to use here.

Of course, if this code path really is only taken very infrequently, you can get away with it. But assumptions are not always right, it's better to make the code robust in all cases.
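
For illustration, here is a minimal sketch of the alternative being suggested, using plain pthreads rather than Postgres's own latch/ConditionVariable machinery (all names below are invented for the example): the waiter blocks until it is signalled instead of polling with a fixed sleep.

    /* Minimal pthread sketch of "wait on a condition" instead of a fixed
     * sleep. Names are illustrative; Postgres would use its own latch or
     * ConditionVariable infrastructure instead. */
    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static bool work_done = false;       /* the condition being waited on */

    /* Waiter: blocks until signalled, instead of sleeping in a loop. */
    void wait_for_work_done(void)
    {
        pthread_mutex_lock(&lock);
        while (!work_done)               /* loop guards against spurious wakeups */
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
    }

    /* Producer: sets the flag and wakes any waiter immediately, so there is
     * no "too long or too short" sleep interval to tune. */
    void mark_work_done(void)
    {
        pthread_mutex_lock(&lock);
        work_done = true;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }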

anarazel · 4h ago
The usleep here isn't one that was intended to be taken frequently (and it's taken in a loop, so a too-short sleep is OK). As it was introduced, it was intended to address a very short race that would have been very expensive to avoid altogether. Most of the waiting is intended to be done by waiting for a "transaction lock".

Unfortunately somebody decided that transaction locks don't need to be maintained when running as a hot standby. Which turns that code into a loop around the usleep...

(I do agree that sleep loops suck and almost always are a bad idea)
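
A rough reconstruction of the shape being described, with hypothetical helper names standing in for the real Postgres internals: the lock wait is supposed to do almost all of the waiting and the usleep only bridges a tiny race, but on a hot standby the lock wait returns immediately.

    #include <stdbool.h>

    typedef unsigned int TransactionId;

    /* Hypothetical stand-ins for the real Postgres functions. */
    extern bool transaction_still_running(TransactionId xid);
    extern void wait_on_transaction_lock(TransactionId xid);
    extern void pg_usleep(long microsec);

    void wait_for_transaction_end(TransactionId xid)
    {
        while (transaction_still_running(xid))
        {
            /* On a primary this blocks until the transaction releases its
             * lock, so the usleep below only covers a brief race window. */
            wait_on_transaction_lock(xid);

            /* On a hot standby no transaction lock is taken, so the call
             * above returns immediately and the loop degrades into a 1 ms
             * sleep poll. */
            pg_usleep(1000L);
        }
    }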

aspbee555 · 1h ago
I was working through some ffmpeg timing stuff and the AI kept insisting on putting in sleep functions to try to "solve" it.
jeffbee · 1h ago
Generative coding models are trained on GitHub. GitHub is completely full of inadvisable code. The model thinks this is normal.
nightfly · 31m ago
Sort of by definition it is normal
OptionOfT · 2h ago
If I were reviewing that code I would've asked if they tried yielding. Same effect, but doesn't delay when there is no one to yield to.
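
For comparison, a small sketch of what yielding would look like here (work_is_done() is a hypothetical stand-in for whatever the loop is waiting on): sched_yield() returns immediately when nothing else is runnable, so there is no fixed delay, at the cost of busy-spinning.

    #include <sched.h>
    #include <stdbool.h>

    extern bool work_is_done(void);   /* hypothetical condition being awaited */

    /* Yield the CPU instead of sleeping a fixed 1 ms: no artificial delay,
     * but the loop busy-spins whenever nothing else wants to run. */
    void wait_by_yielding(void)
    {
        while (!work_is_done())
            sched_yield();
    }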
delusional · 7h ago
> there is no guarantee your program will actually sleep for exactly the requested amount of time

This is technically true, but in the same way that under a preemptive operating system there's no guarantee that you'll ever be scheduled. The sleep is a minimum time you will sleep for, beyond that you already have no guarantee about when the next instruction executes.

dwattttt · 6h ago
Spurious wakeups enter the chat.
quietbritishjim · 5h ago
Any reasonable sleep API will handle spurious wakeups under the hood and not return control.
Joker_vD · 5h ago
It will also ignore those pesky signals and restart the sleep :)

On a more serious note, any reasonable sleep API should either report all wake ups, no matter the reason, or none except for the timeout expiration itself. Any additional "reasonable" filtering is a loaded footgun because different API users have different ideas of what is "reasonable" for them.
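
For concreteness, a minimal sketch of the "swallow signals and restart the sleep" behavior being described; whether hiding the wakeup from the caller is reasonable is exactly the judgment call being warned about.

    #include <errno.h>
    #include <time.h>

    /* Sleep for the full requested duration: if a signal interrupts
     * nanosleep (EINTR), resume sleeping for whatever time remains, so the
     * caller never sees the wakeup at all. */
    static void sleep_full(struct timespec req)
    {
        struct timespec rem;

        while (nanosleep(&req, &rem) == -1 && errno == EINTR)
            req = rem;            /* swallow the interruption, keep sleeping */
    }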


gpavanb · 1h ago
The fundamental problem is that a synchronous, lock-based approach is being used in an asynchronous, event-driven context (logical replication).

The Postgres team needs to go back to the drawing board, avoid polling altogether, and, more importantly, adopt event-listener-based approaches for the primary and replicas separately.

It's great to see ClickHouse contributing to Postgres though. Cross-pollination between two large database communities can have a multiplying effect.

anarazel · 42m ago
> The fundamental problem is that a synchronous, lock-based approach is being used in an asynchronous, event-driven context (logical replication).

This isn't hit in ongoing replication, this is when creating the resources for a new replica on the publisher. And even there the poll-based path is only hit because a function is being used in a context that wasn't possible at the time it was written (waiting for transactions to complete in a hot standby).

> The Postgres team needs to go back to the drawing board and avoid polling altogether and more importantly have event-listener based approaches for primary and replicas separately

That's, gasp, actually how it already works.

OkPin · 4h ago
Fascinating root cause: a missing CHECK_FOR_INTERRUPTS() left pg_create_logical_replication_slot basically unkillable on hot standbys. Simple fix, but huge impact.

Makes me wonder how many other Postgres processes might ignore SIGTERM under edge conditions. Do folks here test signal handling during failovers or replica maintenance? Seems like something worth adding to chaos tests.
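
For anyone unfamiliar with the convention, a sketch of the shape such a fix takes (condition_is_satisfied() is a hypothetical placeholder, and this is not the actual patch): CHECK_FOR_INTERRUPTS() is the point at which a pending SIGTERM, recorded earlier by the signal handler, is finally acted on.

    #include "postgres.h"
    #include "miscadmin.h"          /* CHECK_FOR_INTERRUPTS(), pg_usleep() */

    extern bool condition_is_satisfied(void);   /* hypothetical wait condition */

    /* Any loop that can spin for a while must poll for pending interrupts;
     * otherwise the flag set by the SIGTERM handler is never noticed. */
    static void
    wait_until_condition(void)
    {
        for (;;)
        {
            CHECK_FOR_INTERRUPTS();     /* honors a pending SIGTERM or cancel */

            if (condition_is_satisfied())
                break;

            pg_usleep(1000L);           /* 1 ms between polls */
        }
    }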

bhaak · 3h ago
> While the Postgres community has an older and sometimes daunting contribution process of submitting patches to a mailing list, [...]

The Linux kernel works the same way.

What are other big open source projects using if they are not using mailing lists?

I could imagine some using GitHub, but IME it's way less efficient than a mailing list. (If I were a regular contributor to a big project using GitHub, me being me, I would probably look for or write myself a GitHub-to-e-mail gateway.)

Bigpet · 4m ago
You can already subscribe to projects or single issues/PRs on GitHub and reply via email to post comments.

I can understand not wanting to use GitHub/GitLab/etc. for various reasons. But I don't understand how usability vs mailing lists is one.

How is a set of 9+ mailing lists any better? It has significantly worse discovery and search tools unless you download all the archives, so you're already putting a hurdle in front of people there.

Then you have people use all kinds of custom formatting in their email clients, so consistent readability is out the window.

People will keep top-posting (TOFU), mangling the inconsistent quoting styles in the process. That creates an unnecessarily complicated problem for your email client to "detect quotes", or you have to keep reminding people.

Enforcing structure of any kind on mailing lists seems so tedious. I'm not advocating for bugzilla-style "fill out these 20 nonsensical fields before you can report anything", but some minimal structure enforced by tooling, as opposed to manual moderation, seems very helpful to me.

Joker_vD · 8h ago
The tl;dr is that Postgres, like any long-running "server" process (especially a DBMS server!), does not run with SIG_DFL as the handler for SIGTERM; it instead installs a signal handler that merely records the fact that the signal has happened, in the hope that whatever loops are going on will eventually pick it up. As usual, some loops don't, but it's very hard to notice.
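
A generic sketch of that pattern (not Postgres's actual handler, which also sets latches and other flags): the handler does nothing but record the request, and every long-running loop has to remember to check the flag.

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t shutdown_requested = 0;

    /* Async-signal-safe handler: only record that SIGTERM arrived. */
    static void handle_sigterm(int signo)
    {
        (void) signo;
        shutdown_requested = 1;
    }

    int main(void)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = handle_sigterm;
        sigaction(SIGTERM, &sa, NULL);

        while (!shutdown_requested)     /* the check every loop must make */
        {
            sleep(1);                   /* ... do a unit of work ... */
        }

        /* graceful cleanup goes here */
        return 0;
    }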
namibj · 4h ago
Feels like they should come with watchdog timers that alert when a loop fails to check this status often enough? Then the only thing that still needs manual checking to cover these more exotic states is that the watchdog-and-signal check-in code path actually leads to an early exit from the loop it's placed in (and all surrounding loops).

Sure, it's more like fuzzing then, but running it in production on systems willing to send in bug reports for logged watchdog alerts would give nearly free monitoring of very diverse and realistic workloads. That would hopefully cover at least the code paths that are relevant in practice, with many samples, before a signal ever happens to hit those rarer but still relevant paths...
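
One possible shape for that idea, sketched with plain pthreads and nothing Postgres-specific (all names are invented): the worker loop "pets" a shared timestamp at the same place it would check its shutdown flag, and a monitor thread alerts when the check-in goes stale.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static _Atomic time_t last_checkin;

    /* Call this wherever the loop also checks its shutdown/interrupt flag. */
    static void watchdog_checkin(void)
    {
        atomic_store(&last_checkin, time(NULL));
    }

    /* Monitor thread: alert when the loop misses its check-in window. */
    static void *watchdog_monitor(void *arg)
    {
        const int limit_secs = *(const int *) arg;

        for (;;)
        {
            sleep(1);
            if (time(NULL) - atomic_load(&last_checkin) > limit_secs)
                fprintf(stderr, "watchdog: loop missed its %d s check-in window\n",
                        limit_secs);
        }
        return NULL;
    }

    int main(void)
    {
        static int limit_secs = 5;
        pthread_t  tid;

        watchdog_checkin();
        pthread_create(&tid, NULL, watchdog_monitor, &limit_secs);

        for (;;)                        /* the worker loop being supervised */
        {
            watchdog_checkin();
            sleep(1);                   /* ... do work ... */
        }
    }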

bbarnett · 5h ago
Indeed. I've seen DBMSes take close to 10 minutes to gracefully exit, even when idle.

Timeout in sysvinit and service files, for a graceful exit, is typically 900 seconds.

Most modern DBMS daemons will recover from a SIGKILL most of the time, especially if you're using transactions. But startup will then lag, as the server churns through and resolves everything on the way back up.

(unless you've modified the code to shortcut things; hello booking.com "we're smarter than you", in 2015 at least, if not today)

sjsdaiuasgdia · 3h ago
Yeah, as I've said on many incident calls...we can pay for transaction recovery at shutdown or we can pay for it at startup, but it's got to happen somewhere.

The "SIGKILL or keep waiting?" decision comes down to whether you believe the DBMS is actually making progress towards shutdown. Proving that the shutdown is progressing can be difficult, depending on the situation.

eunos · 5h ago
With quirks like these, I'm honestly not confident that PostgreSQL, or any software in general, can be used in mission-critical systems.
vladms · 5h ago
Have you ever read the contraindication list of a medicine? It is the same kind of trade-off. Someone can choose to use something in a "mission critical system" (whatever that is) because the alternatives are worse, not because the chosen solution is perfect (that does not mean I would advise using PostgreSQL - just accepting that it can happen).

On the other hand, "quirks like these" are the reason you should not update mission-critical systems too often.

cedws · 4h ago
When you look hard enough it’s a miracle anything works at all.
contravariant · 5h ago
Might be a bit late now to start telling people that maybe using software for everything wasn't a good idea.
eunos · 45m ago
But I want my Software-Defined-Aviation now