SQLite's Durability Settings Are a Mess
89 points by ciconia on 8/29/2025, 5:30:29 PM | 30 comments | agwa.name ↗
From the documentation, it seems like synchronous being FULL does provide durability of the database in DELETE mode, as FULL means it calls fsync after the transaction is completed. I think you may be confusing durability of the journal file with durability of the database. I don't think WAL can ever really have a durable transaction; it is essentially based on leaving a transaction open until it gets "check-pointed" or actually committed to the database file.
In general: WAL means you write the transaction to the WAL, fsync (in SQLite, whether this happens depends upon the sync mode), and then tell the application the commit is done. The transaction is then durable: even if the database crashes, the contents of the WAL will be applied to the database file.
Checkpointing later just lets you throw away that part of the WAL and not have to replay as much to the database file.
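The durable-WAL configuration described above can be sketched with Python's stdlib sqlite3 module (a minimal illustration; the file path and table name are made up for the example):

```python
import os
import sqlite3
import tempfile

# Hypothetical sketch: a WAL-mode database where COMMIT does not return
# until the WAL has been fsynced (synchronous=FULL).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit; explicit BEGIN/COMMIT

conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA synchronous=FULL")   # fsync the WAL on every commit

journal_mode = conn.execute("PRAGMA journal_mode").fetchone()[0]  # "wal"
synchronous = conn.execute("PRAGMA synchronous").fetchone()[0]    # 2 == FULL

conn.execute("CREATE TABLE t(x)")
conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("COMMIT")  # transaction is in the fsynced WAL; checkpointing can happen later
conn.close()
```

With synchronous=NORMAL instead, the WAL is only synced at checkpoints, which is the default-durability gap the thread is discussing.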
If the directory containing the rollback journal is not fsynced after the journal file is deleted, then the journal file might rematerialize after a power failure, causing sqlite to roll back a committed transaction. And fsyncing the directory doesn't seem to happen unless you set synchronous to EXTRA, per the docs cited in the blog post.
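The directory fsync being described is the standard POSIX pattern for making a file deletion itself durable. A sketch (the journal filename here just mirrors the example in this thread):

```python
import os
import tempfile

# Hypothetical sketch of the POSIX pattern the comment describes:
# after unlinking a file (standing in for "testdb-journal"), fsync the
# parent directory so the deletion itself survives a power failure.
dirpath = tempfile.mkdtemp()
journal = os.path.join(dirpath, "testdb-journal")

with open(journal, "wb") as f:
    f.write(b"journal contents")
    f.flush()
    os.fsync(f.fileno())

os.unlink(journal)

# Without this step, removal of the directory entry may exist only in
# the OS page cache, and the file can "rematerialize" after a crash.
dfd = os.open(dirpath, os.O_RDONLY)
try:
    os.fsync(dfd)   # persist the unlink; this is what synchronous=EXTRA adds
finally:
    os.close(dfd)

journal_gone = not os.path.exists(journal)
```

The point of contention is that, per the docs, SQLite only performs this directory fsync after deleting the journal when synchronous=EXTRA.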
I don't follow. How would fsyncing the rollback journal affect the durability of the actual database? Do you actually think that the database would reapply an already committed journal whose ID in the header already indicates that the transaction was committed, when the database is already consistent? I really think you should re-review the definition of durability of a database, especially before saying the creator of SQLite is incorrect about its implementation.
It's possible I've misunderstood how DELETE mode works. But here's the thing - I shouldn't have to understand how DELETE mode works to know what SQLite setting I need to use to get durability. Unfortunately, the SQLite docs don't clearly say what guarantees each setting provides - instead they talk about what SQLite does when you choose the setting, leaving the reader to try to figure out if those actions provide durability. And the docs really make it seem like you need synchronous=EXTRA in DELETE mode to get durability, for the reasons explained above.
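For concreteness, the setting combination being argued for here looks like this (a sketch; path and table are illustrative):

```python
import os
import sqlite3
import tempfile

# Hypothetical sketch: the settings the docs suggest are required for
# durable commits in rollback-journal (DELETE) mode.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)

conn.execute("PRAGMA journal_mode=DELETE")
conn.execute("PRAGMA synchronous=EXTRA")  # also fsync the directory after unlinking the journal

synchronous = conn.execute("PRAGMA synchronous").fetchone()[0]  # 3 == EXTRA
conn.execute("CREATE TABLE t(x)")
conn.close()
```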
This is a docs problem; I'm not saying SQLite is buggy.
FWIW I don't think it's wrong per se. The article links to a HN comment in which Richard Hipp explains why this is the default behavior, and it does make sense: https://news.ycombinator.com/item?id=45014296. At the same time, clearly, the definition of "durable" here could use some clarification.
Note that the comment by Richard Hipp is justifying why WAL mode is not durable by default. It's a completely reasonable explanation, and would be for DELETE mode too, yet his comment claims that DELETE mode is durable by default, which I can't reconcile with the docs.
Guys. The journal would not be a hot journal though, as hot-journal detection only applies if the database is in an inconsistent state. Otherwise, the database knows from the ID of the journal not to reapply an already applied rollback journal. The process you are talking about ONLY happens when the database has been left in a corrupted state, and SQLite has to find a journal file to help recover the database.
In terminal 1, I created a database and added a table to it:
In terminal 2, I attached gdb to sqlite3 and set a breakpoint on unlink: Back in terminal 1, I inserted data into the table: In terminal 3, I saved a copy of testdb-journal: Then in terminal 2, I resumed executing sqlite3: In terminal 1, the INSERT completed without error. Back in terminal 3, I sent SIGKILL to sqlite3, simulating a power failure:
I then restored testdb-journal, simulating what could happen after a power failure when the parent directory is not fsynced: I then opened testdb again and ran `SELECT * FROM test` and it returned zero rows. This proves int_19h and I are right - if the journal file comes back, SQLite will apply it and roll back a committed transaction.
I then confirmed with strace that, as the documentation says, the directory is only fsynced after unlink when synchronous=EXTRA. It doesn't happen with synchronous=FULL. So you need synchronous=EXTRA to get durability in DELETE mode.
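The gdb experiment above can be approximated without gdb: shrink the page cache so SQLite spills (and syncs) the rollback journal to disk mid-transaction, snapshot the journal before COMMIT unlinks it, then restore it afterwards to simulate the journal "coming back" after a power failure. A sketch under those assumptions (paths, row counts, and cache size are all illustrative):

```python
import os
import shutil
import sqlite3
import tempfile

# Approximate reproduction of the experiment above, without gdb.
dirpath = tempfile.mkdtemp()
db = os.path.join(dirpath, "testdb")
journal = db + "-journal"
saved = db + "-journal.saved"

conn = sqlite3.connect(db, isolation_level=None)
conn.execute("PRAGMA journal_mode=DELETE")
conn.execute("PRAGMA cache_size=2")       # tiny cache -> journal spills and syncs mid-transaction
conn.execute("CREATE TABLE test(x)")

conn.execute("BEGIN")
conn.executemany("INSERT INTO test VALUES (?)",
                 [("some filler text %d" % i,) for i in range(20000)])
assert os.path.exists(journal)            # journal is on disk mid-transaction
shutil.copy(journal, saved)               # snapshot it, like the gdb breakpoint on unlink
conn.execute("COMMIT")                    # commit succeeds; SQLite unlinks the journal

committed = conn.execute("SELECT count(*) FROM test").fetchone()[0]
conn.close()

shutil.copy(saved, journal)               # the journal "rematerializes" after the crash

# On the next open, SQLite sees what looks like a hot journal and
# rolls the committed transaction back.
conn2 = sqlite3.connect(db)
after_restore = conn2.execute("SELECT count(*) FROM test").fetchone()[0]
conn2.close()
```

This mirrors the thread's conclusion: `committed` reflects the inserted rows, while `after_restore` is back to the pre-transaction state.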
But it's not "already applied", that's the whole point. The transaction was committed, not rolled back, so the changes in transaction were persisted to disk and the journal was just thrown away. If it magically reappears again, how is SQLite supposed to know that it needs to be discarded again rather than applied to revert the change?
I believe they would all be satisfied.
I don't see any mention of checking IDs. Not saying you're wrong - I think the docs could very well be wrong - but could you provide a citation for that behavior?
If you're not going to provide citations for your claims, yet criticize me for "picking out sections" when I provide citations, then continuing this conversation won't be productive.
v3.0 was first released in 2004—over 20 years ago—and the industry has changed dramatically since then.
I can’t help but wish for a “v4.0” release: one that deliberately breaks backward compatibility and outdated defaults, in order to offer a cleaner, more modern foundation.
Note: I'm not asking for new functionality per se - just a version of SQLite whose defaults reflect how it should be used and deployed in 2025.
The page you link even mentions scenarios they know about that do happen and that they still assume won't happen. So even sqlite doesn't make anywhere near as strong a claim as you make.
> SQLite assumes that the operating system will buffer writes and that a write request will return before data has actually been stored in the mass storage device. SQLite further assumes that write operations will be reordered by the operating system. For this reason, SQLite does a "flush" or "fsync" operation at key points. SQLite assumes that the flush or fsync will not return until all pending write operations for the file that is being flushed have completed. We are told that the flush and fsync primitives are broken on some versions of Windows and Linux. This is unfortunate. It opens SQLite up to the possibility of database corruption following a power loss in the middle of a commit. However, there is nothing that SQLite can do to test for or remedy the situation. SQLite assumes that the operating system that it is running on works as advertised. If that is not quite the case, well then hopefully you will not lose power too often.
Were this a one off, you would have a point. It isn't, however. My experience over many years has been that you can't ever be certain about what is actually going on, based on the documentation alone, and that you wind up in Reddit and Stack Overflow and a plethora of blog posts attempting to figure it out. With LLMs, we have only more sources of contradictory and chronically obsolescent input.
There is an actual problem here. However I can see that, based on the contributions from the SQLite downmod mafia, this talk isn't welcome, so I'm off to some other thing. Have a nice weekend, I suppose.
Oh, come on. There's no open or secret attempt at censoring talk about sqlite on HN. (The story is #11 on the frontpage the minute the comment was made.)