https://sqlite.org/compile.html#default_synchronous
>SQLITE_DEFAULT_SYNCHRONOUS=<0-3> This macro determines the default value of the PRAGMA synchronous setting. If not overridden at compile-time, the default setting is 2 (FULL).
>SQLITE_DEFAULT_WAL_SYNCHRONOUS=<0-3> This macro determines the default value of the PRAGMA synchronous setting for database files that open in WAL mode. If not overridden at compile-time, this value is the same as SQLITE_DEFAULT_SYNCHRONOUS.
Many wrappers for sqlite take this advice and change the default, but the default is FULL.
> EXTRA provides additional durability if the commit is followed closely by a power loss.
means?
How can one have "additional" durability, if FULL already "ensures that an operating system crash or power failure will not corrupt the database"?
Is it that FULL only protects against "corruption" as stated, but will still lose committed transactions?
It seems so from the points on https://stackoverflow.com/questions/58113560/during-power-lo...
Which is also quite nasty. I want my databases to be fully durable by default, and not lose anything once they have acknowledged a transaction. The typical example for ACID DBs is bank transactions; imagine a bank accidentally undoing a transaction upon server crash, after already having acknowledged it to a third party over the network.
> EXTRA synchronous is like FULL with the addition that the directory containing a rollback journal is synced after that journal is unlinked to commit a transaction in DELETE mode
So it only has an effect in DELETE mode; WAL mode doesn't use a rollback journal.
That said, the documentation about this is pretty confusing.
nh2 · 8h ago
Yes, I'm talking about the fact that sqlite in its default (journal_mode = DELETE) is not durable.
Which in my opinion is worse than whatever may apply to WAL mode, because WAL is something a user needs to explicitly enable.
If it is true as stated, then I also don't find it very confusing, but would definitely appreciate if it were more explicit, replacing "will not corrupt the database" by "will not corrupt the database (but may still lose committed transactions on power loss)", and I certainly find that a very bad default.
ncruces · 5h ago
For DELETE (rollback) mode, and given the way fsync works, FULL should not lose committed transactions (plural) but it might lose at most one committed transaction.
Because DELETE depends on deleting a file (not modifying file contents), it depends largely on the specific file system's (not SQLite's) journaling behavior.
nh2 · 2h ago
I don't see how the journal delete depends any more on a specific file system's behaviour than the main data write: If a specific file system can decide to automatically fsync unlink(), it can equally decide to automatically fsync write().
In either case it is clear that (on Linux), if you want guarantees, you need to fsync (the file and the dir respectively).
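On Linux, "fsync the file and the dir" looks roughly like the following minimal sketch (error handling omitted; the helper name is my own):

```python
import os

def durable_write(path, data):
    """Write a file and fsync both the file and its containing directory (Linux)."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)            # persist the file contents
    finally:
        os.close(fd)
    dirfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY | os.O_DIRECTORY)
    try:
        os.fsync(dirfd)         # persist the directory entry itself
    finally:
        os.close(dirfd)
```

The same directory-fsync step is what makes an unlink() durable, which is exactly what EXTRA adds over FULL for the rollback journal.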
nh2 · 7h ago
> That said, the documentation about this is pretty confusing.
I now filed a suggestion to clarify the docs on this:
https://sqlite.org/forum/forumpost/ec171a77a3
>EXTRA synchronous is like FULL with the addition that the directory containing a rollback journal is synced after that journal is unlinked to commit a transaction in DELETE mode. EXTRA provides additional durability if the commit is followed closely by a power loss.
It depends on your filesystem whether this is necessary. In any case I'm pretty sure it's not relevant for WAL mode.
mrkeen · 8h ago
> The typical example for ACID DBs are bank transactions; imagine a bank accidentally undoing a transaction upon server crash
That's why they don't try to do it that way! But it's still an informative way to think about it.
Also, while we're discussing defaults, your ACID db is probably running at READ COMMITTED by default, meaning that your bank transactions are vanishing/creating money:
* You read accounts A and B($30) in order to move $5 between them. The new balance for B should be $35. Just before you write the $35, someone else's transaction sets B to $100. Your transaction will proceed and blindly set it to $35 anyway.
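For illustration, the read-modify-write race can be reproduced at the application level even with SQLite (a Python sketch using two autocommit connections; note SQLite itself serializes writers, so the stale read here happens outside any write transaction, whereas server DBs at READ COMMITTED permit the analogous interleaving inside transactions):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "bank.db")
a = sqlite3.connect(path, isolation_level=None)  # autocommit
b = sqlite3.connect(path, isolation_level=None)
a.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
a.execute("INSERT INTO accounts VALUES ('B', 30)")

# Transaction 1 reads B's balance ($30)...
(bal,) = a.execute("SELECT balance FROM accounts WHERE name='B'").fetchone()

# ...meanwhile someone else sets B to $100...
b.execute("UPDATE accounts SET balance = 100 WHERE name='B'")

# ...and transaction 1 blindly writes its stale computation: 30 + 5.
a.execute("UPDATE accounts SET balance = ? WHERE name='B'", (bal + 5,))
final = a.execute("SELECT balance FROM accounts WHERE name='B'").fetchone()[0]
print(final)  # 35: the $100 update silently vanished
```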
But to your overall point, I'm also frustrated that these systems aren't as safe as they look on the box.
nh2 · 8h ago
> your ACID db is probably running at READ COMMITTED by default
You're probably referring to PostgreSQL. Yes, I am also frustrated that that doesn't default to SERIALIZABLE.
I do wish the top two open-source "ACID" DBs (Postgres and SQLite) used guaranteed-safe, zero-surprise defaults.
lanstin · 7h ago
It isn’t worth it. Mostly, financial transactions are done via append-only ledgers, not updates; two-phase auth and then capture; a settlement process to actually move money; and a reconciliation process to check all the accounts and totals. Even without DB corruption they have enough problems (fraud and buggy code) with money being created and having to go back and ask people for money or give them money back, so they have those systems in place anyway.
billywhizz · 6h ago
yes. most folks don't seem to understand this. but, you can get something approaching such guarantees if you are able to limit yourself to something as (seemingly) simple as updating a ledger. this approach is used in a lot of places where high performance and strong consistency is needed (see e.g. LMAX disruptor for similar).
https://tigerbeetle.com/
layer8 · 3h ago
SERIALIZABLE isn't zero-surprise, since applications must be prepared to retry transactions under that isolation level. There is no silver bullet here.
fkrlook · 1h ago
This makes me so sick. For years and years I’ve gotten the vibe from SQLite that it never took being a reliable database seriously, but I bought into the hype for the past several years that it was finally a great DB for using in production, and then this. I swear. Sure, change the default config for now and make it actually behave in a sane way so that it doesn’t lose your data. But later- use a real database.
avinassh · 9h ago
hey, I just tested and `NORMAL` is default:
$ sqlite3 test.db
SQLite version 3.43.2 2023-10-10 13:08:14
Enter ".help" for usage hints.
sqlite> PRAGMA journal_mode=wal;
wal
sqlite> PRAGMA synchronous;
1
sqlite>
edit: fresh installation from homebrew shows default as FULL:
/opt/homebrew/opt/sqlite/bin/sqlite3 test.db
SQLite version 3.50.4 2025-07-30 19:33:53
Enter ".help" for usage hints.
sqlite> PRAGMA journal_mode=wal;
wal
sqlite> PRAGMA synchronous;
2
sqlite>
I will update the post, thanks!
larschdk · 9h ago
Just checked debian/ubuntu/alpine/fedora/arch docker images. All are FULL by default.
eatonphil · 9h ago
Is this sqlite built from source or a distro sqlite? It's possible the defaults differ with build settings.
supriyo-biswas · 8h ago
The one which avinassh shows is macOS's SQLite under /usr/bin/sqlite3. In general it also has some other weird settings, like not having the concat() function, last I checked.
ncruces · 4h ago
The Apple built macOS SQLite is something.
Another oddity: mysteriously reserving 12 bytes per page for whatever reason, making databases created with it forever incompatible with the checksum VFS.
Yet another: having 3 different layers of fsync to avoid actually doing any F_FULLFSYNC ever, even when you ask it for a fullfsync (read up on F_BARRIERFSYNC).
mediumsmart · 7h ago
Same with MacPorts here - 2 (/opt/local/bin/sqlite3)
and /usr/bin/sqlite3 is 1
dangoodmanUT · 9h ago
SQLite defaults have many, many weird choices that make unsuspecting users choose dangerous configurations.
And it's not reasonable to expect them to read through all the docs to figure out that the defaults are not safe.
Defaults should be safe, tune for performance. Not the other way around.
jabwd · 9h ago
Sqlite defaults are in many ways perfectly fine; you get the foot guns when you need the performance. Read the article rather than commenting on HN, because WAL is not the default.
ehutch79 · 3h ago
But I constantly read advice to change to WAL on this very site. No further nuance, just that if I do it’ll out perform a MySQL cluster.
btown · 8h ago
There's some nuance here. The compiler flags SQLITE_DEFAULT_SYNCHRONOUS and SQLITE_DEFAULT_WAL_SYNCHRONOUS are set to FULL by default, which does fsync on each commit.
https://www.sqlite.org/compile.html
But there is a thing called NORMAL mode which, in WAL and non-WAL mode, does not fsync on each commit. In WAL mode, at least this doesn't cause corruption, but it can still lose data.
https://www.sqlite.org/pragma.html#pragma_synchronous is very explicit that the thing called NORMAL does have these risks. But it's still called NORMAL, and I'd think that's something of a foot-slingshot for newcomers, if not a full-fledged footgun.
nh2 · 8h ago
> Sqlite defaults in many ways perfectly fine
In the sibling comment we are discussing how the default of sqlite is not durable, so it's only ACI, not ACID.
https://news.ycombinator.com/item?id=45005866
So you do get the foot guns automatically.
> rather than commenting on HN
I appreciate the parent commenting on HN, because they seem to be ... right.
tobias3 · 7h ago
That's the default with the built-in sqlite on macOS, which means Apple engineers made this choice.
conradev · 8h ago
This is perhaps my favorite one. By default, SQLite doesn’t actually delete data:
The default setting for secure_delete is determined by the SQLITE_SECURE_DELETE compile-time option and is normally off. The off setting for secure_delete improves performance by reducing the number of CPU cycles and the amount of disk I/O.
That isn't unlike most filesystems (in fact, on CoW filesystems, even with this setting the data is likely to still be hanging around on disk).
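You can check and change this per connection; a quick sketch (the compiled-in default may vary by build, so the first value isn't guaranteed to be 0):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Compiled-in default is normally off (0): deleted rows are only unlinked,
# their bytes remain in free pages until overwritten.
print(conn.execute("PRAGMA secure_delete").fetchone()[0])

conn.execute("PRAGMA secure_delete=ON")  # zero out content on delete
print(conn.execute("PRAGMA secure_delete").fetchone()[0])  # 1
```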
sgarland · 8h ago
Yes, yes it does [0]. While I fully understand the need for backwards-compatibility given the sheer number of SQLite installations, I find their attitude towards flexible types [1] appalling. At least they’ve added STRICT.
Similarly, the fact that FKs aren’t actually enforced by default is a horrifying discovery if you’re new to it, but tbf a. MySQL until quite recently allowed CHECK constraints to be declared, which it then ignored, and b. the FK enforcement decision is for backwards-compatibility.
[0]: https://www.sqlite.org/quirks.html
[1]: https://www.sqlite.org/flextypegood.html
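The FK default is easy to demonstrate (Python sketch; note the pragma must be set per connection, and outside an open transaction, hence the autocommit connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE child (pid INTEGER REFERENCES parent(id))")

# Default: the FK is parsed but not enforced -- this orphan insert succeeds.
conn.execute("INSERT INTO child VALUES (42)")

conn.execute("PRAGMA foreign_keys=ON")  # per-connection, off by default
try:
    conn.execute("INSERT INTO child VALUES (43)")
    enforced = False
except sqlite3.IntegrityError:
    enforced = True
print(enforced)  # True: the second orphan is now rejected
```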
I swear I think people choose WAL mode because they read something about it online, where that something obviously isn't the documentation. This behavior shouldn't be catching any engineer by surprise.
conradev · 10h ago
On Apple platforms, you also want to use F_FULLFSYNC instead of fsync if you need durability:
If Apple compiled your SQLite library, not even the fullfsync PRAGMAs will do F_FULLFSYNC. Apple has “secretly” patched their SQLite to do F_BARRIERFSYNC instead.
https://www.sqlite.org/pragma.html#pragma_checkpoint_fullfsy...
https://bonsaidb.io/blog/acid-on-apple/
Yes, that’s why those systems can cheat on benchmarks by defaulting to half-arsed fsync() in the first place.
londons_explore · 9h ago
Half arsed fsync is all I want.
I am happy to lose 5 or 10 seconds of data in a power failure. However I'm not okay with a file becoming so corrupted that it is unmountable when the power recovers.
Half arsed fsync provides exactly that - and considering you get way more performance this seems like a good tradeoff.
Avamander · 9h ago
Fsync is also overused at times, like some random web app's local storage does not need to be forced onto the disk.
londons_explore · 9h ago
I think that fsync behaviour is a good part of the reason that apps seem to run faster/better on macOS than Windows/Linux.
I wish Linux and Windows had settings to change all fsyncs to barriers too.
Unfortunately I think Linux recently removed such an ability on the basis that the code complexity wasn't worth it.
Avamander · 8h ago
Would it not be possible to achieve this with something akin to how libeatmydata can be LD_PRELOAD-ed?
conradev · 8h ago
You don’t need fsync at all for that, just WAL. fsync is only half arsed on Apple platforms.
jmull · 7h ago
The takeaway should be:
Don't assume sqlite is doing an fsync on each commit.
While that's the "default default", you may be using a sqlite3 compiled by someone else or using non-default build options, or using a wrapper that sets its own defaults.
More generally, be careful about assuming any of the various levers sqlite3 has are set in a specific way. That is, take control of the ones that are important to your use case (well, and before that, review the levers so you know which ones those are).
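Concretely, taking control of the levers means pinning the pragmas you depend on at connection time and verifying them, rather than trusting whatever the build or wrapper chose. A sketch (the pragma list is illustrative, not exhaustive):

```python
import sqlite3

def open_durable(path):
    """Open a connection with explicitly pinned durability settings."""
    conn = sqlite3.connect(path, isolation_level=None)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA synchronous=FULL")  # fsync on every commit
    conn.execute("PRAGMA foreign_keys=ON")
    # Verify rather than assume: fail fast if something overrode us.
    assert conn.execute("PRAGMA synchronous").fetchone()[0] == 2  # 2 == FULL
    return conn
```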
chasil · 9h ago
There is enough going on with WAL mode that it really shouldn't be enabled unless all the limitations are understood.
In addition to the need for all clients to see shared memory, it disables ACID transactions on attached databases (2nd-to-last paragraph below):
https://sqlite.org/lang_attach.html
> In addition to the need for all clients to see shared memory, it disables acid transactions on attached databases
It only disables global cross-db transactions. It does not disable transactions in the attached db.
The fact that global transactions are a thing at all is the surprising bit to me. I think the WAL mode is the more expected behaviour.
boris · 7h ago
The main reason you would attach a database and then jump through hoops like qualifying tables is to have transactions cover all the attached databases. If you don't need that, then you can just open separate connections to each database without needing to jump through any hoops. So the fact that WAL does not provide that is a big drawback.
SoftTalker · 9h ago
This is true for any database, for any concurrency or durability settings. You must understand the implications of the defaults and your choices if you change them.
liuliu · 9h ago
Does it matter? For all we know, it keeps serializability. At this point (of computer hardware history), you would care more about serializability than about making sure data is written to disk before a power loss; the latter now depends on so many layers of drivers doing the correct thing (which is hard to test properly).
duped · 6h ago
Durability just guarantees that you don't return that a write transaction has completed successfully until after all the layers are done writing to disk. fsync is the high level abstraction that file systems implement to mean "this data has actually gone to disk" (although handling errors is a rabbit hole worth reading about). It absolutely has a performance cost which is why applications that can live without durability sometimes get away with it.
If your application can tolerate writes silently failing then you can live without it. But a lot of applications can't, so it does matter.
nolist_policy · 9h ago
Without the "making sure data written to disk after power loss" part you won't get serialization either in modern storage stacks.
wellpast · 9h ago
Wouldn’t it depend on use case?
If the app confirms to me my crypto transaction has been reliably queued, I probably don’t want to hear that it was unqueued because a node using SQLite in the cluster had died at an inconvenient specific time.
bawolff · 9h ago
If you had a power failure between when the transaction was queued and when the sqlite transaction was committed, no amount of fsync will save you.
If that is the threat you want to defend against, this is not the right setting. Maybe it would reduce the window a little bit, but power failures are basically a nonexistent threat anyway; does a solution that mildly reduces but doesn't eliminate the risk really matter when the risk is negligible?
astrobe_ · 7h ago
> but power failures are basically a non existent threat anyways
Not in the contexts sqlite3 is often used. Remember, this is an embedded database, not a fat MySQL server sitting in a comfy datacenter with redundant power backups, RAID 6 and AC regulated to the millidegree. More like embedded systems with unreliable or no power backup. Like Curl, you can find it in unexpected places.
bawolff · 6h ago
I think in that context, durability is even less expected.
kevincox · 7h ago
A better example is probably
1. I generate a keypair and commit it.
2. I send the public key to someone.
I *really* want to be sure that 1 is persisted. Because if they for example send me $1M worth of crypto it will really suck if I don't have the key anymore. There are definitely cases where it is critical to know that data has been persisted.
This is also assuming that what you are syncing to is more than one local disc; ideally you are running the fsync on multiple geographically distant discs. But there are also cryptography-related applications where you must never reuse state, otherwise very bad things happen. This can apply even for one local disc (like a laptop). In this case if you did something like 1. encrypt some data, 2. commit that this nonce, key, OTP, whatever has been used, 3. send that data somewhere, then you want to be sure that either that data was committed or the disc was permanently destroyed (or at least somehow wouldn't be used accidentally to encrypt more data).
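The "commit before you hand anything out" pattern from step 1 can be sketched like this (the key derivation is a placeholder, not real cryptography, and the schema is made up):

```python
import secrets
import sqlite3

def issue_key(conn):
    """Durably persist a keypair before releasing the public half."""
    priv = secrets.token_hex(32)  # stand-in for real key generation
    pub = "pub-" + priv[:8]       # hypothetical public-key derivation
    conn.execute("PRAGMA synchronous=FULL")
    with conn:  # COMMIT on exit; with synchronous=FULL this fsyncs
        conn.execute("INSERT INTO keys (pub, priv) VALUES (?, ?)", (pub, priv))
    return pub  # only after the commit is it safe to send this out
```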
duped · 6h ago
I believe in the comment they're referring to the "crypto transaction" not the SQLite transaction.
wellpast · 7h ago
Of course it will because same programmers don’t ack their customers until their (distributed, replicated) db says ack.
wellpast · 7h ago
sane*
throwawaymaths · 9h ago
if you are doing crypto you really ought to have a different way of checking that your tx has gone through that is the actual source of truth, like, for example, the blockchain.
wellpast · 7h ago
I knew I shouldn’t have said crypto, but it is why I said queued. I knew a pedant was going to nitpick. Probably subconsciously was inviting it. I think my point still stands.
sethev · 9h ago
One of the things I like about SQLite is how easy it is to understand its behavior if you read the docs. This is one of the potentially surprising defaults - but it seems reasonable for an embedded database, where a power loss is likely to cause the application to "forget" that something was committed at the same time that SQLite does.
However, it depends on the application - hence the need for clear docs and an understandable model.
arpinum · 8h ago
This is popping up because SurrealDB was found to turn fsync off by default. But there are important differences:
- SurrealDB provides poor documentation about this default
- SQLite is typically run client side, while SurrealDB is typically run as a remote server
- SQLite is actually full sync by default, but distros may package it with other defaults
- SurrealDB explicitly did this for benchmarking reasons (for comparison fairness), while SQLite distros turn off fsync typically for practical reasons, as it's run purely client side.
wiradikusuma · 9h ago
Does anyone, preferably from Cloudflare, know the settings used in D1, since it's a SQLite dialect?
koakuma-chan · 8h ago
Is D1 a dialect? I thought they actually run SQLite.
wiradikusuma · 8h ago
I hit some issues where they worked in localhost but not in production. Fortunately, they're not dealbreakers. Unfortunately, my mind is very good at forgetting bad experiences (especially if they have a workaround), so I don't remember the issues.
I do know the transaction is handled "differently".
weinzierl · 9h ago
I don't know why SQLite has this default (I suspect primarily performance), but I think even with modern SSDs there is an fsync frequency where it starts to hurt lifetime.
Also, you probably do not gain much with synchronous=FULL if the SSD does not really honor flushes correctly.
Some SSDs historically lied, acknowledging flushes before actually persisting them.
I don't know if this is still true.
hoppp · 9h ago
Calling fsync on each commit makes it slower, doesn't it?
It's an extra syscall. It depends on the use case whether it's needed or not.
senderista · 6h ago
You can enqueue fsync as an op in io_uring, using link flag to sequence it after previous ops and drain flag to ensure no new ops are started until it completes.
praptak · 8h ago
That's why big DBs do group commits, trading latency for throughput.
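The idea, sketched for a single writer (real servers merge commits from many concurrent clients transparently; batch size and schema here are made up):

```python
import sqlite3

def group_commit_insert(conn, rows, batch=100):
    """Amortize the per-commit fsync: each COMMIT (one fsync under
    synchronous=FULL) covers `batch` rows, so throughput rises while
    individual rows may wait longer to become durable."""
    cur = conn.cursor()
    for i, row in enumerate(rows, 1):
        cur.execute("INSERT INTO log (msg) VALUES (?)", (row,))
        if i % batch == 0:
            conn.commit()
    conn.commit()  # flush the final partial batch
```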
nodesocket · 6h ago
Related, I use Litestream and their documentation [1] actually suggested using synchronous=NORMAL. Any idea if this is a wise change? Should I revert back to the default of FULL using WAL + Litestream?
[1] https://litestream.io/tips/#synchronous-pragma
If you're using Litestream you still might lose transactions since the last checkpoint if the server goes down, so I don't think it makes sense to make sure every single transaction is persisted to disk before success is returned. If my understanding is correct it will just slow the app down without reducing your chances of data loss.
Skinney · 1h ago
Litestream is already backing up your changes, so synchronous=normal seems reasonable.