To be clear, this is different to what we do (and why we do it) in TigerBeetle.
For example, we never externalize commits without full sync, to preserve durability.
Further, the motivation for TigerBeetle having both a prepare WAL and a header WAL is also different: not performance (we get performance elsewhere, through batching) but correctness (cf. “Protocol-Aware Recovery for Consensus-Based Storage”).
leentee · 18m ago
First, I think the article makes a false claim: the solution doesn't guarantee durability. Second, I believe good synchronous code is better than bad asynchronous code, and it's way easier to write good synchronous code than good asynchronous code, especially with io_uring. Modern NVMe drives are fast enough, even with synchronous IO, for most applications. Before thinking about asynchronous IO, make sure your application uses synchronous IO well.
jmpman · 3h ago
“Write intent record (async)
Perform operation in memory
Write completion record (async)
Return success to client
During recovery, I only apply operations that have both intent and completion records. This ensures consistency while allowing much higher throughput.
“
Does this mean that a client could receive a success for a request, which if the system crashed immediately afterwards, when replayed, wouldn’t necessarily have that request recorded?
How does that not violate ACID?
zozbot234 · 13m ago
> Does this mean that a client could receive a success for a request, which if the system crashed immediately afterwards, when replayed, wouldn’t necessarily have that request recorded?
Yup. OP says "the intent record could just be sitting in a kernel buffer", but then the exact same issue applies to the completion record. So confirmation to the client cannot be issued until the completion record has been written to durable storage. Not really seeing the point of this blogpost.
JasonSage · 2h ago
As best I can tell, the author understands that the async write-ahead fails to be a guarantee where the sync one does… then turns their async write into two async writes… but there’s still no guarantee comparable to the synchronous version.
So I fail to see how the two async writes are any guarantee at all. It sounds like they just happen to provide better consistency than the one async write because it forces an arbitrary amount of time to pass.
m11a · 1h ago
Yeah, I feel like I’m missing the point of this. The original purpose of the WAL was for recovery, so WAL entries are supposed to be flushed to disk.
Seems like OP’s async approach removes that, so there’s no durability guarantee, so why even maintain a WAL to begin with?
nephalegm · 54m ago
Reading through the article it’s explained in the recovery process. He reads the intent log entries and the completion entries and only applies them if they both exist.
So while there is no guarantee that operations are committed (they are never durably acknowledged to the application, since the IO is asynchronous), the recovery replay will be consistent.
I could see it being problematic for any data where the order of operations is important, but that's the trade-off for performance. This does seem to be an improvement, in that asynchronous IO will always result in a consistent recovery.
avinassh · 1h ago
I don't get this scheme at all. The protocol violates durability, because once the client receives success from the server, the data should be durable. However, the completion record is async; it is possible that it never completes before the server crashes.
During recovery, since the server applies only the operations which have both records, you will not recover a record that was acknowledged as successful to the client.
tlb · 3h ago
The recovery process is to "only apply operations that have both intent and completion records." But then I don't see the point of logging the intent record separately. If no completion is logged, the intent is ignored. So you could log the two together.
Presumably the intent record is large (containing the key-value data) while the completion record is tiny (containing just the index of the intent record). Is the point that the completion record write is guaranteed to be atomic because it fits in a disk sector, while the intent record doesn't?
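A minimal sketch of that recovery rule, with a record format invented purely for illustration: an op id ties each tiny completion record back to its (larger) intent record, and only intents with a matching completion are replayed.

```python
def recover(wal_lines):
    """Replay a WAL, applying only operations whose intent AND
    completion records both made it to disk before the crash."""
    intents, completed = {}, set()
    for line in wal_lines:
        tag, _, rest = line.partition(" ")
        if tag == "intent":
            op_id, _, payload = rest.partition(" ")
            intents[op_id] = payload
        elif tag == "done":
            completed.add(rest)
    store = {}
    # Apply only ops present in both sets; bare intents are dropped.
    for op_id in intents.keys() & completed:
        key, _, value = intents[op_id].partition("=")
        store[key] = value
    return store

# Op 2's completion never hit disk, so it is discarded on recovery,
# even though the client may already have been told "success".
wal = ["intent 1 a=1", "done 1", "intent 2 b=2"]
print(recover(wal))  # {'a': '1'}
```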
ta8645 · 3h ago
It's really not clear in the article. But I _think_ the gains are to be had because you can do the in-memory updating during the time that the WAL is being written to disk (rather than waiting for it to flush before proceeding). So I'm guessing the protocol as presented, is actually missing a key step:
Write intent record (async)
Perform operation in memory
Write completion record (async)
* * Wait for intent and completion to be flushed to disk * *
Return success to client
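Assuming that reading is right, the corrected protocol might look like this synchronous sketch (os.write/os.fsync standing in for the async submissions; one fsync at the end covers both records at once):

```python
import os
import tempfile

def handle_set(wal_fd, store, key, value):
    # 1. Write intent record (kernel-buffered, not yet durable).
    os.write(wal_fd, f"intent {key}={value}\n".encode())
    # 2. Perform the operation in memory.
    store[key] = value
    # 3. Write completion record (also not yet durable).
    os.write(wal_fd, f"done {key}\n".encode())
    # 4. The missing step: wait until BOTH records are on stable storage.
    os.fsync(wal_fd)
    # 5. Only now is it safe to acknowledge the client.
    return "ok"

fd, path = tempfile.mkstemp()
store = {}
print(handle_set(fd, store, "answer", "42"))  # -> ok
```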
gsliepen · 2h ago
But this makes me wonder how it works when there are concurrent requests. What if a second thread requests data that is being written to memory by the first thread? Shouldn't it also wait for both the write intent record and the completion record to have been flushed to disk? Otherwise you could end up with a query that returns data that won't exist anymore after a crash.
Manuel_D · 2h ago
It's not the write ahead log that prevents that scenario, it's transaction isolation. And note that the more permissive isolation levels offered by Postgres, for example, do allow that failure mode to occur.
avinassh · 1h ago
* * Wait for intent and completion to be flushed to disk * *
if you wait for both to complete, then how can it be faster than doing a single IO?
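One possible answer (my assumption, not something the article states): the win isn't fewer IOs but overlap. The in-memory apply proceeds while the intent write is in flight, and you block only once before acknowledging the client. A rough sketch, with a thread pool standing in for io_uring's async submission:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def durable_append(fd, data):
    # Stand-in for an async WAL write + fsync submitted via io_uring.
    os.write(fd, data)
    os.fsync(fd)

fd, path = tempfile.mkstemp()
store = {}
with ThreadPoolExecutor(max_workers=2) as pool:
    # Submit the intent record; do not block on it yet.
    intent = pool.submit(durable_append, fd, b"intent set k=v\n")
    # Apply the operation in memory while that write is in flight.
    store["k"] = "v"
    # Submit the completion record.
    done = pool.submit(durable_append, fd, b"done set k\n")
    # Block only once, at the end, before acknowledging the client:
    # the latency of durability overlaps with useful work.
    intent.result()
    done.result()
print("ack")
```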
ozgrakkurt · 2h ago
Great to see someone going into this. I wanted to do a simple LSM tree using io_uring in Zig for some time but couldn't get into it yet.
I always use this approach for crash-resistance:
- Append to the data (WAL) file normally.
- Have a separate small file that is like a hash + length for the WAL state.
- First append to WAL file.
- Start fsync call on the WAL file, create a new hash/length file with different name and fsync it in parallel.
- Rename the length file onto the real one, to make sure the update is fully atomic.
- Update in-memory state to reflect the files and return from the write function call.
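Those steps could be sketched roughly like this (done sequentially here for clarity, where the comment above fsyncs the WAL and the state file in parallel; a production version would also fsync the directory after the rename):

```python
import os
import tempfile
import zlib

def commit(wal_path, state_path, record):
    # 1. Append the record to the WAL file and make it durable.
    with open(wal_path, "ab") as wal:
        wal.write(record)
        wal.flush()
        os.fsync(wal.fileno())
        length = wal.tell()
    # 2. Write hash + length to a fresh temp file and fsync it.
    with open(wal_path, "rb") as f:
        crc = zlib.crc32(f.read(length))
    tmp = state_path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(f"{crc} {length}".encode())
        f.flush()
        os.fsync(f.fileno())
    # 3. Atomically rename over the old state file. On recovery,
    #    anything past `length`, or a mismatched hash, is discarded.
    os.rename(tmp, state_path)

d = tempfile.mkdtemp()
commit(os.path.join(d, "wal"), os.path.join(d, "wal.state"), b"set k=v\n")
print(open(os.path.join(d, "wal.state")).read())
```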
Curious if anyone knows tradeoffs between this and doing double WAL. Maybe doing fsync on everything is too slow to maintain fast writes?
I learned about the append/rename approach from these articles in case anyone is interested:
- https://discuss.hypermode.com/t/making-badger-crash-resilien...
- https://research.cs.wisc.edu/adsl/Publications/alice-osdi14....
There are also CoW B-trees: not entirely similar, but in the same spirit.
toolslive · 17m ago
there's a whole class of persistent persistent (the repetition is intentional here) data structures. Some of them even combine performance with elegance.
tobias3 · 1h ago
I don't get this. How can two(+) WAL operations be faster than one (double the sync IOPS)?
I think this database doesn't have durability at all.
nromiun · 2h ago
Slightly off topic but anyone knows when/if Google is going to enable io_uring for Android?
LAC-Tech · 50m ago
Great article, but I have a question:
The problem with naive async I/O in a database context at least, is that you lose the durability guarantee that makes databases useful. When a client receives a success response, their expectation is the data will survive a system crash. But with async I/O, by the time you send that response, the data might still be sitting in kernel buffers, not yet written to stable storage.
Shouldn't you just tie the successful response to a successful fsync?
Async or sync, I'm not sure what's different here.
jtregunna · 5h ago
Post talks about how to use io_uring, in the context of building a "database" (a demonstration key-value cache with a write-ahead log), to maintain durability.