Using Postgres pg_test_fsync tool for testing low latency writes

21 points by mfiguiere on 5/28/2025, 2:15:15 AM | tanelpoder.com ↗

Comments (3)

singron · 1h ago
Note that this workload is a worst case for IOPS; you will get higher IOPS in nearly any optimized workload. E.g. Postgres needs to sync the WAL in order to commit (which does look like this test), but there are a ton of other writes that happen in parallel on the heap and index pages, in addition to any reading you do. IME the consumer drives that benchmark at 500K IOPS but get only 500 IOPS on this test might get 10K or 20K IOPS on a more typical mixed workload.
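
For reference, the test in question can be reproduced directly with pg_test_fsync, which ships with PostgreSQL and issues 8kB, WAL-style synchronous writes against a target file, reporting both ops/sec and microseconds per op. A minimal invocation, assuming a hypothetical mount point /mnt/nvme for the device under test:

    # point the test file at the device you want to measure
    pg_test_fsync --filename=/mnt/nvme/pgtest.out --secs-per-test=5

The usecs/op column in the output is the per-write latency discussed below; the ops/sec column is the figure that looks nothing like the drive's headline IOPS.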
tanelpoder · 1h ago
Throughput with enough I/O concurrency, yes. That's actually why I wrote this blog entry: to bring attention to the fact that nice IOPS numbers do not translate to nice individual I/O latency numbers. If an individual WAL write takes ~1.5 ms (instead of tens of microseconds), your app transactions also take 1.5+ ms, not sub-millisecond. Not everyone cares about this (and often doesn't even need to), but it's worth being aware of.

I tend to set up a small but completely separate block device (usually on enterprise SAN storage or cloud block store) just for WAL/redo logs, so that they get a different device with its own queue. That way, when a big database checkpoint or fsync happens against the datafiles, the thousands of concurrently submitted I/O requests won't get in the way of WAL writes that still need to complete fast. I've done something similar in the past with separate filesystem journal devices too (for niche use cases...)
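
A sketch of what such a relocation can look like, assuming a hypothetical dedicated device mounted at /mnt/wal (initdb --waldir places WAL there for a new cluster; for an existing cluster, the standard approach is a symlink created while the server is stopped):

    # new cluster: put WAL on the dedicated device from the start
    initdb -D "$PGDATA" --waldir=/mnt/wal/pg_wal

    # existing cluster: relocate pg_wal, then symlink it back
    pg_ctl stop -D "$PGDATA"
    mv "$PGDATA/pg_wal" /mnt/wal/pg_wal
    ln -s /mnt/wal/pg_wal "$PGDATA/pg_wal"
    pg_ctl start -D "$PGDATA"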

Edit: Another use case for this is that ZFS users can put the ZIL on low-latency devices while keeping the main storage on lower-cost devices.
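
On ZFS that means adding a separate intent log (SLOG) device; a sketch with hypothetical pool and device names:

    # single low-latency SLOG device for pool "tank"
    zpool add tank log /dev/nvme0n1

    # or mirrored, so a device failure can't lose in-flight sync writes
    zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

Note that the SLOG only absorbs synchronous writes; asynchronous writes still go straight to the main vdevs.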

natmaka · 27s ago
> I tend to set up a small, but completely separate block device (usually on enterprise SAN storage or cloud block store) just for WAL/redo logs

I'm not sure about this, as that separate device might handle more of the total (aggregated) work by being a member of a single pool (a RAID made of all available non-spare devices) used by the PostgreSQL server.

It seems to me that in most cases the most efficient setup, even when trying hard to reduce maximal latency (and therefore sacrificing some throughput), is a single pool AND adequate I/O scheduling that enforces a "max latency" parameter, as sketched below.

If, during peaks of activity, your WAL-dedicated device isn't permanently at 100% utilization while the data pool is, then dedicating it may (overall) raise the max latency and reduce throughput.
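
One concrete way to express such a max-latency policy on Linux is the kyber I/O scheduler, which throttles dispatch to meet per-class latency targets; a sketch with a hypothetical device name (the values shown are kyber's defaults, in nanoseconds):

    # assumes kyber is available for this device on your kernel
    echo kyber > /sys/block/nvme0n1/queue/scheduler
    echo 2000000  > /sys/block/nvme0n1/queue/iosched/read_lat_nsec   # 2 ms read target
    echo 10000000 > /sys/block/nvme0n1/queue/iosched/write_lat_nsec  # 10 ms sync-write target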

Tweaking some parameters (bgwriter, full_page_writes, wal_compression, wal_writer_delay, max_wal_senders, wal_level, wal_buffers, wal_init_zero...) with respect to the usage profile (max tolerated latency, OLTP vs. OLAP, the proportion of SELECTs to INSERTs/UPDATEs, I/O subsystem characteristics and performance, kernel parameters...) is key.
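
As a rough illustration only (not recommendations; sensible values depend entirely on the profile above), a postgresql.conf fragment touching those parameters might look like:

    wal_level = replica         # 'minimal' reduces WAL volume if replication isn't needed
    full_page_writes = on       # 'off' is safe only if storage guarantees atomic 8kB writes
    wal_compression = lz4       # trades CPU for smaller full-page images in WAL
    wal_buffers = 64MB          # default (-1) autosizes from shared_buffers
    wal_writer_delay = 200ms    # background WAL writer flush interval
    wal_init_zero = on          # 'off' can help on copy-on-write filesystems (e.g. ZFS)
    max_wal_senders = 10        # must be 0 when wal_level = minimal
    bgwriter_delay = 200ms      # background writer pacing for dirty heap pages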