OpenBSD IO Benchmarking: How Many Jobs Are Worth It?

27 points by PaulHoule · 4 comments · 6/8/2025, 10:26:43 PM · rsadowski.de

Comments (4)

wtallis · 3h ago
The drive in question is a low-end consumer part using QLC NAND and a DRAMless controller. It's hard to conclude much about the scalability of an operating system's IO stack when using a single drive that has such limited performance potential. It also appears the test methodology didn't take into account the impact of SLC caching on write tests: it's usually a good idea to limit the test to a specific quantity of data rather than a specific runtime, and to have substantial idle time between test runs to allow the drive to flush the cache back to a consistent starting state. On a drive like this, it's also important to ensure that the OpenBSD and Linux tests had the drive at about the same state of fullness.
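
As a rough sketch, a write test bounded by data volume rather than wall-clock time might look like the fio invocation below. The file path, transfer size, and idle period are illustrative assumptions, not settings from the article:

    # Write a fixed amount of data instead of running for a fixed time,
    # so every run exercises the SLC cache by the same amount.
    fio --name=bounded-write --filename=/path/to/testfile \
        --rw=write --bs=128k --size=16g --ioengine=psync

    # Idle between runs so the drive can fold its SLC cache back to
    # QLC and return to a consistent starting state (duration is a guess).
    sleep 600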

All that notwithstanding, it does look like the defaults for fio on Linux have real, significant performance advantages. The logical next step would be to analyze which IO engine fio defaults to on each OS (i.e., which API is used to perform IO), and what kind of caching and prefetching each OS does by default. On Linux, io_uring makes it possible to saturate a drive with just one thread submitting IO to the OS, so you can stress the OS and drive without burning a ton of CPU cores on application-level overhead and context switches.
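
For reference, a single-submitter io_uring run in fio could look something like this (the device path, queue depth, and sizes are assumptions for illustration):

    # One job thread, deep queue: io_uring lets a single submitter
    # keep the drive saturated without spawning many worker threads.
    fio --name=uring-read --filename=/dev/nvme0n1 --direct=1 \
        --rw=randread --bs=4k --ioengine=io_uring --iodepth=64 \
        --numjobs=1 --size=8g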

dfc · 5h ago
There is something weird with the charts: the X axis never starts at 0, and in the Linux vs OpenBSD graphs the Linux data is missing at jobs=1.
synack · 5h ago
It doesn’t say what CPU they tested on, but core count should correlate with the results.
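
For what it's worth, capturing the core count alongside the results is a one-liner on each OS:

    # Linux
    nproc && lscpu | grep 'Model name'
    # OpenBSD (and FreeBSD)
    sysctl hw.model hw.ncpu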

I’m curious how FreeBSD performs on the same test.

tiffanyh · 5h ago
Post says AMD64.

But that’s all I saw.