This is wrong, because your mmap code is being stalled by page faults (including soft page faults, which you get when the data is in memory but not yet mapped into your process).
The io_uring code looks like it is doing all the fetch work in the background (with 6 threads), then just handing the completed buffers to the counter.
Do the same with 6 threads that first read the first byte of each page and then hand that page section to the counter, and you'll find similar performance.
And you can use both madvise and huge pages to control the mmap behavior.
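Something like this sketch (pthreads; the thread count and chunking are assumptions on my part, not the article's code):

    #include <pthread.h>
    #include <stddef.h>
    #include <stdint.h>

    #define NTHREADS 6
    #define PAGE_SZ  4096

    struct prefault_arg { const volatile uint8_t *base; size_t len; };

    /* Each thread reads one byte per page in its slice, so the kernel
     * faults the pages in concurrently before the counter touches them. */
    static void *prefault(void *p) {
        struct prefault_arg *a = p;
        volatile uint8_t sink = 0;
        for (size_t off = 0; off < a->len; off += PAGE_SZ)
            sink += a->base[off];
        (void)sink;
        return NULL;
    }

    static void prefault_parallel(const void *buf, size_t len) {
        pthread_t t[NTHREADS];
        struct prefault_arg a[NTHREADS];
        size_t chunk = len / NTHREADS;
        for (int i = 0; i < NTHREADS; i++) {
            a[i].base = (const volatile uint8_t *)buf + (size_t)i * chunk;
            a[i].len  = (i == NTHREADS - 1) ? len - (size_t)i * chunk : chunk;
            pthread_create(&t[i], NULL, prefault, &a[i]);
        }
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
    }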
lucketone · 2h ago
It would seem you summarised the whole post.
That’s the point: “mmap” is slow because it is serial.
arghwhat · 1h ago
mmap isn't "serial", the code that was using the mapping was "serial". The kernel will happily fill different portions of the mapping in parallel if you have multiple threads fault on different pages.
(That doesn't undermine that io_uring and disk access can be fast, but it's comparing a lazy implementation using approach A with a quite optimized one using approach B, which does not make sense.)
amelius · 16m ago
OK, so we need a comparison between a multi threaded mmap approach and io_uring. Which would be faster?
bawolff · 9h ago
Shouldn't you also compare to mmap with the huge page option? My understanding is it's precisely meant for this circumstance. I don't think it's a fair comparison without it.
Respectfully, the title feels a little clickbaity to me. Both methods are still ultimately reading out of memory, they are just using different i/o methods.
jared_hulbert · 8h ago
The original blog post title is intentionally clickbaity. You know, to bait people into clicking. Also I do want to challenge people to really think here.
Seeing if the cached file data can be accessed quickly is the point of the experiment. I can't get mmap() to open a file with huge pages.
void* buffer = mmap(NULL, size_bytes, PROT_READ, (MAP_HUGETLB | MAP_HUGE_1GB), fd, 0); doesn't work.
You can see my code here https://github.com/bitflux-ai/blog_notes. Any ideas?
MAP_HUGETLB can't be used for mmaping files on disk, it can only be used with MAP_ANONYMOUS, with a memfd, or with a file on a hugetlbfs pseudo-filesystem (which is also in memory).
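For reference, the combinations that do work look roughly like this (a sketch; the hugetlbfs path is just an example, and huge pages have to be reserved first, e.g. via /proc/sys/vm/nr_hugepages):

    #define _GNU_SOURCE              /* memfd_create / MFD_HUGETLB (glibc >= 2.27) */
    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SZ ((size_t)2 << 20)     /* one 2MB huge page */

    int main(void) {
        /* 1. Anonymous huge-page mapping. */
        void *anon = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

        /* 2. A memfd backed by huge pages. */
        int mfd = memfd_create("huge", MFD_HUGETLB);
        if (mfd < 0 || ftruncate(mfd, SZ) < 0) return 1;
        void *from_mfd = mmap(NULL, SZ, PROT_READ | PROT_WRITE, MAP_SHARED, mfd, 0);

        /* 3. A file on a mounted hugetlbfs (path is an example). */
        int hfd = open("/dev/hugepages/example", O_CREAT | O_RDWR, 0600);
        if (hfd < 0 || ftruncate(hfd, SZ) < 0) return 1;
        void *from_fs = mmap(NULL, SZ, PROT_READ | PROT_WRITE, MAP_SHARED, hfd, 0);

        /* Each mmap returns MAP_FAILED (EINVAL/ENOMEM) if huge pages
         * aren't reserved or sizes/offsets aren't huge-page aligned. */
        return (anon == MAP_FAILED || from_mfd == MAP_FAILED ||
                from_fs == MAP_FAILED) ? 1 : 0;
    }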
mananaysiempre · 3h ago
It looks like there is in theory support for that[1]? But the patches for ext4[2] did not go through.
[1] https://lwn.net/Articles/686690/
[2] https://lwn.net/Articles/718102/
> MAP_HUGETLB can't be used for mmaping files on disk
False. I've successfully used it to memory-map networked files.
squirrellous · 3h ago
This is quite interesting since I, too, was under the impression that mmap cannot be used on disk-backed files with huge pages. I tried and failed to find any official kernel documentation around this, but I clearly remember trying to do this at work (on a regular ECS machine with Ubuntu) and getting errors.
Based on this SO discussion [1], it is possibly a limitation with popular filesystems like ext4?
If anyone knows more about this, I'd love to know what exactly the requirements are for using hugepages this way.
[1] https://stackoverflow.com/questions/44060678/huge-pages-for-...
That doesn’t sound like the intended meaning of “on disk”.
inetknght · 6h ago
Kernel doesn't really care about "on disk", it cares about "on filesystem".
The "on disk" distinction is a simplification.
pclmulqdq · 5h ago
The kernel absolutely does care about the "on disk" distinction because it determines what driver to use.
ddtaylor · 4h ago
The interface is handled by the kernel.
loloquwowndueo · 6h ago
Share your code?
inetknght · 6h ago
I don't work there any more (it was a decade ago) and I'm pretty busy right now with a new job coming up (offered today).
Do you have kernel documentation that says that hugetlb doesn't work for files? I don't see that stated anywhere.
Sesse__ · 30m ago
It's filesystem-dependent. In particular, tmpfs will work. To the best of my knowledge, no “normal” filesystems (e.g., ext4, xfs) will.
jandrewrogers · 7h ago
Read the man pages, there are restrictions on using the huge page option with mmap() that mean it won’t do what you might intuit it will in many cases. Getting reliable huge page mappings is a bit fussy on Linux. It is easier to control in a direct I/O context.
nextaccountic · 1h ago
The real difference is that with io_uring and O_DIRECT you manage the cache yourself (and can't share with other processes, and the OS can't reclaim the cache automatically if under memory pressure), and with mmap this is managed by the OS.
If Linux had an API to say "manage this buffer you handed me from io_uring as if it were a VFS page cache (and as such it can be shared with other processes, like mmap); if you want it back just call this callback (so I can clean up my references to it) and you are good to go", then io_uring could really replace mmap.
What Linux has currently is PSI, which lets the OS reclaim memory when needed but doesn't help with the buffer sharing thing
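For concreteness, the manage-it-yourself side of that trade-off looks roughly like this with liburing (a minimal single-read sketch, not the article's code; O_DIRECT needs the usual block-aligned buffers and offsets):

    #define _GNU_SOURCE              /* for O_DIRECT */
    #include <liburing.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BUF_SZ (1 << 20)         /* 1MB, a multiple of the device block size */

    int main(int argc, char **argv) {
        if (argc < 2) return 1;

        /* O_DIRECT bypasses the page cache, so this process owns the buffer. */
        int fd = open(argv[1], O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        void *buf;
        if (posix_memalign(&buf, 4096, BUF_SZ)) return 1;

        struct io_uring ring;
        if (io_uring_queue_init(64, &ring, 0)) return 1;

        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, BUF_SZ, 0);   /* 1MB at offset 0 */
        io_uring_submit(&ring);

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        printf("read %d bytes\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        free(buf);
        close(fd);
        return 0;
    }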
avallach · 1h ago
Maybe I'm misunderstanding, but after reading it sounds to me not like "io_uring is faster than mmap" but "raid0 with 8 SSDs has more throughput than 3 channel DRAM".
nine_k · 1h ago
The title has been edited incorrectly. The original page title is "Memory is slow, Disk is fast", and it states exactly what you say: an NVMe RAID can offer more bandwidth than RAM.
nialv7 · 19m ago
Things to try: MADV_SEQUENTIAL, hugetlb, background prefetching thread.
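As a rough sketch of the first one (the file-backed mapping is assumed):

    #include <stddef.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* Hint that `buf` (a file-backed mapping of `len` bytes) will be read
     * sequentially so the kernel can use more aggressive readahead;
     * MADV_WILLNEED additionally asks it to start reading ahead now. */
    static void hint_sequential(void *buf, size_t len) {
        if (madvise(buf, len, MADV_SEQUENTIAL) != 0)
            perror("madvise(MADV_SEQUENTIAL)");
        if (madvise(buf, len, MADV_WILLNEED) != 0)
            perror("madvise(MADV_WILLNEED)");
    }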
modeless · 6h ago
Wait, PCIe bandwidth is higher than memory bandwidth now? That's bonkers, when did that happen? I haven't been keeping up.
Just looked at the i9-14900k and I guess it's true, but only if you add all the PCIe lanes together. I'm sure there are other chips where it's even more true. Crazy!
rwmj · 38m ago
That's the promise (or requirement?) of CXL - have your memory managed centrally, servers access it over PCIe. https://en.wikipedia.org/wiki/Compute_Express_Link I wonder how many are actually using CXL. I haven't heard of any customers deploying it so far.
DiabloD3 · 6h ago
"No."
DDR5-8000 is 64GB/s per channel. Desktop CPUs have two channels.
PCI-E 5.0 in x16 is 64GB/s. Desktops have one x16.
pseudosavant · 4h ago
One x16 slot. They'll use PCIe lanes in other slots (x4, x1, M2 SSDs) and also for devices off the chipset (network, USB, etc). The current top AMD/Intel CPUs can do ~100GB/sec over 28 lanes of mostly PCIe 5.
modeless · 5h ago
Hmm, Intel specs the max memory bandwidth as 89.6 GB/s. DDR5-8000 would be out of spec. But I guess it's pretty common to run higher specced memory, while you can't overclock PCIe (AFAIK?). Even so I guess my math was wrong, it doesn't quite add up to more than memory bandwidth. But it's pretty darn close!
DiabloD3 · 4h ago
There is a difference between recommended and max achievable.
Zen 5 can hit that (and that's what I run), and Arrow Lake can also.
The recommended from AMD on Zen 4 and 5 is 6000 (or 48x2), and for Arrow Lake it is 6400 (or 51.2x2); both of them continue to increase in performance up to 8000, and both of them have extreme trouble going past 8000 and getting a stable machine.
AnthonyMouse · 3h ago
> Wait, PCIe bandwidth is higher than memory bandwidth now?
Hmm.
Somebody make me a PCIe card with RDIMM slots on it.
https://www.servethehome.com/micron-cz120-cxl-memory-module-...
On server chips it's kind of ridiculous. 5th gen Epyc has 128 lanes of PCIe 5.0 for over 1TB/s of PCIe bandwidth (compared to 600GB/s RAM bandwidth from 12-channel DDR5 at 6400).
andersa · 6h ago
Your math is a bit off. 128 lanes gen5 is 8 times x16, which has a combined theoretical bandwidth of 512GB/s, and more like 440GB/s in practice after protocol overhead.
Unless we are considering both read and write bandwidth, but that seems strange to compare to memory read bandwidth.
pclmulqdq · 5h ago
People like to add read and write bandwidth for some silly reason. Your units are off, too, though: gen 5 is 32 GT/s, meaning 64 GB/s (or 512 gigabits per second) each direction on an x16 link.
andersa · 5h ago
I meant for all 128 lanes being used, not each x16. Then you get 512GB/s.
wmf · 4h ago
PCIe is full duplex while DDR5 is half duplex so in theory PCIe is higher. It's rare to max out PCIe in both directions though.
mrcode007 · 4h ago
happens frequently in fact when training neural nets on modern hw
hsn915 · 9h ago
Shouldn't this be "io_uring is faster than mmap"?
I guess that would not get much engagement though!
That said, cool write up and experiment.
nine_k · 1h ago
No. "io_uring faster than mmap" is sort of a truism: sequential page faults are slower than carefully orchestrated async I/O. The point of the article is that reading directly from a PCIe device, such as an NVMe flash, can actually be faster than caching things in RAM first.
dang · 6h ago
Let's use that. Since HN's guidelines say "Please use the original title, unless it is misleading or linkbait", that "unless" clause seems to kick in here, so I've changed the title above. Thanks!
If anyone can suggest a better title (i.e. more accurate and neutral) we can change it again.
jared_hulbert · 9h ago
Lol. Thanks.
MaxikCZ · 3h ago
It's not even about clickbait for me, but I really don't want to have to parse an article to figure out what is meant by "Memory is slow, Disk is fast". You want "clickbait" to make people click and think; we want descriptive titles to know what the article is about before we read it. That used to be the original purpose of titles, and we like it that way.
It's as if you'd label your food product "you won't believe this" and force customers to figure out what it is from the ingredients list.
robertlagrant · 1h ago
> It's as if you'd label your food product "you won't believe this" and force customers to figure out what it is from the ingredients list.
Indeed[0].
[0] https://en.wikipedia.org/wiki/I_Can't_Believe_It's_Not_Butte...!
> Because PCIe bandwidth is higher than memory bandwidth
This doesn't sound right: a PCIe 5.0 x16 slot offers up to 64 GB/s fully saturated, while a fairly old Xeon server can sustain >100 GB/s memory reads per NUMA node without much trouble.
Some newer HBM-enabled parts, like a Xeon Max 9480, can go over 1.6 TB/s for HBM (up to 64GB), and DDR5 can reach >300 GB/s.
Even saturating all PCIe lanes (196 on a dual socket Xeon 6), you could at most theoretically get ~784GB/s, which coincidentally is the max memory bandwidth of such CPUs (12 Channels x 8,800 MT/s = 105,600 MT/s total bandwidth or roughly ~784GB/s).
I mean, solid state IO is getting really close, but it's not so fast on non-sequential access patterns.
I agree that many workloads could be shifted to SSDs but it's still quite nuanced.
jared_hulbert · 6h ago
Not by a ton, but if you add up the DDR5 channel bandwidth and the PCIe lanes, on most systems the PCIe bandwidth is higher. Yes. HBM and L3 cache will be higher than the PCIe.
themafia · 6h ago
no madvise() call with MADV_SEQUENTIAL?
pixelpoet · 4h ago
> A few notes for the "um actually" haters commenting on Hacker News
Stay classy; any criticism is of course "hating", right?
The fact that your title is clickbaity and your results suspect should encourage you to get the most accurate picture, not shoot the messenger.
menaerus · 14m ago
People should just ignore such low-quality material and stop feeding the troll. Information found in both the first and second part of the "Memory is slow, disk is fast" series is wrong on so many levels that it isn't worth correcting or commenting on. It is obviously written with the help of AI without any actual fact-checking, all under the impression that the author is worthwhile, which he isn't.
Just look at this bs:
> Early x86 processors took a few clocks to execute most instructions, modern processors have been able parallelize to where they can actually execute 2 instructions every clock.
unwind · 1h ago
I can bite (softly) on part of that, since there is C code in the post. :)
The printf format is wrong: since `count` has type `size_t` you should print it using `%zu`, which is the dedicated purpose-built conversion specifier for `size_t` values. Passing an unsigned value to `%d`, which is for (signed) `int`, is wrong too.
The (C17 draft) standard says "If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined", so this is not intended as pointless language-lawyering; it's just that it can be important to get silly details like this right in C.
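A minimal illustration of the fix (the variable name comes from the post; the rest is a stand-in):

    #include <stdio.h>
    #include <stddef.h>

    int main(void) {
        size_t count = 42;                 /* stand-in for the post's counter */

        /* printf("found %d 10s\n", count);   undefined behavior: %d expects int */
        printf("found %zu 10s\n", count);  /* %zu is the specifier for size_t */
        return 0;
    }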
titanomachy · 9h ago
Very interesting article, thanks for publishing these tests!
Is the manual loop unrolling really necessary to get vectorized machine code? I would have guessed that the highest optimization levels in LLVM would be able to figure it out from the basic code. That's a very uneducated guess, though.
Also, curious if you tried using the MAP_POPULATE option with mmap. Could that improve the bandwidth of the naive in-memory solution?
> humanity doesn't have the silicon fabs or the power plants to support this for every moron vibe coder out there making an app.
lol. I bet if someone took the time to make a high-quality well-documented fast-IO library based on your io_uring solution, it would get use.
jared_hulbert · 7h ago
YES! gcc and clang don't like to optimize this, but they do if you hardcode size_bytes to an aligned value. It kind of makes sense: what if a user passes size_bytes as 3? With enough effort the compilers could handle this, but it's a lot to ask.
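For anyone following along, this is the shape of loop being discussed (a sketch, not the post's exact code):

    #include <stddef.h>
    #include <stdint.h>

    /* Count 32-bit words equal to 10. gcc/clang will typically vectorize
     * this at -O3, but with an arbitrary runtime n they must also emit a
     * scalar tail (what if n == 3?); when n is a compile-time constant
     * that is a multiple of the vector width, the generated code can be
     * pure vector ops. */
    size_t count_tens(const uint32_t *buf, size_t n) {
        size_t count = 0;
        for (size_t i = 0; i < n; i++)
            count += (buf[i] == 10);
        return count;
    }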
I just ran MAP_POPULATE the results are interesting.
It speeds up the counting loop. Same speed or higher than my read()-to-a-malloced-buffer tests.
HOWEVER... It takes a longer time overall to do the population of the buffer. The end result is it's 2.5 seconds slower to run the full test when compared to the original. I did not guess that one correctly.
time ./count_10_unrolled ./mnt/datafile.bin 53687091200
unrolled loop found 167802249 10s processed at 5.39 GB/s
./count_10_unrolled ./mnt/datafile.bin 53687091200 5.58s user 6.39s system 99% cpu 11.972 total
time ./count_10_populate ./mnt/datafile.bin 53687091200
unrolled loop found 167802249 10s processed at 8.99 GB/s
./count_10_populate ./mnt/datafile.bin 53687091200 5.56s user 8.99s system 99% cpu 14.551 total
titanomachy · 4h ago
Hmm, I expected some slowdown from POPULATE, but I thought it would still come out ahead. Interesting!
mischief6 · 7h ago
it could be interesting to see what ispc does with similar code.
inetknght · 9h ago
Nice write-up with good information, but not the best. Comments below.
Are you using Linux? I assume so, since you state use of mmap() and mention using EPYC hardware (which rules out macOS). I suppose you could use any other *nix though.
> We'll use a 50GB dataset for most benchmarking here, because when I started this I thought the test system only had 64GB and it stuck.
So the OS will (or could) prefetch the file into memory. OK.
> Our expectation is that the second run will be faster because the data is already in memory and as everyone knows, memory is fast.
Indeed.
> We're gonna make it very obvious to the compiler that it's safe to use vector instructions which could process our integers up to 8x faster.
There are even-wider vector instructions by the way. But, you mention another page down:
> NOTE: These are 128-bit vector instructions, but I expected 256-bit. I dug deeper here and found claims that Gen1 EPYC had unoptimized 256-bit instructions. I forced the compiler to use 256-bit instructions and found it was actually slower. Looks like the compiler was smart enough to know that here.
Yup, indeed :)
Also note that AVX2 and/or AVX512 instructions are notorious for causing thermal throttling on certain (older by now?) CPUs.
> Consider how the default mmap() mechanism works, it is a background IO pipeline to transparently fetch the data from disk. When you read the empty buffer from userspace it triggers a fault, the kernel handles the fault by reading the data from the filesystem, which then queues up IO from disk. Unfortunately these legacy mechanisms just aren't set up for serious high performance IO. Note that at 610MB/s it's faster than what a disk SATA can do. On the other hand, it only managed 10% of our disk's potential. Clearly we're going to have to do something else.
In the worst case, that's true. But you can also get the kernel to prefetch the data.
See several of the flags, but if you're doing sequential reading you can use MAP_POPULATE [0] which tells the OS to start prefetching pages.
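A minimal sketch of that (not the post's code):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        if (argc < 2) return 1;

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* MAP_POPULATE asks the kernel to fault the pages in up front,
         * so a sequential scan doesn't stall on page faults. */
        void *buf = mmap(NULL, st.st_size, PROT_READ,
                         MAP_SHARED | MAP_POPULATE, fd, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        /* ... scan buf sequentially ... */

        munmap(buf, st.st_size);
        close(fd);
        return 0;
    }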
You also mention 4K page table entries. Page table entries can get to be very expensive in CPU to look up. I had that happen at a previous employer with an 800GB file; most of the CPU was walking page tables. I fixed it by using (MAP_HUGETLB | MAP_HUGE_1GB) [0] which drastically reduces the number of page tables needed to memory map huge files.
Importantly: when the OS realizes that you're accessing the same file a lot, it will just keep that file in memory cache. If you're only mapping it with PROT_READ and MAP_SHARED, then it won't even need to duplicate the physical memory to a new page: it can just re-use existing physical memory with a new process-specific page table entry. This often ends up caching the file on first access.
I had done some DNA calculations with fairly trivial 4-bit-wide data, each bit representing one of DNA basepairs (ACGT). The calculation was pure bitwise operations: or, and, shift, etc. When I reached the memory bus throughput limit, I decided I was done optimizing. The system had 1.5TB of RAM, so I'd cache the file just by reading it upon boot. Initially caching the file would take 10-15 minutes, but then the calculations would run across the whole 800GB file in about 30 seconds. There were about 2000-4000 DNA samples to calculate three or four times a day. Before all of this was optimized, the daily inputs would take close to 10-16 hours to run. By the time I was done, the server was mostly idle.
[0]: https://www.man7.org/linux/man-pages/man2/mmap.2.html
This doesn't work with a file on my ext4 volume. What am I missing?
inetknght · 7h ago
What issue are you having? Are you receiving an error? This is the kind of question that StackOverflow or perhaps an LLM might be able to help you with. I highly suggest reading the documentation for mmap to understand what issues could happen and/or what a given specific error code might indicate; see the NOTES section:
> Huge page (Huge TLB) mappings
> For mappings that employ huge pages, the requirements for the arguments of mmap() and munmap() differ somewhat from the requirements for mappings that use the native system page size.
> For mmap(), offset must be a multiple of the underlying huge page size. The system automatically aligns length to be a multiple of the underlying huge page size.
Ensure that the file is at least the huge page size, and preferably sized to align with a huge page boundary. Then ensure that the length parameter (size_bytes in your example) is also aligned to a huge page boundary.
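For example, rounding a length up to the huge page size (1GB is assumed here; check /sys/kernel/mm/hugepages for what the system actually provides):

    #include <stddef.h>

    #define HUGE_1GB ((size_t)1 << 30)

    /* Round len up to a multiple of the 1GB huge page size so mmap()'s
     * length requirement for huge-page mappings is satisfied. */
    static size_t round_up_huge(size_t len) {
        return (len + HUGE_1GB - 1) & ~(HUGE_1GB - 1);
    }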
There are also other important things to understand for these flags, which are described in the documentation, such as information available from /sys/kernel/mm/hugepages.
https://www.man7.org/linux/man-pages/man2/mmap.2.html
Question for someone who's read it in more detail: it looks like the uring code is optimized for async, while the mmap code doesn't do any prefetching, so it just chokes when the OS has to do work?
wahern · 4h ago
My first thought is that what's different here isn't async, per se, but parallelism. io_uring uses a kernel thread pool to service I/O requests, so you actually end up with multiple threads running in parallel handling bookkeeping work. AFAIU, SSD controllers also can service requests in parallel, even if the request stream is serialized. These two sources of parallelism are why the I/O results come back out-of-order.
Generic readahead, which is what the mmap case is relying on, benefits from at least one async thread running in parallel, but I suspect for any particular file you effectively get at most one thread running in parallel to fill the page cache.
What may also be important is the VM management. The splice and vmsplice syscalls came about because someone requested that Linux adopt a FreeBSD optimization--for sufficiently sized write calls (i.e. page size or larger), the OS would mark the page(s) CoW and zero-copy the data to disk or the network. But Linus measured that the cost of fiddling with VM page attributes on each call was too costly and erased most of the zero-copy benefit. So another thing to take note of is that the io_uring case doesn't induce any page faults at all or require any costly VM fiddling (the shared io_uring buffers are installed upfront), whereas in the mmap case there are many page faults and fixups, possibly as many as one for every 4K page. The io_uring case may even result in additional data copies, but with less cost than the VM fiddling, which is even greater now than 20 years ago.
jared_hulbert · 10h ago
Cool. Original author here. AMA.
whizzter · 3h ago
Like people mention, hugetlb etc. could be an improvement, but the core issue holding it down probably has to do with mmap, 4k pages and paging behaviour: mmap will cause a fault for each "small" 4k page not in memory, causing a kernel jump and then whatever machinery to fill in the page-cache (and bring up data from disk with the associated latency).
This in contrast with the io_uring worker method where you keep the thread busy by submitting requests and letting the kernel do the work without expensive crossings.
The 2GB fully-in-memory run shows the CPU's real perf. The dip at 50GB is interesting; perhaps when going over 50% of memory the Linux kernel evicts pages or something similar that hurts perf. Maybe plot a graph of perf vs test size to see if there is an obvious cliff.
nchmy · 9h ago
I just saw this post so am starting with Part 1. Could you replace the charts with ones on some sort of log scale? It makes it look like nothing happened til 2010, but I'd wager it's just an optical illusion...
And, even better, put all the lines on the same chart, or at least with the same y axis scale (perhaps make them all relative to their base on the left), so that we can see the relative rate of growth?
jared_hulbert · 9h ago
I tried with the log scale before. They failed to express the exponential hockey stick growth unless you really spend the time with the charts and know what log scale is. I'll work on incorporating log scale due to popular demand. They do show the progress has been nice and exponential over time.
When I put the lines on the same chart it made the y axis impossible to understand. The units are so different. Maybe I'll revisit that.
Yeah, around 2000-2010 the doubling is noticeable. Interestingly it's also when a lot of factors started to stagnate.
nchmy · 6h ago
The hockey stick growth is the entire problem - it's an optical illusion resulting from the fact that going from 100 to 200 is the same rate as 200 to 400. And 800, 1600. You understand exponents.
Log axis solves this, and turns meaningless hockey sticks into generally a straightish line that you can actually parse. If it still deviates from straight, then you really know there's true changes in the trendline.
Lines on same chart can all be divided by their initial value, anchoring them all at 1. Sometimes they're still a mess, but it's always worth a try.
You're enormously knowledgeable and the posts were fascinating. But this is stats 101. Not doing this sort of thing, especially explicitly in favour of showing a hockey stick, undermines the fantastic analysis.
john-h-k · 9h ago
You mention modern server CPUs have capability to “read direct to L3, skipping memory”. Can you elaborate on this?
The PCIe bus and memory bus both originate from the processor or IO die of the "CPU". When you use an NVMe drive you are really just sending it a bunch of structured DMA requests. Normally you are telling the drive to DMA to an address that maps to memory, so you can direct it to cache and bypass sending it out on the DRAM bus.
In theory... the specifics of what is supported exactly? I can't vouch for that.
josephg · 7h ago
I’d be fascinated to see a comparison with SPDK. That bypasses the kernel’s NVMe / SSD driver and controls the whole device from user space - which is supposed to avoid a lot of copies and overhead.
You might be able to set up SPDK to send data directly into the cpu cache? It’s one of those things I’ve wanted to play with for years but honestly I don’t know enough about it.
https://spdk.io/
SPDK and I go way back. I'm confident it'd be about the same, possibly ~200-300MB/s more; I was pretty close to the rated throughput of the drives. io_uring has really closed the gap that used to exist between the in-kernel and userspace solutions.
With the Intel connection they might have explicit support for DDIO. Good idea.
Jap2-0 · 10h ago
Would huge pages help with the mmap case?
jared_hulbert · 9h ago
Oh man... I'd have to look into that. Off the top of my head I don't know how you'd make that happen. Way back when I'd have said no. Now with all the folio updates to the Linux kernel memory handling I'm not sure. I think you'd have to take care to make sure the data gets into the page cache as huge pages. If not, then when you tried to madvise() or whatever the buffer to use huge pages, it would likely just ignore you. In theory it could aggregate the small pages into huge pages, but that would be more latency-bound work and it's not clear how that impacts the page cache.
But the arm64 systems with 16K or 64K native pages would have fewer faults.
inetknght · 9h ago
> I'd have to look into that. Off the top of my head I don't know how you'd make that happen.
Pass these flags to your mmap call: (MAP_HUGETLB | MAP_HUGE_1GB)
jared_hulbert · 9h ago
Would this actually create huge page page cache entries?
inetknght · 9h ago
It's right in the documentation for mmap() [0]! And, from my experience, using it with an 800GB file provided a significant speed-up, so I do believe the documentation is correct ;)
And, you can poke around in the linux kernel's source code to determine how it works. I had a related issue that I ended up digging around to find the answer to: what happens if you use mremap() to expand the mapping and it fails; is the old mapping still valid or not? Answer: it's still valid. I found that it was actually fairly easy to read linux kernel C code, compared to a lot (!) of other C libraries I've tried to understand.
[0]: https://www.man7.org/linux/man-pages/man2/mmap.2.html
Yes. Tens- or hundreds- of gigabytes of 4K page table entries take a while for the OS to navigate.