There's an error here: “NT instructions are used when there is an overlap between destination and source since destination may be in cache when source is loaded.”
Non-temporal instructions don't have anything to do with correctness. They are for cache management; a non-temporal write is a hint to the cache system that you don't expect to read this data (well, address) back soon, so it shouldn't push out other things in the cache. They may skip the cache entirely, or (more likely) go into just some special small subsection of it reserved for non-temporal writes only.
orlp · 4h ago
> Non-temporal instructions don't have anything to do with correctness. They are for cache management; a non-temporal write is a hint to the cache system that you don't expect to read this data (well, address) back soon
I disagree with this statement (taken at face value, I don't necessarily agree with the wording in the OP either). Non-temporal instructions are unordered with respect to normal memory operations, so without a _mm_sfence() after doing your non-temporal writes you're going to get nasty hardware UB.
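For illustration only (my sketch, not code from the article): the pattern under discussion is a streaming copy with non-temporal stores followed by a store fence, roughly like this with the SSE2 intrinsics from `<immintrin.h>`, assuming a 16-byte-aligned destination and a size that's a multiple of 16:

```c++
#include <immintrin.h>
#include <cstddef>

// Copy n bytes (assumed multiple of 16, dst assumed 16-byte aligned)
// with non-temporal stores, then publish with a store fence.
void nt_copy(void* dst, const void* src, std::size_t n) {
    auto* d = static_cast<char*>(dst);
    auto* s = static_cast<const char*>(src);
    for (std::size_t i = 0; i < n; i += 16) {
        __m128i v = _mm_loadu_si128(reinterpret_cast<const __m128i*>(s + i));
        _mm_stream_si128(reinterpret_cast<__m128i*>(d + i), v);  // NT store
    }
    _mm_sfence();  // make the NT stores visible before e.g. setting a "ready" flag
}
```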
Sesse__ · 3h ago
You mean if you access it from a different core? I believe that within the same core, you still have the normal ordering, but indeed, non-temporal writes don't have an implicit write fence after them like x86 stores normally do.
In any case, if so they are potentially _less_ correct; they never help you.
Intel's docs are unfortunately spartan, but the guarantees around program order are a hint that this is what it does.
Do you have any Intel references for it? I mean, Rust has its own memory model and it will not always give the same guarantees as when writing assembler.
Similarly, if I look up MOVNTDQ in the Intel manuals (https://www.intel.com/content/dam/www/public/us/en/documents...), they say:
“Because the WC protocol uses a weakly-ordered memory consistency model, a fencing operation implemented with the SFENCE or MFENCE instruction should be used in conjunction with VMOVNTDQ instructions if multiple processors might use different memory types to read/write the destination memory locations”
Note _if multiple processors_.
m0th87 · 4h ago
I had interpreted GP to mean that you don’t slap on NTs for correctness reasons, rather you do it for performance reasons.
orlp · 3h ago
That is something I can agree with, but I can't in good faith just let "it's just a hint, they don't have anything to do with correctness" stand unchallenged.
m0th87 · 4h ago
I work on optimizations like this at work, and yes this is largely correct. But do you have a source on this?
> or (more likely) go into just some special small subsection of it reserved for non-temporal writes only.
I hadn’t heard of this before. It looks like older x86 CPUs may have had a dedicated cache.
Tuna-Fish · 3h ago
IIRC they used the write-combining buffer, which was also a cache.
A common trick is to cache it but put it directly in the last or second-to-last bin in your pseudo-LRU order, so it's in cache like normal but gets evicted quickly when you need to cache a new line in the same set. Other solutions can lead to complicated situations when the user was wrong and the line gets immediately reused by normal instructions, this way it's just in cache like normal and gets promoted to least recently used if you do that.
Sesse__ · 3h ago
A source on what? The Intel optimization manuals explain what MOVNTQ is for. I don't think they explain in detail how it is implemented behind-the-scenes.
See e.g. https://cdrdv2.intel.com/v1/dl/getContent/671200 chapter 13.5.5:
“The non-temporal move instructions (MOVNTI, MOVNTQ, MOVNTDQ, MOVNTPS, and MOVNTPD) allow data to be moved from the processor’s registers directly into system memory without being also written into the L1, L2, and/or L3 caches. These instructions can be used to prevent cache pollution when operating on data that is going to be modified only once before being stored back into system memory. These instructions operate on data in the general-purpose, MMX, and XMM registers.”
I believe that non-temporal moves basically work similar to memory marked as write-combining; which is explained in 13.1.1: “Writes to the WC memory type are not cached in the typical sense of the word cached. They are retained in an internal write combining buffer (WC buffer) that is separate from the internal L1, L2, and L3 caches and the store buffer. The WC buffer is not snooped and thus does not provide data coherency. Buffering of writes to WC memory is done to allow software a small window of time to supply more modified data to the WC buffer while remaining as non-intrusive to software as possible. The buffering of writes to WC memory also causes data to be collapsed; that is, multiple writes to the same memory location will leave the last data written in the location and the other writes will be lost.”
In the old days (Pentium Pro and the likes), I think there was basically a 4- or 8-way associative cache, and non-temporal loads/stores would go to only one of the sets, so you could only waste 1/4 (or 1/8) on your cache on it at worst.
m0th87 · 3h ago
I see, thanks. I had assumed incorrectly that NT writes operated the same as NT accesses, where there is no dedicated cache.
PaulHoule · 11m ago
If I understand the chart at the end correctly, it looks like the better performance is only for small buffer sizes that fit in the cache (4k); for big buffers, the stdlib copy performs about the same as the optimized copy that he writes.
userbinator · 8h ago
It's not clear from a skim of this article, but a common problem I've seen in the past with memory-copying benchmarks is failing to serialise and then access the copied data at its destination, to ensure the copy actually completed before the timing is stopped. A simple REP MOVS should be at or near the top, especially on CPUs with ERMSB.
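Not the author's code, but for reference, the REP MOVS being referred to can be tried with a one-line GCC/Clang inline-asm wrapper on x86-64 (my sketch):

```c++
#include <cstddef>

// Minimal sketch: copy n bytes with REP MOVSB (fast on CPUs with ERMSB).
// Assumes x86-64 and GCC/Clang inline-asm syntax.
void repmovsb_copy(void* dst, const void* src, std::size_t n) {
    asm volatile("rep movsb"
                 : "+D"(dst), "+S"(src), "+c"(n)  // RDI, RSI, RCX are consumed/updated
                 :
                 : "memory");
}
```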
kachapopopow · 8h ago
Yeah, these benchmarks are irrelevant, since the CPU executes instructions out of order: the majority of the time, the CPU will keep executing subsequent instructions while a copy operation is still in flight.
viraptor · 7h ago
The full reorder buffer is still going to be only 200-500 instructions. The actual benchmark is not linked, but it would take only a hundred or so messages to largely ignore the reordering. On the other hand, when you use the library, the write needs to actually finish in the shared memory before you notify the other process. So unless the benchmark was tiny for some reason, why would this be irrelevant?
Arech · 7h ago
It's not clear how the author controlled for HW caching. Without this, the results are, unfortunately, meaningless, even though some good work has been done.
coxley · 19m ago
Ha, I love the project name "Shadesmar". Journey before destination, friend. :crossed-wrists:
jesse__ · 7h ago
Would have loved to see performance comparisons along the way, instead of just the small squashed graph at the end. Nice article otherwise :)
brucehoult · 8h ago
Conclusion
Stick to `std::memcpy`. It delivers great performance while also adapting to the hardware architecture, and makes no assumptions about the memory alignment.
----
So that's five minutes I'll never get back.
I'd make an exception for RISC-V machines with "RVV" vectors, where vectorised `memcpy` hasn't yet made it into the standard library and a simple ... often beats `memcpy` by a factor of 2 or 3 on copies that fit into L1 cache: https://hoult.org/d1_memcpy.txt
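(The snippet itself is elided above; the full assembly version is at the link. The same idea, sketched with the RVV C intrinsics and assuming a toolchain that ships `<riscv_vector.h>`, looks roughly like this:)

```c++
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

// Simple strip-mined RVV copy: each iteration copies as many bytes
// as the hardware grants via vsetvl.
void rvv_memcpy(void* dst, const void* src, size_t n) {
    uint8_t* d = (uint8_t*)dst;
    const uint8_t* s = (const uint8_t*)src;
    while (n > 0) {
        size_t vl = __riscv_vsetvl_e8m8(n);          // elements this iteration
        vuint8m8_t v = __riscv_vle8_v_u8m8(s, vl);   // vector load
        __riscv_vse8_v_u8m8(d, v, vl);               // vector store
        s += vl;
        d += vl;
        n -= vl;
    }
}
```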
Confirming the null hypothesis with good supporting data is still interesting. Could save you from doing this yourself.
makach · 3h ago
You pre-stole my comment, I was about to make the exact same post :-D
Although the blog post is about going faster and shows alternative algorithms, the conclusion still favors the safe option, which makes perfect sense. However, he did show us a few strategies, which is useful. The five minutes I spent will never be returned to me, but at least I learned something interesting...
davrosthedalek · 4h ago
> Since the loop copies data pointer by pointer, it can handle the case of overlapping data.
I don't think this loop does the right thing if destination points somewhere into source. It will start overwriting the non-copied parts of source.
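A tiny example (mine, not from the article) of why the forward loop breaks when the destination starts inside the source, and what `std::memmove` exists for:

```c++
#include <cstdio>
#include <cstring>

int main() {
    char buf[] = "abcdef";
    // Naive forward copy of 4 bytes from buf to buf+2 (regions overlap).
    for (int i = 0; i < 4; ++i)
        buf[2 + i] = buf[i];          // clobbers 'c' and 'd' before they are read
    std::printf("%s\n", buf);          // prints "ababab", not the intended "ababcd"
    // std::memmove(buf + 2, buf, 4) on the original data would give "ababcd".
    return 0;
}
```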
kvemkon · 3h ago
BTW, if we can copy data between some device and RAM efficiently using DMA, without spending CPU cycles, why can't we use DMA to copy RAM-to-RAM?
shakna · 2h ago
You can copy that way.
It's faster if you use the CPU, but you absolutely can just use DMA - and some embedded systems do.
kvemkon · 2h ago
> It's faster if you use the CPU
But not for AMD? E.g. 8 Zen 5 cores in the CCD have only 64 GB/s read and 32 GB/s write bandwidth, while the dual-channel memory controller in the IOD has up to 87 GB/s bandwidth.
waschl · 8h ago
Thought about zero-copy IPC recently. In order to avoid memcopy for the complete chain, I guess it would be best if the sender allocates its payload directly in the shared memory when it's created. Is this a standard thing in such optimized IPC, and which libraries offer this?
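One way that allocation pattern can look on POSIX shared memory, as a rough sketch (the struct and names here are made up for illustration, not taken from any particular library):

```c++
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <new>

struct Message {
    unsigned size;
    char payload[4096];
};

// Sender: create a shared-memory object and construct the message in place,
// so the payload never exists anywhere except the shared mapping.
// `name` should look like "/my-queue".
Message* create_shared_message(const char* name) {
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) return nullptr;
    if (ftruncate(fd, sizeof(Message)) != 0) { close(fd); return nullptr; }
    void* mem = mmap(nullptr, sizeof(Message), PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    close(fd);  // the mapping keeps the object alive
    if (mem == MAP_FAILED) return nullptr;
    return new (mem) Message{};  // placement-new directly in shared memory
}
```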
comex · 7h ago
IPC libraries often specifically avoid zero-copy for security reasons. If a malicious message sender can modify the message while the receiver is in the middle of parsing it, you have to be very careful not to enable time-of-check-time-of-use attacks. (To be fair, not all use cases need to be robust against a malicious sender.)
o11c · 7h ago
On Linux, that's exactly what `memfd` seals are for.
That said, even without seals, it's often possible to guarantee that you only read the memory once; in this case, even if the memory is technically mutating after you start, it doesn't matter since you never see any inconsistent state.
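Roughly, the seal dance looks like this (Linux-specific sketch, error handling omitted; assumes a glibc recent enough to declare memfd_create):

```c++
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>
#include <cstddef>

// Sender: fill a memfd, then seal it so the receiver can map it
// knowing the contents can never change underneath it.
int make_sealed_buffer(const void* data, std::size_t len) {
    int fd = memfd_create("ipc-msg", MFD_ALLOW_SEALING);
    ftruncate(fd, len);
    void* p = mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    std::memcpy(p, data, len);
    munmap(p, len);
    fcntl(fd, F_ADD_SEALS,
          F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE | F_SEAL_SEAL);
    return fd;  // pass to the receiver over a UNIX socket with SCM_RIGHTS
}
```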
murderfs · 2h ago
It is very easy for zero-copy IPC using sealed memfd to be massively slower than just copying, because of the cost associated with doing a TLB shootdown on munmap. In order to see a benefit over just writing into a pipe, you'd likely need to be sending gigantic blobs, mapping them in both the reader and the writer into an address space that isn't shared with any other threads that are doing anything, and deferring and batching munmapping (and Linux doesn't really provide you an actual way to do this, aside from mapping them all in consecutive pages with MAP_FIXED and munmapping multiple mappings with a single call).
Any realistic high-performance zero copy IPC mechanism needs to avoid changing the page tables like the plague, which means things like memfd seals aren't really useful.
kragen · 4h ago
Thanks for the reference! I had been wondering if there was a way to do this on Linux for years. https://lwn.net/Articles/591108/ seems to be the relevant note?
duped · 6h ago
What's the threat model where a malicious message sender has write access to shared memory?
kragen · 4h ago
When you are using the shared memory to communicate with an untrusted sender. Examples might include:
- browser main processes that don't trust renderer processes
- window system compositors that don't trust all windowed applications, and vice versa
- database servers that don't trust database clients, and vice versa
- message queue brokers that don't trust publishers and subscribers, and vice versa
- userspace filesystems that don't trust normal user processes
hmry · 6h ago
How would someone send a message over shared memory without write access to that memory?
IshKebab · 5h ago
I think he meant what's the scenario where you're using IPC via shared memory and don't trust both processes. Basically it only applies if the processes are running as two different users. (I think Android does that a lot?)
a_t48 · 5h ago
I've looked into this a bit - the big blocker isn't on the transport/IPC library, but the serializer itself, assuming you _also_ want to support serializing messages to disk or over network. It's a bit of a pickle - at least in C++, tying an allocator to a structure and its children is an ugly mess. And what happens if you do something like resize a string? Does it mean a whole new allocation? I've (partially) solved it before for single process IPC by having a concept of a sharable structure and its serialization type, you could do the same for shared memory. One could also use a serializer that offers promises around allocations, FlatBuffer might fit the bill. There's also https://github.com/Verdant-Robotics/cbuf but I'm not sure how well maintained it is right now, publicly.
As for allocation - it looks like Zenoh might offer the allocation pattern necessary. https://zenoh-cpp.readthedocs.io/en/1.0.0.5/shm.html TBH most of the big wins come from not copying big blocks of memory around from sensor data and the like. A thin header and reference to a block of shared memory containing an image or point cloud coming in over UDS is likely more than performant enough for most use cases. Again, big wins from not having to serialize/deserialize the sensor data.
Another pattern which I haven't really seen anywhere is handling multiple transports - at one point I had the concept of setting up one transport as an allocator (to put into shared memory or the like) - serialize once to shared memory, hand that serialized buffer to your network transport(s) or your disk writer. It's not quite zero copy but in practice most zero copy is actually at least one copy on each end.
(Sorry, this post is a little scatterbrained, hopefully some of my points come across)
dataflow · 8h ago
> I guess it would be best if the sender allocates its payload directly on the shared memory when it’s created.
On an SMP system yes. On a NUMA system it depends on your access patterns etc.
6keZbCECT2uB · 7h ago
I've been meaning to look at Iceoryx as a way to wrap this.
Pytorch multiprocessing queues work this way, but it is hard for the sender to ensure the data is already in shared memory, so it often has a copy. It is also common for buffers to not be reused, so that can end up a bottleneck, but it can, in principle, be limited by the rate of sending fds.
throwaway81523 · 8h ago
This is one of mmap's designed-for use cases. Look at DPDK maybe.
https://www.boost.org/doc/libs/1_46_0/doc/html/interprocess/...
> The operation of copying data is super easy to parallelize across multiple threads. […] This will make the copy super-fast especially if the CPU has a large core count.
I seriously doubt that. Unless you have a NUMA system, a single core in a desktop CPU can easily saturate the bandwidth of the system RAM controller. If you can avoid going through main memory – e.g., when copying between the L2 caches of different cores – multi-threading can speed things up. But then you need precise knowledge of your program's memory access behavior, and this is outside the scope of a general-purpose memcpy.
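For concreteness, the kind of split being proposed looks roughly like this (my sketch; as argued above, it only pays off if a single core isn't already limited by the memory controller):

```c++
#include <cstring>
#include <cstddef>
#include <thread>
#include <vector>

// Divide one large copy into nthreads contiguous chunks.
void parallel_memcpy(void* dst, const void* src, std::size_t n, unsigned nthreads) {
    if (nthreads == 0) nthreads = 1;
    std::vector<std::thread> workers;
    std::size_t chunk = n / nthreads;
    for (unsigned i = 0; i < nthreads; ++i) {
        std::size_t off = i * chunk;
        std::size_t len = (i == nthreads - 1) ? n - off : chunk;  // last chunk takes the remainder
        workers.emplace_back([=] {
            std::memcpy(static_cast<char*>(dst) + off,
                        static_cast<const char*>(src) + off, len);
        });
    }
    for (auto& t : workers) t.join();
}
```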
bob1029 · 4h ago
> a single core in a desktop CPU can easily saturate the bandwidth of the system RAM controller.
Modern x86 machines offer far more memory bandwidth than what a single core can consume. The entire architecture is designed on purpose to ensure this.
The interesting thing to note is that this has not always been the case. The 2010s is when the transition occurred.
zozbot234 · 2h ago
Some modern non-x86 machines (and maybe even some very recent x86 ones) can't even saturate their system memory bandwidth with all of their CPU cores running at full tilt; they'd need to combine both CPU and non-CPU access for absolute best performance.
hugh-avherald · 6h ago
I've experienced modest but significant improvements in speed using very basic pragma omp section style parallelizing of this sort of thing.
adwn · 6h ago
Do you remember any specifics? For example, the size of the copy, whether it was a NUMA system, or the total bandwidth of your system RAM?
Orangeair · 7h ago
[2020]
wolfi1 · 7h ago
the "dumb of perf": some Freudian Slip?
_ZeD_ · 7h ago
soo... time to send a patch to glibc?
bawolff · 6h ago
Given their conclusion that glibc was the best option for most use cases, i would say no.