An optimizing compiler doesn't help much with long instruction dependencies

32 points by ingve | 6 comments | 6/1/2025, 7:17:10 AM | johnnysswlab.com

Comments (6)

solarexplorer · 1d ago
This is not a good article and the content doesn't support the claim in the title. It talks about memory latency and how it negatively affects instruction level parallelism, but doesn't offer any solution or advice, except for offering their own (paid) service...
adrian_b · 1d ago
Memory latency only matters in chains of dependent instructions.

Otherwise the performance is limited by the memory transfer throughput, not by the latency of individual memory accesses.

The article demonstrates the difference between these 2 cases, even if its title could have been better.

Because the latency of memory loads, whether from main memory or from the L3 cache, is many times greater than the latency of any other kind of CPU instruction, this effect is more visible in programs with many memory loads, like the examples from the article, than in programs using other instructions with long latencies.
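As a minimal sketch of that difference (hypothetical code, not the article's benchmark; Node, sum_array and sum_list are made-up names), compare independent loads with a dependent pointer chase:

    #include <vector>

    // Independent loads: every element's address is known up front, so the
    // CPU can keep many loads in flight and throughput becomes the limit.
    double sum_array(const std::vector<double>& v) {
        double s = 0.0;
        for (double x : v)
            s += x;
        return s;
    }

    // Dependent loads: the address of each node comes out of the previous
    // load, so the full load latency is paid once per node, serially.
    struct Node { double value; Node* next; };

    double sum_list(const Node* n) {
        double s = 0.0;
        for (; n != nullptr; n = n->next)
            s += n->value;
        return s;
    }

In the second loop the out-of-order machinery has nothing else from the chain to work on while it waits for n->next to arrive.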

jjtheblunt · 23h ago
Aren't you overlooking memory latency mattering in mmap (MMU) page miss contexts?
adrian_b · 9h ago
A page miss in the TLB that happens for a memory load is just a memory load whose latency is many times greater than its normal latency, which is already very large.

As with normal memory loads, the effect of a page miss depends on how the load is used. If the load is part of a long dependency chain, the CPU cannot find other instructions to execute concurrently while the chain is stalled waiting for the load result; if only a few instructions depend on the load, the CPU goes ahead and executes other parts of the program.

Page misses in the TLB do not cause any new behavior, but the very long latencies corresponding to them exacerbate the effects of long dependency chains. With page misses, even a relatively short dependency chain may not allow the CPU to find enough independent instructions to be executed in order to avoid an execution stall.
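To make that concrete, a hypothetical sketch (not from the article): a pointer chase laid out so that, assuming page-aligned allocation, every hop lands on a different 4 KiB page, so each dependent load can pay a TLB miss on top of the cache miss and nothing in the chain can be overlapped with it:

    #include <cstddef>

    // Each node is padded to a full page, so consecutive nodes never share
    // a TLB entry (assuming the nodes are allocated page-aligned).
    struct PageNode {
        PageNode* next;
        char pad[4096 - sizeof(PageNode*)];
    };

    // Serial chase: every load's address comes from the previous load, so
    // the TLB-miss and cache-miss latencies add up one after the other.
    std::size_t chase(const PageNode* n) {
        std::size_t hops = 0;
        for (; n != nullptr; n = n->next)
            ++hops;
        return hops;
    }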

With certain operating systems that choose to lazily load memory pages from an SSD/HDD, or that implement a virtual memory capacity greater than the physical memory capacity, there is a different kind of page miss: a miss in the memory currently mapped as valid by the OS, which results in an exception handled by the operating system while the executing program is suspended. There are also mostly obsolete CPUs where a TLB page miss causes an exception instead of being handled by dedicated hardware. In these cases, which I assume you are referring to by mentioning mmap, it does not matter whether the exception-causing instruction was part of a long dependency chain or not: the slowdown from exception handling is the same either way.

dahart · 1d ago
Even though the example is contrived, and hopefully not too many people are doing massive reductions using a linked list of random pointers, it would still be nice to offer some suggestion on what alternatives there are. Maybe it’s faster to collect all the pointers into an array and use the first loop? If ‘list’ entries are consecutive in memory, you can ignore the list order and consume them in memory order. Collecting and sorting the pointers might improve the cache hit rates, especially if the values are dense in memory. For anything performance sensitive, avoiding linked lists, especially non-intrusive linked lists, is often a good idea, right?
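A rough sketch of that idea (hypothetical code, assuming a Node with a value and a next pointer, which may not match the article's exact layout): gather the node pointers once, optionally sort them by address, then sum through the array, where the loads no longer depend on each other:

    #include <algorithm>
    #include <functional>
    #include <vector>

    struct Node { double value; Node* next; };  // assumed layout

    double sum_via_pointer_array(const Node* head) {
        // One serial pass to gather the pointers into an array.
        std::vector<const Node*> ptrs;
        for (const Node* n = head; n != nullptr; n = n->next)
            ptrs.push_back(n);

        // Optional: sort by address to improve cache/TLB locality.
        std::sort(ptrs.begin(), ptrs.end(), std::less<const Node*>());

        // These loads are independent of one another, so they can overlap.
        double s = 0.0;
        for (const Node* p : ptrs)
            s += p->value;
        return s;
    }

Whether the serial gather pass pays for itself presumably depends on how many times the reduction is repeated over the same list.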

What’s with the “if (idx == NULLPTR)” block? The loop won’t access an entry outside the list, so this appears to be adding unnecessary instructions and unnecessary divergence. (And maybe even unnecessary dependencies?) Does demonstrating the performance problem depend on having this code in the loop? I hope not, but I’m very curious why it’s there.

A couple of other tiny nits - the first 2 graphs should have a Y axis that starts at zero! That won't compromise them in any way. There should be a very compelling reason not to show ratios on a graph that starts from zero, and these don't have any such reason. And I'm curious why the X axis is factors of 8 except the last two, which seem strangely arbitrary?

MatthiasWandel · 14h ago
The bottleneck with the pointer table may be the summation. While the fetches of elements can be parallelized, the summation cannot, as each addition depends on the result of the previous addition being available.

Some experiments I have done with summation code showed a considerable speedup from summing odd and even values into separate bins. Although this only applies to code that doesn't too closely resemble a signal processing algorithm, as the compiler can otherwise do that optimization itself.
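A minimal sketch of that technique (hypothetical code, not the code from those experiments): two accumulators, each carrying its own, shorter add dependency chain:

    #include <cstddef>
    #include <vector>

    double sum_two_bins(const std::vector<double>& v) {
        double even = 0.0, odd = 0.0;   // two independent dependency chains
        std::size_t i = 0;
        for (; i + 1 < v.size(); i += 2) {
            even += v[i];       // depends only on the previous 'even'
            odd  += v[i + 1];   // depends only on the previous 'odd'
        }
        if (i < v.size())
            even += v[i];       // leftover element when the size is odd
        return even + odd;
    }

With floating-point types a compiler normally won't reassociate the additions like this on its own, since that can change the rounded result, unless something like -ffast-math is enabled.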

This is covered in part of my video titled "new computers don't speed up old code".