Why does it feel like computers are not getting faster?

calamaridude · 7/16/2025, 5:30:38 PM · 20 comments
I use an M2 Pro with not that much running, and it feels like, despite advancements in chip performance (especially the M series), the speedup is barely noticeable for browsing around with web browser tabs open.

Comments (20)

_wire_ · 12h ago
The speed of the UI was established decades ago; there's no reason for it to get faster than the rate at which a person feels satisfied when interacting with the system.

But system performance improvement is real. There are ongoing, enormous advances in compute and I/O power which allow new and improved media handling, more complex UIs and workloads, and overall freer tradeoffs between memory capacity and the time required to complete work.

In other words, the system is always getting faster, but you're not, so a more powerful system allows more work to be done within your attention window.

Upon this realization, you might become curious about the relationship between system performance and your perception of speed. Without going into detail, you might first suspect that the relationship is non-intuitive, much in the way that the speed of transportation and the perception of distance traveled are not intuitive. A walk to the corner market is by every measure shorter than a flight across the country, yet your perception of scale does not lead to the direct apprehension that the flight is 500 times faster than your stride. The same goes for computer power.

Another unintuitive aspect of performance is that speed relationships within computing systems are subject to a time vs. distance tradeoff, with interactions generally slowing with distance. If your sense of performance comes from interacting with web sites, then while the parts of your PC go really fast, the stuff you're accessing is arbitrarily far away and subject to the sum of all the delays over that distance, only some of which are affected by the speed of local processing.

There's a famous law (Amdahl's law, coined by IBM mainframe architect Gene Amdahl) that observes that no matter how many things a system can do at once, the time to complete a job is limited by the time required for those things that must occur in sequence. For your PC, the wifi connection is a tube through which all the stuff you want to access is ordered sequentially. Even if your PC and the computers you access were infinitely fast, it would still take time to get the data through that tube. So you might suspect that your experience accessing the web is generally tube-limited, not PC-limited.
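
To put rough numbers on it (all invented): say a page load is 200 ms of local work plus 800 ms of waiting on the tube. Amdahl's law then caps the benefit of a faster machine at 1.25x, no matter how fast it gets:

    # Amdahl's law: if only a fraction p of a job benefits from a speedup s,
    # the overall speedup is capped by the part that doesn't benefit.
    def amdahl_speedup(p: float, s: float) -> float:
        return 1.0 / ((1.0 - p) + p / s)

    local_ms, network_ms = 200, 800          # invented page-load split
    p = local_ms / (local_ms + network_ms)   # fraction a faster CPU can help
    for s in (2, 10, 1_000_000):             # 2x, 10x, near-infinite CPU
        print(f"{s:>9,}x CPU -> {amdahl_speedup(p, s):.2f}x overall")
    # Tops out at 1 / (1 - 0.2) = 1.25x: tube-limited, not PC-limited.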

calamaridude · 8h ago
That's nuanced and a nice way to put it. Awfully sad that we don't treat the user interface as a priority. Laggy windows, lag in general. It often is cumbersome and slow just to switch a window, on an M2 Pro!
jleyank · 13h ago
Because software gets slower/larger faster than hardware gets faster. Particularly as the last 10+ years have seen improvements in parallel performance (hard) rather than clock speed/scalar (easy).
calamaridude · 12h ago
That makes sense.
pwg · 13h ago
Bloated software currently expands to consume CPU performance faster than hardware advancements can increase it.

Gone are the days when a CPU from two years later brought 3x or 5x the performance to a new system.

calamaridude · 12h ago
So the Geekbench single-core scores are irrelevant?
ashwinsundar · 12h ago
calamaridude · 12h ago
Checks out. Is there evidence of this?
ashwinsundar · 3h ago
It's just a saying; it's not provable.
apothegm · 13h ago
Most of the speed improvements these days seem to be in greater parallelization rather than raw clock speed. That doesn’t affect web browsing much beyond the first few threads. Meanwhile, web front ends are just plain heavy, and network latency often has more effect for client/server apps than client computing power.
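
As a toy illustration (all numbers invented), model a page load as fixed network round trips plus local work, only a fraction of which parallelizes; cores beyond the first few barely move the total:

    # Toy page-load model: fixed network round trips plus local parse/layout
    # work, of which only a fraction spreads across cores. Numbers invented.
    RTT_MS = 50                # one client/server round trip
    ROUND_TRIPS = 8            # DNS, TLS, HTML, CSS, JS, images...
    LOCAL_WORK_MS = 300        # single-core parse/layout/script time
    PARALLEL_FRACTION = 0.4    # share of local work the browser can spread out

    def load_time_ms(cores: int) -> float:
        serial = LOCAL_WORK_MS * (1 - PARALLEL_FRACTION)
        parallel = LOCAL_WORK_MS * PARALLEL_FRACTION / cores
        return ROUND_TRIPS * RTT_MS + serial + parallel

    for cores in (1, 4, 16):
        print(f"{cores:>2} cores -> {load_time_ms(cores):.0f} ms")
    # 1 core: 700 ms, 4 cores: 610 ms, 16 cores: 588 ms
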
calamaridude · 12h ago
Switching tabs on my MacBook Pro feels the same as it did 10 years ago! Often slower!
lastcat743 · 12h ago
Unless you're gaming or running Ollama, few activities will stress your resources.

Any latency you experience comes from the many layers beyond your device's capabilities.

I believe the hardware is plenty fast, and we're better off erring toward economizing power consumption than toward generative, predictive web preloading.

calamaridude · 12h ago
I have about 14 GB of memory usage with just Alacritty, Orion, Discord, iMessage, and Mail, with most of it coming from Alacritty and Orion.
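
For what it's worth, here's a rough way to check where it goes (assuming the psutil package is installed via pip; RSS double-counts shared memory, so treat these as ballpark figures):

    # List the top 10 memory consumers by resident set size (RSS).
    import psutil

    procs = []
    for p in psutil.process_iter(['name', 'memory_info']):
        mem = p.info['memory_info']
        if mem is not None:              # skip processes we can't inspect
            procs.append((mem.rss, p.info['name'] or '?'))

    for rss, name in sorted(procs, reverse=True)[:10]:
        print(f"{rss / 2**30:6.2f} GB  {name}")
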
d00mB0t · 13h ago
calamaridude · 12h ago
umm
bigyabai · 13h ago
Apple's year-over-year performance improvements are still in lockstep with the industry's lethargic pace.
calamaridude · 12h ago
Is that so? What about the whole M-series craze vs. Intel's architecture?
bigyabai · 10h ago
That craze, as documented at the time, was due to the jump from a 10nm chip to a 5nm one. The way chips scale, that could add up to a 4x efficiency improvement in some areas; thus the M1 was faster than anything that wasn't fabbed on 7nm/5nm silicon. Even back then, though, there were non-Intel chips that were still competitive with Apple Silicon, like the Ryzen 7 4800U/5800U laptops.
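
The arithmetic behind that 4x figure is just the classic density rule of thumb (node names are marketing labels only loosely tied to physical dimensions, so treat it as an upper bound):

    # Transistor density classically scales with the inverse square of the
    # feature size: a 10nm -> 5nm shrink gives up to (10/5)**2 = 4x per area.
    old_nm, new_nm = 10, 5
    print(f"~{(old_nm / new_nm) ** 2:.0f}x density")  # ~4x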

Fast-forward to today, and Apple is arguably in the same boat as Intel. They have mediocre year-over-year density improvements (only a 1.5x-2x density improvement most years) and can't radically change their IPC with ISA updates. They pay hand-over-fist for TSMC's best nodes but lose out on profit margins to Nvidia's datacenter products. Adding insult to injury, Apple arguably has the worst modern desktop GPU architecture, which compounds the acceleration issues ARM has with SIMD and vector workloads. Apple Silicon is backed into a corner, from a design perspective. Ye olde "I'm a RISC machine that wishes I had CISC architecture" problem.

Still a great chip for browsing the web or editing video. But again, people cautioned back at the M1's launch that ARM is hardly a silver bullet. We still have x86 laptops because for >90% of PC use-cases, ARM isn't worth the licensing cost.

calamaridude · 8h ago
Never heard of a licensing cost for that fundamental a technology. Do you know approximately how much goes toward ARM licensing vs. x86 licensing (if that even exists)?
bigyabai · 7h ago
It's hard to get a solid grasp on what Apple pays for ARM, since their licensing is all conducted behind closed doors. It's worth noting that ARM (the ISA technology) is owned by SoftBank, a holding firm which Apple has invested in quite heavily. Some reports claim that they pay as little as $0.30 a chip: https://www.tomshardware.com/news/apple-pays-arm-less-than-3...

x86 licensing costs are complicated. The basic old 16-bit architecture is very well-documented, and while it's proprietary it's still implemented in hundreds of products and clones. Modern 32-bit and 64-bit implementations have a lot more ISA extensions that would be costly to license and hard to develop on your own. As a result, modern x86 designs can really only be made by licensing Intel or AMD technology.