Ask HN: Why hasn't x86 caught up with Apple M series?
My daily workhorse is an M1 Pro that I purchased on release day, and it has been one of the best tech purchases I have made; even now it deals with anything I throw at it. My daily workload regularly has an Android emulator, the iOS simulator, and a number of Docker containers running simultaneously, and I never hear the fans. Battery life has taken a bit of a hit but is still very respectable.
I wanted a new personal laptop and was debating between a MacBook Air or a Framework 13 with Linux. I wanted to lean into learning something new, so I went with the Framework, and I must admit I am regretting it a bit.
The M1 was released back in 2020, and I bought the Ryzen AI 340, one of AMD's newest 2025 chips, so AMD has had five years of extra development. I had expected them to get close to the M1 in terms of battery efficiency and thermals.
The Ryzen is built on TSMC's N4P process, compared to the M1's older N5. I managed to find a TSMC press release showing the performance/efficiency gains of the newer process: “When compared to N5, N4P offers users a reported +11% performance boost or a 22% reduction in power consumption. Beyond that, N4P can offer users a 6% increase in transistor density over N5”
I am sorely disappointed; using the Framework feels like using an older Intel-based Mac. If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, and opening a YouTube video will often spin up the fans.
Why haven’t AMD/Intel been able to catch up? Is x86 simply unable to keep up with the ARM architecture? When can we expect an x86 laptop chip to match the M1 in efficiency/thermals?!
To be fair, I haven’t tried Windows on the Framework yet; it might be my Linux setup being inefficient.
Cheers, Stephen
However, x86's costlier instruction decoding doesn't really hold up as the cause of the difference. The Zen 4/5 chips, for example, source the vast majority of their instructions out of their micro-op cache, where the instructions have already been decoded. That also saves power; even on ARM, decoders take power.
People have been trying to figure out the "secret sauce" since the M chips were introduced. In my opinion, it's a combination of:
1) The Apple engineers did a superb job creating a well-balanced architecture.
2) Being close to the memory subsystem, with lots of bandwidth and buffers deep enough to actually use it, is great. For example, my old M2 Pro MacBook has more than twice the memory bandwidth of the current best desktop CPU, the Zen 5 9950X (rough numbers after this list). That's absurd, but here we are...
3) AMD and Intel bias heavily toward the costly side of the watts-versus-performance curve. Even the compact Zen cores are optimized more for area than for wattage. I'm curious what a true low-power Zen core (akin to the Apple E-cores) would do.
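To put rough numbers on the bandwidth point in (2), assuming Apple's quoted 200 GB/s figure for the M2 Pro and the 9950X at its officially supported dual-channel DDR5-5600:

    9950X:  2 channels x 8 bytes x 5600 MT/s = 89.6 GB/s
    M2 Pro: 200 GB/s (Apple's spec)
    ratio:  200 / 89.6 ~= 2.2x

Faster memory kits narrow the gap a bit, but at stock settings it really is more than 2x.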
As a layman, there’s no way I’m running something called “Pop!_OS” over macOS.
I feel like I've tried several times to get suspend working reliably in both Linux and Windows on various laptops and have never actually found a dependable solution (often resulting in a hot, dead laptop in my backpack).
I ended up moving to hybrid sleep, where the machine suspends for an hour (allowing immediate wake-up) and then hibernates completely. It’s a decent compromise, and I’ve never once had an issue with resume from suspend or hibernate, nor have I ever had it randomly wake up and fry itself in a backpack or unexpectedly end up with a dead battery. The rough setup is sketched below.
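On a systemd distro the equivalent is roughly this (this mirrors my setup; the one-hour delay is just my preference, and you need a swap partition or file big enough to hold RAM for the hibernate step):

    # /etc/systemd/sleep.conf
    [Sleep]
    HibernateDelaySec=60min

    # then suspend with:
    systemctl suspend-then-hibernate

You can also point the lid-switch action at suspend-then-hibernate via HandleLidSwitch= in /etc/systemd/logind.conf.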
My work M1 is still superior in this regard but it is an acceptable compromise.
notebookcheck.com does pretty comprehensive battery and power efficiency testing - not of every single device, but they usually include a pretty good sample of the popular options.
Most Linux distributions are not well tuned out of the box, because tuning is too device-specific. Spending a few minutes writing custom udev rules, with the aid of powertop, can reduce heat and power usage dramatically (example below). Another factor is Safari, which is significantly more efficient than Firefox and Chromium; to compensate, a barebones setup with few running services can get you quite far. I can get more than 10 hours of battery from a recent ThinkPad.
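As a concrete example, the tunables powertop suggests can be made permanent with a rule like this (the filename is arbitrary, and whether each knob is safe depends on your hardware):

    # /etc/udev/rules.d/90-powersave.rules
    # enable runtime power management for all PCI devices
    ACTION=="add", SUBSYSTEM=="pci", ATTR{power/control}="auto"
    # enable SATA link power management
    ACTION=="add", SUBSYSTEM=="scsi_host", KERNEL=="host*", ATTR{link_power_management_policy}="med_power_with_dipm"

Running powertop --auto-tune at boot achieves much the same thing, but explicit rules are easier to audit and to selectively revert.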
Am learning x86 in order to build nice software for the Framework 12's i3-1315U (Raptor Lake), going into the optimization manuals for Intel's E-cores (apparently Atom-derived) and AMD's Zen 5c cores. The efficiency cores on the M1 MacBook Pro are awesome. Getting Debian or Ubuntu with KDE to run like that on a FW12 will be mind-boggling.
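As an aside, if you want to see the hybrid topology those manuals describe, recent kernels expose it on Intel hybrid parts (paths as on my machine; verify on your kernel):

    cat /sys/devices/cpu_core/cpus   # P-cores (Raptor Cove)
    cat /sys/devices/cpu_atom/cpus   # E-cores (Gracemont)

Handy with taskset when you want to benchmark the same code pinned to each core type.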
Imagine that you made an FPGA do x86 work, and then you wanted to optimize libopenssl, or libgl, or libc. Would you restrict yourself to only modifying the source code of the libraries but not the FPGA, or would you modify the processor to take advantage of new capabilities?
For a made-up example: when the iPhone 27 comes out, it won’t support booting iOS 26 or earlier, because the drivers necessary to light it up aren’t published yet; and, similarly, it can have 3% less battery weight because Apple optimized the display controller to DMA more efficiently through coordinated changes to the M6 processor and the XNU/Darwin 26 DisplayController dylib.
Neither Linux, Windows, nor Intel have shown any capability to plan and execute such a strategy outside of video codecs and network I/O cards. GPU hardware acceleration is tightly controlled and defended by AMD and Nvidia, who want nothing to do with any shared strategy, and neither Microsoft nor Linux generally have shown any interest in hardware-accelerating the core system to date; though one could theorize that the Xbox is exempt from that, especially given the Pluton chip.
I imagine Valve will eventually do this, most likely working with AMD to get custom silicon that implements custom hardware accelerations inside the Linux kernel, ones that are both open source for anyone to use and utterly useless elsewhere, since their correct operation hinges on the custom silicon. I suspect Microsoft, Nintendo, and Sony already do this with their gaming consoles, but I can’t offer any certainty on this paragraph of speculation.
x86 isn’t able to keep up because x86 isn’t updated annually across software and hardware alike. M1 is what x86 could have been if it were versioned and updated, without backwards compatibility, as often as Arm was. It would be like saying “Intel’s 2026 processors all ship with AVX-1024 and hardware-accelerated DMA, and the OS kernel (and any apps that want the full performance gains) must be compiled for the new ABI to boot on them”. The wreckage across the x86 ecosystem would be immense, and Microsoft would boycott them outright to avoid having to work harder to keep up, just like Adobe did with the Apple M1, at least until their userbase started canceling subscriptions en masse.
That’s why there are so many Arm Linux architectures: for Arm, this is just a fact of everyday life, and that’s what gave the M1 such a leg up over x86. Not having to support anything older than your release date means you can focus on the sort of boring incremental optimizations that wouldn’t be permissible in the “must run assembly code written twenty years ago” environment assumed by Linux and Windows today.
In terms of performance, though, those N4P Ryzen chips have knocked it out of the park for my use cases. It's still a great architecture for desktop/datacenter applications.
The only really annoying thing I've found with the P14s is the CrowdStrike junk killing battery life when it pins several cores at 100% for an hour; that never happened on macOS. These are corporate-managed devices I have no say in, and the Ubuntu flavor of the corporate malware is clearly implemented far worse in terms of efficiency and impact on battery life.
I recently built myself a 7970X Threadripper and the perf/$ is quite good even for a Threadripper. If you build a gaming-oriented 16-core Ryzen instead, the perf/$ is ridiculously good.
No personal experience with Frameworks here, but I'm pretty sure Jon Blow had a modern Framework laptop he ranted about a bunch on his coding livestreams. I don't have the impression that Framework should be held up as the optimal-performing x86 laptop vendor.
ARM is great. Those M machines are the only thing I could buy used and put Linux on.
This hasn't been true for decades. Mainframes are fast because they have proprietary architectures that are purpose-built for high throughput and redundancy, not because they're RISC. The pre-eminent mainframe architecture these days (z/Architecture) is categorized as CISC.
Processors are insanely complicated these days. Branch prediction, instruction decoding, micro-ops, reordering, speculative execution, cache tiering strategies... I could go on and on but you get the idea. It's no longer as obvious as "RISC -> orthogonal addressing and short instructions -> speed".
Even though this was the case for the most part during the entire history of PPC Macs (I owned two during those years).
https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter
TIL:
https://en.wikipedia.org/wiki/Monopole_(company)#Racing_cars
I was kind of hoping that there was some little-known x84 standard that never saw the light of day, but instead all I found was classic French racing cars.
Who here would be interested in testing a distro like Debian with builds optimized for the Framework devices?
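For a taste of what that involves, the low-effort version is rebuilding individual Debian packages with CPU-specific flags (zlib is just an example package; -march=alderlake matches the Raptor Lake parts in the FW12, since they share Alder Lake's ISA):

    # rebuild a package with tuned flags on Debian/Ubuntu
    # (assumes deb-src lines are enabled in sources.list)
    export DEB_CFLAGS_APPEND="-march=alderlake"
    export DEB_CXXFLAGS_APPEND="-march=alderlake"
    apt-get source zlib1g && cd zlib-*
    dpkg-buildpackage -us -uc

A whole optimized distro is the same idea pushed through the archive's build infrastructure.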
Once you normalize for either efficiency cores or performance cores, you'll quickly realize that the node lead is the largest advantage Apple had. Those guys were right; the writing was on the wall in 2019.