I realize this has not much to do with CPU choice per se, but I'm still gonna leave this recommendation here for people who like to build PCs to get stuff done with :) Since I've been able to afford it and the market has had them available, I've been buying desktop systems with proper ECC support.
I've been chasing flimsy but very annoying stability problems (some, admittedly, due to overclocking in my younger years, when it still had a tangible payoff) often enough on systems I had built that taking this one BIG potential cause out of the equation is worth the few dozen extra bucks I have to spend on ECC-capable gear many times over.
Trying to validate an ECC-less platform's stability is surprisingly hard, because memtest and friends just aren't very reliable at detecting the more subtle problems. Prime95, y-cruncher and Linpack (in increasing order of effectiveness) are better than specialized memory testing software in my experience, but they are not perfect, either.
Most AMD CPUs (but not their APUs with potent iGPUs - there, you will have to buy the "PRO" variants) these days have full support for ECC UDIMMs. If your mainboard vendor also plays ball - annoyingly, only a minority of them enables ECC support in their firmware, so always check for that before buying! - there's not much that can prevent you from having that stability enhancement and reassuring peace of mind.
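If you want to check whether Linux actually sees working ECC on a board like that, the EDAC sysfs tree is the usual place to look. A small sketch (these are the standard EDAC sysfs paths; the `mc*` directories only appear once an edac driver for your platform is loaded):

```python
from pathlib import Path

# Standard Linux EDAC sysfs location; one mc* directory appears per
# ECC-capable memory controller once the right edac driver is loaded.
MC = Path("/sys/devices/system/edac/mc")

def ecc_status() -> str:
    """Report what the Linux EDAC subsystem sees, if anything."""
    controllers = sorted(MC.glob("mc[0-9]*")) if MC.exists() else []
    if not controllers:
        return "no EDAC memory controllers (no ECC, or edac driver not loaded)"
    lines = []
    for mc in controllers:
        name = (mc / "mc_name").read_text().strip()
        ce = (mc / "ce_count").read_text().strip()   # corrected errors so far
        ue = (mc / "ue_count").read_text().strip()   # uncorrected errors so far
        lines.append(f"{mc.name}: {name}, corrected={ce}, uncorrected={ue}")
    return "\n".join(lines)

print(ecc_status())
```

A non-zero `ce_count` that keeps climbing is the telltale sign that ECC is both enabled and actually earning its keep.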
> only a minority of them enables ECC support in their firmware, so always check for that before buying!
This is the annoying part.
That AMD permits ECC is a truly fantastic situation, but whether it's actually supported by a given motherboard is a coin flip, and worse: it's often not advertised even when it is available.
I have an ASUS PRIME TRX40 PRO, and the tech specs say that it can run ECC and non-ECC DIMMs, but not whether ECC will be available to the operating system, merely that the DIMMs will work.
It's much more hit and miss in reality than it should be, though this motherboard was a pricey one: one can't use price as a proxy for features.
c0l0 · 41m ago
Usually, if a vendor's spec sheet for a (SOHO/consumer-grade) motherboard mentions ECC-UDIMM explicitly in its memory compatibility section, and (but this is a more recent development afaict) DOES NOT specify something like "operating in non-ECC mode only" at the same time, then you will have proper ECC (and therefore EDAC and RAS) support in Linux, if the kernel version you have can already deal with ECC on your platform in general.
I would assume your particular motherboard to operate with proper SECDED+-level ECC if you have capable, compatible DIMM, enable ECC mode in the firmware, and boot an OS kernel that can make sense of it all.
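For the curious: SECDED means single error corrected, double error detected. Here is a toy sketch of the mechanics using a tiny extended Hamming(8,4) code; real ECC DIMMs use a (72,64) code, but the principle is identical:

```python
# Toy SECDED: extended Hamming(8,4). One flipped bit is corrected,
# two flipped bits are detected as uncorrectable (no silent corruption).

def encode(nibble):
    """nibble: list of 4 data bits -> 8-bit codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over codeword positions 2,3,6,7
    p4 = d2 ^ d3 ^ d4          # parity over codeword positions 4,5,6,7
    word = [p1, p2, d1, p4, d2, d3, d4]
    p0 = 0
    for b in word:
        p0 ^= b                # overall parity enables double-error detection
    return [p0] + word

def decode(word):
    """Return (status, data), status in {'ok', 'corrected', 'uncorrectable'}."""
    p0, w = word[0], list(word[1:])
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s4 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s4    # 1-based position of a single bad bit
    overall = p0
    for b in w:
        overall ^= b                   # 0 iff total parity is consistent
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:                 # odd number of flips: fixable
        if syndrome:
            w[syndrome - 1] ^= 1       # flip the bad bit back
        status = "corrected"
    else:                              # even flips but non-zero syndrome
        status = "uncorrectable"       # two errors: detect, don't guess
    return status, [w[2], w[4], w[5], w[6]]
```

This is exactly why a single stuck bit shows up as an endless stream of harmless "Corrected error" log lines, while a double fault triggers an uncorrectable-error machine check instead.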
consp · 46m ago
Isn't it mostly a peace-of-mind thing? I've never seen an ECC error on my home server, which has plenty of memory in use and runs longer than my desktop. Maybe it's more common with higher-clocked, near-the-limit desktop PCs.
Also: DDR5 comes with some misleading ECC marketing, because the memory standard has an on-die error correction scheme built in, which does not protect data on its way to the CPU. Don't fall for it.
c0l0 · 33m ago
I see a particular ECC error at least weekly on my home desktop system, because one of my DIMMs doesn't like the (out of spec) clock rate that I make it operate at. Looks like this:
94 2025-08-26 01:49:40 +0200 error: Corrected error, no action required., CPU 2, bank Unified Memory Controller (bank=18), mcg mcgstatus=0, mci CECC, memory_channel=1,csrow=0, mcgcap=0x0000011c, status=0x9c2040000000011b, addr=0x36e701dc0, misc=0xd01a000101000000, walltime=0x68aea758, cpuid=0x00a50f00, bank=0x00000012
95 2025-09-01 09:41:50 +0200 error: Corrected error, no action required., CPU 2, bank Unified Memory Controller (bank=18), mcg mcgstatus=0, mci CECC, memory_channel=1,csrow=0, mcgcap=0x0000011c, status=0x9c2040000000011b, addr=0x36e701dc0, misc=0xd01a000101000000, walltime=0x68b80667, cpuid=0x00a50f00, bank=0x00000012
(this is `sudo ras-mc-ctl --errors` output)
It's always the same address, and always a Corrected Error (obviously, otherwise my kernel would panic). However, operating my system's memory at this clock and latency boosts x265 encoding performance (just one of the benchmarks I picked when trying to figure out how to handle this particular tradeoff) by about 12%. That is an improvement I am willing to stomach the extra risk of effectively overclocking the memory module beyond its comfort zone for, given that I can fully mitigate it by virtue of properly working ECC.
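As an aside, output in that format is easy to aggregate to spot patterns like "always the same address". A quick sketch; the regex below is written against the two sample lines above, not against any documented `ras-mc-ctl` output guarantee:

```python
import re
from collections import Counter

# Matches the fields of interest in `ras-mc-ctl --errors` lines of the
# shape shown above (AMD unified memory controller, corrected errors).
LINE_RE = re.compile(
    r"error: (?P<kind>[^,]+),.*?"
    r"memory_channel=(?P<channel>\d+),csrow=(?P<csrow>\d+).*?"
    r"addr=(?P<addr>0x[0-9a-f]+)"
)

def count_errors(log: str) -> Counter:
    """Count logged errors per (channel, csrow, address) triple."""
    hits = Counter()
    for m in LINE_RE.finditer(log):
        hits[(int(m["channel"]), int(m["csrow"]), m["addr"])] += 1
    return hits
```

One address dominating the counter points at a single marginal cell (or an out-of-spec clock, as here); errors scattered across many addresses would suggest a genuinely failing module.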
Hendrikto · 3m ago
Running your RAM so far out of spec that it breaks down regularly, where do you get the confidence that ECC itself will still work correctly?
Also: could you not have just bought slightly faster RAM, given the premium for ECC?
swinglock · 51m ago
Excellent point. It's a shame and a travesty that data integrity is still mostly locked away inside servers, leaving most other computing devices effectively toys: early prototype demos that were never finished, yet are sold forever at inflated prices.
I wish AMD would make ECC a properly advertised feature with clear motherboard support. At least DDR5 has some level of ECC.
danieldk · 2h ago
I feel like both Intel and AMD are not doing great in the desktop CPU stability department. I made a machine with a Ryzen 9900X a while back and it had the issue that it would freeze when idling. A few years before I had a 5950X that would regularly crash under load (luckily it was a prebuilt, so it was ultimately fixed).
When you do not have a bunch of components ready to swap out it is also really hard to debug these issues. Sometimes it’s something completely different like the PSU. After the last issues, I decided to buy a prebuilt (ThinkStation) with on-site service. The cooling is a bit worse, etc., but if issues come up, I don’t have to spend a lot of time debugging them.
Random other comment: when comparing CPUs, a sad observation was that even a passively cooled M4 is faster than a lot of desktop CPUs (typically single-threaded, sometimes also multi-threaded).
johnisgood · 2h ago
This does not fill me with much hope. What ought I even buy at this point then, I wonder. I have a ~13-year-old Intel CPU which lacks AVX2 (and I need it by now) and I thought of buying a new desktop (items separately, of course), but it is crazy to me that a machine would freeze because of the CPU going idle. That was never an issue in my case. I guess I can only hope it is not going to be a problem once I've finished building my PC. :|
On what metric ought I buy a CPU these days? Should I care about reviews? I am fine with a mid-range CPU, for what it is worth, and I thought of the AMD Ryzen 7 5700 or AMD Ryzen 5 5600GT or anything with a similar price tag. They might even be lower-end by now?
hhh · 2h ago
Just buy an AMD CPU. One person’s experience isn’t the world. Nobody in my circle has had an issue with any chip from AMD in recent time (10 years).
Intel is just bad at the moment and not even worth touching.
danieldk · 1h ago
I agree that Intel is bad at the moment (especially with the 13th and 14th gen self-destruct issues). But unfortunately I also know plenty of people with issues with AMD systems.
And it's not bad power quality on the mains, as someone suggested (it's excellent here), or something 'in the air' (whatever that means), if it happens very quickly after buying.
I would guess that a lot of it comes from bad firmware/mainboards, etc. like the recent issue with ASRock mainboards destroying Ryzen 9000-series CPUs: https://www.techspot.com/news/108120-asrock-confirms-ryzen-9... Anyone who uses Linux and has dealt with bad ACPI bugs, etc. knows that a lot of these mainboards probably have crap firmware.
I should also say that I had a Ryzen 3700X and 5900X many years back and two laptops with a Ryzen CPU and they have been awesome.
tester756 · 1h ago
This is funny, because recently my AMD Ryzen 7 5700X3D died and I've decided that my next CPU will be Intel.
It's either bad luck, bad power quality from the mains, or something in the air in that particular area. I know plenty of people running AM5 builds, have done so myself for the last couple of years, and there were no problems with any of them apart from the usual amdgpu bugs in latest kernels (which are "normal" since I'm running mainline kernels — it's easy to solve by just sticking to lts, and it has seemingly improved anyway since 6.15).
ahofmann · 2h ago
I wouldn't be so hopeless. Intel and AMD CPUs are used in millions of builds and most of them just work.
danieldk · 1h ago
However, the vast majority of PCs out there are not hobbyist builds but Dell/Lenovo/HP/etc. [1] with far fewer possible configurations (and much more testing as a byproduct). I am not saying these machines never have issues, but a high failure rate would not be acceptable to their business customers.
[1] Well, most non-servers are probably laptops today, but the same reasoning applies.
giveita · 28m ago
If you value your time, a Dell laptop with extended warranty and accidental damage cover, where they replace stuff and send people out to fix stuff, is well worth it. It costs, but you can be a dumb user and call "IT" when you need a fix, and that's a nice feeling IMO!
PartiallyTyped · 1h ago
3 of my last 4 machines have been AMD x NVDA and I have been very happy. The intel x NVDA machine has been my least stable one.
encom · 2h ago
>M4 is faster than a lot of desktop CPUs
Yea, but unfortunately it comes attached to a Mac.
An issue I've encountered often with motherboards is that they have brain-damaged default settings that run CPUs out of spec. You really have to go through it all with a fine-toothed comb and make sure everything is set to conservative, stock, manufacturer-recommended settings. And my stupid MSI board resets everything (every single BIOS setting) to MSI defaults when you upgrade its BIOS.
homebrewer · 1h ago
Also be careful with overclocking, because the usual advice of "just running EXPO/XMP" often results in motherboards setting voltages on very sensitive components to more than 30% over their stock values, and this is somehow considered normal.
It looks completely bonkers to me. I overclocked my system to ~95% of what it is able to do with almost default voltages, using bumps of 1-3% over stock, which (AFAIK) is within acceptable tolerances, but it requires hours and hours of tinkering and stability testing.
Most users just set automatic overclocking, have their motherboards push voltages to insane levels, and then act surprised when their CPUs start bugging out within a couple of years.
Shocking!
danieldk · 1h ago
Unfortunately, some separately purchasable hardware components seem to be optimized completely for gamers these days (overclocking mainboards, RGB on GPUs, etc.).
I'd rather run everything at 90% and get very big power savings and still have pretty stellar performance. I do this with my ThinkStation with Core Ultra 265K now - I set the P-State maximum performance percentage to 90%. Under load it runs almost 20 degrees Celsius cooler. Single core is 8% slower, multicore 4.9%. Well worth the trade-off for me.
tl;dr: they heavily customize BIOS settings, since many BIOSes run CPUs out-of-spec by default. With these customizations there was not much of a difference in failure rate between AMD and Intel at that point in time (even when including Intel 13th and 14th gen).
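On Linux, the 90% performance cap mentioned above maps to the intel_pstate driver's `max_perf_pct` knob. A sketch, assuming the intel_pstate driver is active and you have root (the exact mechanism on the poster's machine may differ):

```shell
#!/bin/sh
# Cap the intel_pstate driver at 90% of maximum performance.
# The knob only exists when intel_pstate is the active CPU frequency
# driver, and writing it requires root.
KNOB=/sys/devices/system/cpu/intel_pstate/max_perf_pct
if [ -w "$KNOB" ]; then
    echo 90 > "$KNOB"
    echo "max_perf_pct now $(cat "$KNOB")"
else
    echo "intel_pstate not available or not writable (need root?)"
fi
```

Note the setting does not survive a reboot; to make it permanent you'd put it in a boot-time script or a systemd unit.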
ahartmetz · 1h ago
Since you mention EXPO/XMP, which are about RAM overclocking: RAM has the least trouble with overvoltage. Raising some of the various CPU voltages is a problem, which RAM overclocking may also do.
eptcyka · 57m ago
The heat is starting to become an issue for DDR5 with higher voltage.
electroglyph · 1h ago
yah, the default overclocking stuff is pretty aggressive these days
danieldk · 1h ago
> Yea, but unfortunately it comes attached to a Mac.
Yeah. If Asahi worked on newer Macs and Apple Silicon Macs supported eGPU (yes I know, big ifs), the choice would be simple. I had NixOS on my Mac Studio M1 Ultra for a while and it was pretty glorious.
claudex · 1h ago
>And my stupid MSI board resets everything (every single BIOS setting) to MSI defaults when you upgrade its BIOS.
I had the same issue with my MSI board, next one won't be a MSI.
techpression · 26m ago
My ASUS and Gigabyte did the same too. I think vendors are being lazy and don’t want to write migration code
izacus · 13m ago
Did what exactly? All the ASUS and Gigabytes I've seen had PBO (which I guess you're talking about) disabled by default.
perching_aix · 54m ago
> Tom’s Hardware recently reported that “Intel Raptor Lake crashes are increasing with rising temperatures in record European heat wave”, which prompted some folks to blame Europe’s general lack of Air Conditioning
Is politics really this far in rotting people's brains?
These CPUs have defined maximum operating temperatures, beyond which they default to auto-shutdown. That being like 100-110 °C. (Warm summer room temps without aircon being 30 °C to 35 °C.) It has piss all to do with the guy running his CPU that hot - it's his CPU cooling setup that was not up to the job.
What do you have to be thinking to believe that no ACs in yurop == dumb yuros' CPUs dying? The whole point of the temperature based throttling and auto shutdown is so that this doesn't matter!
Havoc · 2m ago
It's not some politics thing Tom's Hardware came up with. It's from Firefox's crash reports:
>If you have an Intel Raptor Lake system and you're in the northern hemisphere, chances are that your machine is crashing more often because of the summer heat. I know because I can literally see which EU countries have been affected by heat waves by looking at the locales of Firefox crash reports coming from Raptor Lake systems.
13th and 14th gen Intel is also showing up in aggregated gaming crash data, though I'm not sure whether that's heat-related.
That doesn't preclude the possibility of temperature-related instability below 100C...
Noting that it's different generations - it's not the Core Ultras (15th) that are known to have these issues.
>rotting people's brains
...
baobabKoodaa · 1h ago
Why is the author showing a chart of room temperatures? CPU temperature is what matters here. Expecting a CPU to be stable at 100C is just asking for problems. Issue probably could have been avoided by making improvements to case airflow.
Jolter · 1h ago
I would expect the CPU to start throttling at high temperatures in order to avoid damage. Allegedly, it never did, and instead died. Do you think that’s acceptable in 2025?
ACCount37 · 47m ago
Thermal throttling originated as a safety feature. The early implementations were basically a "thermal fuse" in function, and cut all power to the system to prevent catastrophic hardware damage. Only later did the more sophisticated versions that do things like "cut down clocks to prevent temps from rising further" appear.
On desktop PCs, thermal throttling is often set up as "just a safety feature" to this very day. Which means: the system does NOT expect to stay at the edge of its thermal limit. I would not trust thermal throttling with keeping a system running safely at a continuous 100C on die.
100C is already a "danger zone", with elevated error rates and faster circuit degradation - and a die only has so many thermal sensors. Some under-sensored hotspots may be running a few degrees higher than that. That may not be enough to kill the die outright - but it's more than enough to put those hotspots into a "fuck around" zone of increased instability and massively accelerated degradation.
If you're relying on thermal throttling to balance your system's performance, as laptops and smartphones often do, then you seriously need to dial in better temperature thresholds. 100C is way too spicy.
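The "cut down clocks to prevent temps from rising further" behavior described above can be sketched as a trivial proportional control loop. All constants below are invented for illustration; real governors (HWP, firmware throttling, etc.) are far more sophisticated:

```python
# Toy model of clock-based thermal throttling: each tick, temperature
# moves toward a level set by the current clock, and the governor cuts
# the clock whenever temperature exceeds the threshold it defends.

TARGET_C = 85.0      # throttle threshold the governor defends
MAX_MHZ = 5000
MIN_MHZ = 800

def step(temp_c, mhz, ambient_c=25.0):
    heat = mhz * 0.02                      # heat output grows with clock
    cooling = (temp_c - ambient_c) * 0.8   # heatsink + airflow vs. ambient
    temp_c += (heat - cooling) * 0.1
    if temp_c > TARGET_C:
        mhz = max(MIN_MHZ, mhz - 100)      # back off the clock
    elif temp_c < TARGET_C - 5 and mhz < MAX_MHZ:
        mhz = min(MAX_MHZ, mhz + 100)      # recover headroom when cool
    return temp_c, mhz

temp, mhz = 40.0, MAX_MHZ
for _ in range(500):
    temp, mhz = step(temp, mhz)
print(f"settled at {temp:.1f} C, {mhz} MHz")
```

Even this toy version shows the point being made: with an undersized cooler the system doesn't crash, it just settles at a permanently reduced clock, which is why "stuck at the thermal limit" means leaving performance on the table.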
FeepingCreature · 1h ago
No but it's also important to realize that this CPU was running at an insane temperature that should never happen in normal operation. I have a laptop with an undersized fan and if I max out all my cores with full load, I barely cross 80. 100 is mental. It doesn't matter if the manufacturer set the peak temperature wrong, a computer whose cpu reaches 100 degrees celsius is simply built incorrectly.
If nothing else, it very clearly indicates that you can boost your performance significantly by sorting out your cooling because your cpu will be stuck permanently emergency throttling.
izacus · 14m ago
I somehow doubt that, are you looking at the same temperature? I haven't seen a laptop that would have thermal stop under 95 for a long time and any gaming laptop will run at 95 under load for package temps.
FeepingCreature · 2m ago
i7 8550u. Google confirms it stabilizes at 80-85C.
That said, there's a difference between a laptop cpu turbo boosting to 90 and a desktop cpu, which are usually cooler, running at 100 sustained.
formerly_proven · 1h ago
Strange, laptop CPUs and their thermal solutions are designed in concert to stay at Tjmax when under sustained load and throttle appropriately to maintain maximum temperature (~ power ~ performance).
ACCount37 · 35m ago
And those mobile devices have much more conservative limits, and much more aggressive throttling behavior.
Smartphones have no active cooling and are fully dependent on thermal throttling for survival, but they can start throttling at as low as 50C easily. Laptops with underspecced cooling systems generally try their best to avoid crossing into triple digits - a lot of them max out at 85C to 95C, even under extreme loads.
perching_aix · 44m ago
> Expecting a CPU to be stable at 100C is just asking for problems.
No. High performance gaming laptops will routinely do this for hours on end for years.
If it can't take it, it shouldn't allow it.
swinglock · 1h ago
The text clearly explains all of this.
whyoh · 2h ago
It's crazy how unreliable CPUs have become in the last 5 years or so, both AMD and Intel. And it seems they're all running at their limit from the factory, whereas 10-20 years ago they usually had ample headroom for overclocking.
techpression · 21m ago
The 7800X3D is amazing here, runs extremely cool and stable, you can push it far above its defaults and it still won’t get to 80C even with air cooling. Mine was running between 60-70 under load with PBO set to high. Unfortunately it seems its successor is not that great :/
stavros · 2h ago
That's good, isn't it? I don't want the factory leaving performance on the table.
bell-cot · 1h ago
Depends on your priorities. That "performance on the table" might also be called "engineering safety factor for stability".
makeitdouble · 1h ago
TBF using more conservative energy profiles will bring stability and safety. To that effect in Windows the default profile effectively debuffs the CPU and most people will be fine that way.
stavros · 1h ago
Given that there used to be plenty of room to overclock the cores while still keeping them stable, I think it was more "performance on the table".
formerly_proven · 1h ago
You could also get the idea that vendors sometimes make strange decisions which increase neither performance nor reliability.
For example, various brands of motherboards are / were known to basically blow up AMD CPUs when using AMP/XMP, with the root cause being that they jacked an uncore rail way up. Many people claimed they did this to improve stability, but overclockers know that that rail has a sweet spot for stability and they went way beyond it (so much so that the actual silicon failed and burned a hole in itself with some low-ish probability).
Ekaros · 59m ago
Seems like failure in choosing cooling solutions. These high-end chips have obscene cooling needs. My guess would be using something that was not designed for TDP in question.
Sufficient cooler, with sufficient airflow is always needed.
uniqueuid · 47m ago
For what it's worth, I have an i9-13900K paired with the largest air cooler available at the time (a be quiet! Dark Rock 5 IIRC), and it's incapable of sufficiently cooling that CPU.
The 13900k draws more than 200W initially and thermal throttles after a minute at most, even in an air conditioned room.
I don't think that thermal problems should be pushed to end user to this degree.
SomeoneOnTheWeb · 4m ago
This means your system doesn't have enough airflow if it throttles this quickly.
But I agree this should not be a problem in the first place.
ttyyzz · 50m ago
Agree. Also, use good thermal paste. 100 °C is not safe or sustainable long term. Unfortunately, I think the manufacturer's specifications regarding the maximum temperature are misleading. With proper cooling, however, you'll be well within that limit.
onli · 47m ago
No, those processors throttle their clocks or shut down if they get too hot. Under no circumstances should they fail because of insufficient cooling, even without airflow etc.
willtemperley · 1h ago
Gaming seems to be the final stronghold of x86 and I imagine that will shrink. Clearly games are able to run well on RISC architectures despite decades of x86 optimisation in game engines. Long term, an architecture that consumes more power and is tightly locked down by licensing looks unsustainable compared to royalty-free RISC alternatives. The instability, presumably because Intel are overclocking their own chips to look OK on benchmarks will not help.
smallpipe · 26m ago
x86 hasn't been CISC in 3 decades anywhere but in the frontend. An architecture doesn't consume power, a design does. I'm all for shitting on intel, but getting the facts right wouldn't hurt.
energy123 · 2h ago
I can't comment on the quality question, but for memory bandwidth sensitive use cases, Intel desktop is superior.
ttyyzz · 1h ago
I'm not convinced, what would be the use case?
Jnr · 2h ago
I have not had any issues with Intel or AMD CPUs, but I have had so many issues with AMD APUs that I would steer clear of them. In my experience with different models, they have many graphics issues and broken video transcoding, and are overall extremely unstable. If you need decent integrated graphics then Intel is the only real option.
sellmesoap · 1h ago
They make a lot of APUs for gaming handhelds, and I think they do well in that segment. I've had a handful of desktop and laptop APUs with no complaints. Even an APU with ECC support; they've all worked without a hitch. I haven't tried transcoding anything on them, mind you.
Jnr · 1h ago
Yes, I have Steam Deck and it works great. But I also have 2400G and 5700G and both of those have graphics issues (tested with different recommended RAM sets).
vkazanov · 1h ago
My laptop's AMD is great (Ryzen AI 7 PRO 360 w/ Radeon 880M). Gaming, GPU work, battery, small LLMs - all just work on my Ubuntu.
Don't know about transcoding though.
imiric · 57m ago
I've had the same experience with an 8600G on Linux. Very frequent graphics driver crashes and KDE/Wayland freezes, on old and new kernels alike. I've been submitting error reports for months, and the issues still persist. The RAM passes MemTest, and the system otherwise works fine, but the graphics issues are very annoying. It's not like I'm gaming or doing anything intensive either; it happens during plain desktop usage.
Yet I also use a 7840U in a gaming handheld running Windows, and haven't had any issues there at all. So I think this is related to AMD Linux drivers and/or Wayland. In contrast, my old laptop with an NVIDIA GPU and Xorg has given me zero issues for about a decade now.
So I've decided to just avoid AMD on Linux on my next machine. Intel's upcoming Panther Lake and Nova Lake CPUs seem promising, and their integrated graphics have consistently been improving. I don't think AMD's dominance will continue for much longer.
eptcyka · 2h ago
This is rather late, to be quite fair.
discardable_dan · 2h ago
My thoughts exactly: he figured out in 2025 what the rest of us knew in 2022.
positron26 · 1h ago
One of my work computers died and I hadn't checked the CPU market in years. Rode home that night in a taxi with a Ryzen 1700x completely stoked that AMD was back in the game.
If anyone thinks competition isn't good for the market or that also-rans don't have enough of an effect, just take note. Intel is a cautionary tale. I do agree we would have gotten where we are faster with more viable competitors.
M4 is neat. I won't be shocked if x86 finally gives up the ghost as Intel decides playing in the RISC-V or ARM space is their only hope to get back into an up-cycle. AMD has wanted to do heterogeneous stuff for years. RISC-V might be the way.
One thing I'm finding is that compilers are actually leaving a ton on the table for AMD chips, so I think this is an area where AMD and all of the users, from SMEs on down, can benefit tremendously from cooperatively financing the necessary software to make it happen.
mrlonglong · 2h ago
To make linking go quicker, use mold.
Pass -fuse-ld=mold when building.
positron26 · 1h ago
Do beware when doing WASM with mold. I shipped a broken WASM binary that Firefox could run just fine but Chrome would not.
hawshemi · 49m ago
And welcome to USB slow speeds and issues...
amelius · 33m ago
USB issues are driving me nuts. Please, someone show me the path to serenity.
andsoitis · 2h ago
> I would say that 25 to 28 degrees celsius are normal temperatures for computers.
An ideal ambient (room) temperature for running a computer is 15-25 Celsius (59-77 Fahrenheit)
And that is an impossibility in much of the world today, and it will only become more so going forward.
nl · 1h ago
Much of the world (for better or worse) uses airconditioning in places they commonly use desktop computers.
em-bee · 37m ago
no they don't. in some countries in europe (maybe in all of them?), installing airconditioning is frowned upon because it is considered a waste of energy. if you want government subsidies for replacing your heating system with a more energy-efficient one, you are not allowed to have airconditioning. and in the rest of the world only people/countries that are well off, and don't consider their energy usage, do it. airconditioning is a luxury.
using too much airconditioning is also not comfortable. i used to live in singapore. we used to joke that singapore has two seasons: indoors and outdoors. because the airconditioning is turned up so high that you had to bring a jacket to wear inside. i'd frequently freeze after entering a building. i don't know why they do it, because it doesn't make sense. when i did turn on airconditioning at home i'd go barely below 30. just a few degrees cooler than the outside, so it feels more comfortable without making the transition too hard.
Are they saying this is bad? This Intel CPU has been at it for over a decade. There was a fan issue for half a year, and it would go up to 80 C for... half a year. It still works perfectly fine, but it is outdated: it lacks instruction sets that I need, and it has only two cores, with 1 thread per core.
Maybe today's CPUs would not be able to handle it, I am not sure. One would expect these things to only improve, but seems like this is not the case.
Edit: I misread it, oops! Disregard this comment.
Rohansi · 1h ago
That is your CPU temperature, not ambient (room) temperature.
johnisgood · 1h ago
Oh, I misread. My bad!
imtringued · 1h ago
So you're saying that if you go even 3 degrees Celsius over that temperature range you should expect your CPU to fry itself? Even when the CPU throttled itself to exactly 100°C?
andsoitis · 59m ago
> So you're saying that if you go even 3 degrees Celsius over that temperature range you should expect your CPU to fry itself? Even when the CPU throttled itself to exactly 100°C?
It is actually 2.9999, precisely.
shmerl · 2h ago
Well, worryingly there were reports of AMD X3D CPUs burning out too. I hope it will be sorted out.
KronisLV · 1h ago
> I also double-checked if the CPU temperature of about 100 degrees celsius is too high, but no: this Tom’s Hardware article shows even higher temperatures, and Intel specifies a maximum of 110 degrees. So, running at “only” 100 degrees for a few hours should be fine.
I'd say that even crashing at max temperatures is still completely unreasonable! You should be able to run at 100C or whatever the max temperature is for a week non-stop if you damn well please. If you can't, then the value has been chosen wrong by the manufacturers. If the CPU can't handle that, the clock rates should just be dialed back accordingly to maintain stability.
It's odd to hear about Core Ultra CPUs failing like that, though - I thought that they were supposed to be more power efficient than the 13th and 14th gen, all while not having their stability issues.
That said, I currently have a Ryzen 7 5800X, OCed with PBO to hit 5 GHz with negative CO offsets per core set. There's also an AIO with two fans and the side panel is off because the case I have is horrible. While gaming the temps usually don't reach past like 82C but Prime95 or anything else that's computationally intensive can make the CPU hit and flatten out at 90C. So odd to have modern desktop class CPUs still bump into thermal limits like that. That's with a pretty decent ambient temperature between 21C to 26C (summer).
formerly_proven · 1h ago
> Looking at my energy meter statistics, I usually ended up at about 9.x kWh per day for a two-person household, cooking with induction.
> After switching my PC from Intel to AMD, I end up at 10-11 kWh per day.
It's kind of impressive to increase household electricity consumption by 10% by just switching one CPU.
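A quick sanity check of that 10% figure: going from "9.x" to "10-11" kWh/day is roughly 1 kWh/day extra, which works out to a surprisingly modest continuous draw (midpoint values below are my assumption, since the post only gives ranges):

```python
# Back-of-the-envelope for the figures quoted above.
baseline_kwh = 9.5          # "9.x kWh per day" before the switch
after_kwh = 10.5            # midpoint of "10-11 kWh per day"

extra_kwh = after_kwh - baseline_kwh
extra_watts = extra_kwh * 1000 / 24          # spread over 24 hours
increase_pct = 100 * extra_kwh / baseline_kwh

print(f"extra draw: ~{extra_watts:.0f} W continuous ({increase_pct:.0f}% increase)")
```

So a sustained extra draw in the ~40 W range is all it takes; a CPU that idles 30-50 W higher, or spends hours a day under heavy load, gets there easily.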
usr1106 · 1m ago
I guess the author runs it at high load for long times, not only for the benchmarks to write this blog post. And less than 10 kWh is a low starting point, many households would be much higher.
crinkly · 2h ago
I’ve given up both.
Thorrez · 2h ago
What do you use?
aurareturn · 2h ago
I’ve given up on both and use Apple Silicon only. AMD and Intel are simply too power hungry for how slow they are and can’t optimize for power like Apple can.
izacus · 11m ago
Ah, was wondering where the mandatory Apple advertiser is in this thread.
Hope you get a commission.
maciejw · 1h ago
I also switched to the cheapest Mac Mini M4 Pro this year (after 20+ years of using Intel CPUs). macOS has its quirks, but it provides zsh and it "just works" (unlike the Manjaro install I used in parallel with Windows). I especially like the Preview tool - it has useful PDF and photo editing options.
The hardware is impressive - tiny, metal box, always silent, basic speaker built-in and it can be left always on with minimal power consumption.
Drive size for the basic models is limited (512 GB) - I solved it by moving photos to a NAS. I don't use it for gaming, except Hello Kitty Island Adventure. I would say it's a very competitive choice for a desktop PC in 2025 overall.
lostlogin · 1h ago
I just replaced a headless nuc 9 with a headless M4.
Nuc 9 averaged 65-70W power usage, while the m4 is averaging 6.6W.
The Mac is vastly more performant.
mr_windfrog · 1h ago
That's pretty amazing, I've never heard of that before .-_-!
ychompinator · 2h ago
No desktop CPU I’ve ever used has remained stable at 100 degrees.
My 14900k crashes almost immediately at that temp.
3 hours at 100 degrees is obscene.
swinglock · 1h ago
Then all your desktop CPUs were defective.
Besides AMD CPUs of the early 2000s going up in smoke without working cooling, they all throttle before they become temporarily or permanently unstable. Otherwise they are defective.
I've never had a desktop part fail due to max temperatures, but I don't think I've owned one that advertises nor allows itself to reach or remain at 100c or higher.
If someone sells a CPU that's specified to work at 100 or 110 degrees and it doesn't then it's either defective or fraudulent, no excuses.
clait · 1h ago
Yeah, i can’t believe they think it’s fine.
I would’ve shutdown my PC and rethought my cooling setup the first time it hit 100C tbh
I've been chasing flimsy but very annoying stability problems (some, of course, due to overclocking during my younger years, when it still had a tangible payoff) enough times on systems I had built that taking this one BIG potential cause out of the equation is worth the few dozens of extra bucks I have to spend on ECC-capable gear many times over.
Trying to validate an ECC-less platform's stability is surprisingly hard, because memtest and friends just aren't very reliable at detecting the more subtle problems. Prime95, y-cruncher and Linpack (in increasing order of effectiveness) are better than specialized memory testing software in my experience, but they are not perfect, either.
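To illustrate how little a naive userspace check proves, here is a toy pattern test of my own (not from any of the tools named above). It passes trivially even on marginal hardware, because it generates neither the sustained thermal load nor the access patterns that shake out subtle faults - which is exactly why the heavier compute-based tools catch more:

```python
# Toy memory check: fill a buffer with a pattern, then verify it reads back
# unchanged. A pass here proves very little - real instability tends to show
# up only under sustained, varied load, which is the point made above.
def pattern_test(size_mib=64, pattern=0xA5):
    buf = bytearray([pattern]) * (size_mib * 1024 * 1024)
    # Count bytes that no longer match the written pattern.
    return len(buf) - buf.count(pattern)  # 0 means "no mismatch observed"
```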
Most AMD CPUs (but not their APUs with potent iGPUs - there, you will have to buy the "PRO" variants) these days have full support for ECC UDIMMs. If your mainboard vendor also plays ball - annoyingly, only a minority of them enables ECC support in their firmware, so always check for that before buying! - there's not much that can prevent you from having that stability enhancement and reassuring peace of mind.
Quoth DJB (around the very start of this millennium): https://cr.yp.to/hardware/ecc.html :)
This is the annoying part.
That AMD permits ECC is a truly fantastic situation, but whether it's supported by the motherboard is often a gamble, and worse: it's not advertised even when it is available.
I have an ASUS PRIME TRX40 PRO and the tech specs say that it can run ECC and non-ECC DIMMs, but not whether ECC will actually be available to the operating system - merely that the DIMMs will work.
It's much more hit and miss in reality than it should be, though this motherboard was a pricey one: one can't use price as a proxy for features.
I would assume your particular motherboard operates with proper SECDED+-level ECC if you have capable, compatible DIMMs, enable ECC mode in the firmware, and boot an OS kernel that can make sense of it all.
Also: DDR5 has some false ECC marketing due to the memory standard having an on-die error correction scheme built in. Don't fall for it.
It's always the same address, and always a Corrected Error (obviously, otherwise my kernel would panic). However, operating my system's memory at this clock and latency boosts x265 encoding performance (just one of the benchmarks I picked when trying to figure out how to handle this particular tradeoff) by about 12%. That is an improvement I am willing to stomach the extra risk of effectively overclocking the memory module beyond its comfort zone for, given that I can fully mitigate it by virtue of properly working ECC.
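To confirm that corrected errors are actually being counted, a small sketch (assuming the Linux EDAC sysfs layout, which exposes `ce_count`/`ue_count` per memory controller) can be used - if the directory is missing or empty, the kernel is not reporting ECC events at all:

```python
from pathlib import Path

# Read corrected (CE) and uncorrected (UE) ECC error counters that the
# Linux EDAC subsystem exposes per memory controller under
# /sys/devices/system/edac/mc/. A steadily rising ce_count on an
# otherwise stable system is the "corrected at the same address"
# situation described above.
def edac_error_counts(root="/sys/devices/system/edac/mc"):
    counts = {}
    for mc in sorted(Path(root).glob("mc[0-9]*")):
        counts[mc.name] = (int((mc / "ce_count").read_text()),
                           int((mc / "ue_count").read_text()))
    return counts
```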
Also: could you not have just bought slightly faster RAM, given the premium for ECC?
I wish AMD would make ECC a properly advertised feature with clear motherboard support. At least DDR5 has some level of ECC.
When you do not have a bunch of components ready to swap out it is also really hard to debug these issues. Sometimes it’s something completely different like the PSU. After the last issues, I decided to buy a prebuilt (ThinkStation) with on-site service. The cooling is a bit worse, etc., but if issues come up, I don’t have to spend a lot of time debugging them.
Random other comment: when comparing CPUs, a sad observation was that even a passively cooled M4 is faster than a lot of desktop CPUs (typically single-threaded, sometimes also multi-threaded).
By what metric ought I to buy a CPU these days? Should I care about reviews? I am fine with a mid-range CPU, for what it's worth, and I thought of the AMD Ryzen 7 5700 or AMD Ryzen 5 5600GT or anything with a similar price tag. They might even be lower-end by now?
Intel is just bad at the moment and not even worth touching.
And it's no bad power quality on mains as someone suggested (it's excellent here) or 'in the air' (whatever that means) if it happens very quickly after buying.
I would guess that a lot of it comes from bad firmware/mainboards, etc. like the recent issue with ASRock mainboards destroying Ryzen 9000-series CPUs: https://www.techspot.com/news/108120-asrock-confirms-ryzen-9... Anyone who uses Linux and has dealt with bad ACPI bugs, etc. knows that a lot of these mainboards probably have crap firmware.
I should also say that I had a Ryzen 3700X and 5900X many years back and two laptops with a Ryzen CPU and they have been awesome.
https://news.ycombinator.com/item?id=45043269
https://youtu.be/OVdmK1UGzGs
https://youtu.be/oAE4NWoyMZk
[1] Well, most non-servers are probably laptops today, but the same reasoning applies.
Yea, but unfortunately it comes attached to a Mac.
An issue I've encountered often with motherboards is that they have brain-damaged default settings that run CPUs out of spec. You really have to go through it all with a fine-toothed comb and make sure everything is set to conservative, stock, manufacturer-recommended settings. And my stupid MSI board resets everything (every single BIOS setting) to MSI defaults when you upgrade its BIOS.
It looks completely bonkers to me. I overclocked my system to ~95% of what it is able to do with almost default voltages, using bumps of 1-3% over stock, which (AFAIK) is within acceptable tolerances, but it requires hours and hours of tinkering and stability testing.
Most users just set automatic overclocking, have their motherboards push voltages to insane levels, and then act surprised when their CPUs start bugging out within a couple of years.
Shocking!
I'd rather run everything at 90% and get very big power savings and still have pretty stellar performance. I do this with my ThinkStation with Core Ultra 265K now - I set the P-State maximum performance percentage to 90%. Under load it runs almost 20 degrees Celsius cooler. Single core is 8% slower, multicore 4.9%. Well worth the trade-off for me.
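For reference, on Linux with the intel_pstate driver that cap amounts to a single sysfs write. A minimal sketch (assuming the driver is active and you run as root; `max_perf_pct` is the driver's documented knob, and the setting resets on reboot):

```python
from pathlib import Path

# Cap the intel_pstate driver at a percentage of maximum performance.
# Takes effect immediately; persists only until reboot, so hook it into
# a startup script if you want it permanent. Requires root.
def cap_perf(pct, root="/sys/devices/system/cpu/intel_pstate"):
    knob = Path(root) / "max_perf_pct"
    knob.write_text(str(pct))
    return int(knob.read_text())  # read back the applied value
```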
(Yes, I know that there are exceptions.)
https://www.pugetsystems.com/blog/2024/08/02/puget-systems-p...
tl;dr: they heavily customize BIOS settings, since many BIOSes run CPUs out of spec by default. With these customizations there was not much of a difference in failure rate between AMD and Intel at that point in time (even when including Intel 13th and 14th gen).
Yeah. If Asahi worked on newer Macs and Apple Silicon Macs supported eGPU (yes I know, big ifs), the choice would be simple. I had NixOS on my Mac Studio M1 Ultra for a while and it was pretty glorious.
I had the same issue with my MSI board; my next one won't be an MSI.
Is politics really this far in rotting people's brains?
These CPUs have defined maximum operating temperatures, beyond which they default to auto-shutdown. That being like 100-110 °C. (Warm summer room temps without aircon being 30 °C to 35 °C.) It has piss all to do with the guy running his CPU that hot - it's his CPU cooling setup that was not up to the job.
What do you have to be thinking to believe that no ACs in yurop == dumb yuros' CPUs dying? The whole point of the temperature based throttling and auto shutdown is so that this doesn't matter!
>If you have an Intel Raptor Lake system and you're in the northern hemisphere, chances are that your machine is crashing more often because of the summer heat. I know because I can literally see which EU countries have been affected by heat waves by looking at the locales of Firefox crash reports coming from Raptor Lake systems.
13th and 14th gen Intel is also showing up in aggregated gaming crash data, though not sure if that's heat related
https://www.youtube.com/watch?v=QzHcrbT5D_Y
>beyond which they default to auto-shutdown.
That doesn't preclude the possibility of temperature related instability below 100...
Noting that it's different generations - it's not the Core Ultras (15th) that are known to have these issues.
>rotting people's brains
...
On desktop PCs, thermal throttling is often set up as "just a safety feature" to this very day. Which means: the system does NOT expect to stay at the edge of its thermal limit. I would not trust thermal throttling with keeping a system running safely at a continuous 100C on die.
100C is already a "danger zone", with elevated error rates and faster circuit degradation - and a die only has so many thermal sensors. Some under-sensored hotspots may be running a few degrees higher than that. Which may not be enough to kill the die outright - but more than enough to put those hotspots into a "fuck around" zone of increased instability and massively accelerated degradation.
If you're relying on thermal throttling to balance your system's performance, as laptops and smartphones often do, then you seriously need to dial in better temperature thresholds. 100C is way too spicy.
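A minimal way to watch where your own machine sits relative to such a threshold is to poll the thermal sysfs directly (a sketch assuming the Linux thermal zone layout, which reports millidegrees Celsius):

```python
from pathlib import Path

# Return the highest reading across all thermal zones, in degrees C.
# Whatever cutoff you compare it against is a judgment call - the point
# above is that it should sit well below the 100 C emergency limit.
def hottest_zone_c(root="/sys/class/thermal"):
    temps = [int(p.read_text()) / 1000
             for p in Path(root).glob("thermal_zone*/temp")]
    return max(temps, default=None)
```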
If nothing else, it very clearly indicates that you can boost your performance significantly by sorting out your cooling because your cpu will be stuck permanently emergency throttling.
That said, there's a difference between a laptop cpu turbo boosting to 90 and a desktop cpu, which are usually cooler, running at 100 sustained.
Smartphones have no active cooling and are fully dependent on thermal throttling for survival, but they can start throttling at as low as 50C easily. Laptops with underspecced cooling systems generally try their best to avoid crossing into triple digits - a lot of them max out at 85C to 95C, even under extreme loads.
No. High performance gaming laptops will routinely do this for hours on end for years.
If it can't take it, it shouldn't allow it.
For example, various brands of motherboards are / were known to basically blow up AMD CPUs when using AMP/XMP, with the root cause being that they jacked an uncore rail way up. Many people claimed they did this to improve stability, but overclockers know that that rail has a sweet spot for stability and they went way beyond it (so much so that the actual silicon failed and burned a hole in itself with some low-ish probability).
A sufficient cooler with sufficient airflow is always needed.
The 13900k draws more than 200W initially and thermal throttles after a minute at most, even in an air conditioned room.
I don't think that thermal problems should be pushed to end user to this degree.
But I agree this should not be a problem in the first place.
Don't know about transcoding though.
Yet I also use a 7840U in a gaming handheld running Windows, and haven't had any issues there at all. So I think this is related to AMD Linux drivers and/or Wayland. In contrast, my old laptop with an NVIDIA GPU and Xorg has given me zero issues for about a decade now.
So I've decided to just avoid AMD on Linux on my next machine. Intel's upcoming Panther Lake and Nova Lake CPUs seem promising, and their integrated graphics have consistently been improving. I don't think AMD's dominance will continue for much longer.
If anyone thinks competition isn't good for the market or that also-rans don't have enough of an effect, just take note. Intel is a cautionary tale. I do agree we would have gotten where we are faster with more viable competitors.
M4 is neat. I won't be shocked if x86 finally gives up the ghost as Intel decides playing in Risc V or ARM space is their only hope to get back into an up-cycle. AMD has wanted to do heterogeneous stuff for years. Risc V might be the way.
One thing I'm finding is that compilers are actually leaving a ton on the table for AMD chips, so I think this is an area where AMD and all of the users, from SMEs on down, can benefit tremendously from cooperatively financing the necessary software to make it happen.
Pass -fuse-ld=mold when building.
An ideal ambient (room) temperature for running a computer is 15-25 Celsius (60-77 Fahrenheit).
Source: https://www.techtarget.com/searchdatacenter/definition/ambie...
Using too much air conditioning is also not comfortable. I used to live in Singapore. We used to joke that Singapore has two seasons: indoors and outdoors, because the air conditioning is turned up so high that you had to bring a jacket to wear inside. I'd frequently freeze after entering a building. I don't know why they do it, because it doesn't make sense. When I did turn on air conditioning at home I'd go barely below 30 - just a few degrees cooler than the outside, so it feels more comfortable without making the transition too hard.
Maybe today's CPUs would not be able to handle it, I am not sure. One would expect these things to only improve, but seems like this is not the case.
Edit: I misread it, oops! Disregard this comment.
It is actually 2.9999, precisely.
I'd say that even crashing at max temperatures is still completely unreasonable! You should be able to run at 100C or whatever the max temperature is for a week non-stop if you damn well please. If you can't, then the value has been chosen wrong by the manufacturers. If the CPU can't handle that, the clock rates should just be dialed back accordingly to maintain stability.
It's odd to hear about Core Ultra CPUs failing like that, though - I thought that they were supposed to be more power efficient than the 13th and 14th gen, all while not having their stability issues.
That said, I currently have a Ryzen 7 5800X, OCed with PBO to hit 5 GHz with negative CO offsets per core set. There's also an AIO with two fans and the side panel is off because the case I have is horrible. While gaming the temps usually don't reach past like 82C but Prime95 or anything else that's computationally intensive can make the CPU hit and flatten out at 90C. So odd to have modern desktop class CPUs still bump into thermal limits like that. That's with a pretty decent ambient temperature between 21C to 26C (summer).
> After switching my PC from Intel to AMD, I end up at 10-11 kWh per day.
It's kind of impressive to increase household electricity consumption by 10% by just switching one CPU.
Hope you get a commission.