I don't worry about things like games or 3D. The big advantage of Intel, for me, is the open-source graphics drivers. As a long-time user of Linux (on everything), I don't need to worry about graphics driver issues after kernel upgrades.
d3Xt3r · 1d ago
You get the same advantage with AMD, but better, thanks to AMD's superior GPUs.
You might not personally care about 3D, but a lot of DEs and apps use features that benefit from GPUs these days (even websites make use of them now!), so technically you'd be better off with AMD - especially considering how much they've improved in the last couple of years, thanks to the involvement of Valve and the Steam Deck.
slipofpaper · 1d ago
AMD has had very good Linux graphics drivers for a while now. The last time I had to fiddle with graphics drivers, I was still using an Nvidia card. AMD just worked out of the box.
noplacelikehome · 1d ago
This sadly doesn't match my lived Linux experience. The amdgpu drivers are absolutely abominable with Thunderbolt and regularly cause suspend/resume issues, even when the laptop has suspended while connected to the dock and resumed whilst still connected.
Beyond this, applications like Firefox fairly regularly cease rendering frames for seconds at a time while amdgpu logs various timeout errors.
AMD has a lot of work to do just to match Intel on Linux graphics driver stability.
nicolaslem · 1d ago
Ryzen 6000 series user here. I run Arch and the past three years have been a roller-coaster. Suspend/resume works well, but the amdgpu crashes are a constant pain. Sometimes it works well for a few weeks, until a kernel update introduces a regression, which then gets fixed a few weeks later. Rinse and repeat for the past three years. I am seriously considering Intel for my next laptop.
dagw · 1d ago
Personally, I've had many more problems over the past 6-7 years with AMD's open-source drivers than I've had with Nvidia's closed-source drivers. Their Ryzen APUs especially have caused me so much pain.
mkj · 1d ago
Intel's wifi cards and drivers are their saving grace for laptops. Atheros/Qualcomm drivers are pitiful in comparison on Linux.
g42gregory · 1d ago
Perhaps, Apple Silicon is the way to solve the dilemma... :-)
spacebanana7 · 1d ago
It'll never be made available outside of the Apple ecosystem, but if it ever were, there'd be billions of dollars of orders on day one.
wqaatwt · 1d ago
They are not particularly good desktop chips, though. Or rather, the market niche for them (low power plus relatively high performance) is rather small and generally filled by laptops anyway.
For desktop use cases like gaming or other performance-intensive workloads they are not that great (IIRC the 14900K, which cost a few hundred dollars by the end, was considerably faster than the top-end M3 chip).
Of course, high memory bandwidth plus unified CPU/GPU memory is an advantage in certain cases, but it also makes the system entirely non-modular.
johnb231 · 17h ago
Apple is faster at single-core and can match or exceed the 14900K at multi-core in some applications.
They are not for gaming. They are much faster than Intel for video editing due to hardware codecs.
The Mac Mini / iMac / Mac Studio / Mac Pro are more than enough for most desktop users and provide high performance without obnoxious fan noise. The high-end Intel chips draw ~300 watts. I need a silent desktop.
wqaatwt · 1h ago
> Apple is faster at single-core and can match or exceed the 14900K
Well yeah, if price is not a factor and you don’t care about gaming and such.
That’s a perfectly reasonable use case. The original premise was that Apple’s M series chips would be extremely successful if Apple started selling them directly (and presumably provided Windows/Linux drivers).
Which obviously can’t and won’t happen, for clear reasons, but even if it did, they probably wouldn’t be very popular as desktop chips (of course that also comes down to pricing). There aren’t that many people who’d want or need a desktop with a chip like that (instead of just getting a laptop).
7speter · 1d ago
I don’t really think Apple cares about gaming. AI is a big part of why their SoCs matter, coupled with the potential to pair 128-512 GB of unified memory with them. These chips are all about running LLMs locally pretty well at under 100 W. Also great for creatives.
sagarm · 11h ago
This market of people willing to spend thousands to run large LLMs locally -- and still slowly -- has to be tiny.
Plus, there are competitors using AMD CPUs that provide 128GB with 500GB/s of bandwidth (8x DDR5-8000) for just $2K now.[1]
A Mac Studio with an M4 Max has similar bandwidth but costs $4k for similar specs; the M3 Ultra has ~800GB/s of bandwidth, but 256GB of RAM (there's no 128GB option) and 2TB of SSD will set you back $6k!
[1] https://www.gmktec.com/products/amd-ryzen%E2%84%A2-ai-max-39...
There's also the 48GB B60 for $1000. Of course it might be massively overhyped like some Intel things, but if you don't care about power usage, noise, and heat, a PC with four of those might blow the equivalently priced Mac out of the water.
avhception · 1d ago
If only the CPUs could be had outside of Macs!
At least we're seeing _some_ ARM development, I'm currently typing this from a Lenovo x13s with a Qualcomm ARM CPU. It works with Fedora, but the firmware is less than ideal and requires you to feed the bootloader a devicetree file.
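For anyone curious, the DTB handoff looks roughly like this in a GRUB menu entry (a sketch; paths and versions are placeholders, though the DTB file name matches the one the mainline kernel ships for the x13s):

    menuentry 'Fedora (x13s, explicit DTB)' {
        linux      /vmlinuz-<version> root=UUID=<rootfs-uuid> rw
        initrd     /initramfs-<version>.img
        devicetree /dtb/qcom/sc8280xp-lenovo-thinkpad-x13s.dtb
    }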
Other than that, I'm really happy with the hardware.
levi0saur · 1d ago
> At work I need to drive two 4K displays at 60Hz and I don't think there are many motherboards that will do that with onboard graphics
MSI has such motherboards in their PRO range; the UEFI allows you to set the primary GPU. The otherwise anemic Intel UHD 770 graphics can drive two 4K monitors (144Hz and 60Hz, DP and HDMI respectively) quite smoothly.
Besides performance and idle power consumption, the onboard NIC and wireless should also be taken into consideration. Intel's are pretty well supported across operating systems, and were the deciding factor in my case.
nyarlathotep_ · 21h ago
RE: displays, 4k60
I've done this without a single issue on several iGPU Ryzen CPUs (off the top of my head, a Ryzen 7 5700G and a Ryzen 3 3200G). Neither used a discrete GPU at the time.
Couldn't tell you which motherboard; I specifically sought out one with 2x HDMI for the Ryzen 7, as I had no intent to use a dGPU in this machine. Plenty of sleep/wake issues, but monitors being present/connected has never been an issue with either.
esseph · 1d ago
This was me ~5y ago, but at that time I'd still consider them for datacenter.
Now? I actively avoid the Intel models for things, but sometimes I'm solving a particular problem with various constraints and for whatever reason the vendors only have a handful of AMD SKUs. Feels gross.
adrian_b · 1d ago
Although I intensely hate some anonymous Intel employees, buying AMD instead of Intel is not an emotional decision for me.
I hate many things done by Intel, but none more than whoever conceived, around 1994, the market segmentation between the Pentium and the Pentium Pro. Those people are the reason that, today, the majority of non-server computers lack protection against memory errors, and ECC memory modules cost far more than their production cost justifies.
While this market segmentation scheme has increased profits for both CPU and memory manufacturers, it will never be possible to estimate the financial losses it has inflicted on the rest of the world through things like undetected data corruption and computer crashes misattributed to software bugs.
However, that is not the reason why six years have passed since I last bought Intel CPUs (early 2019, a couple of Intel NUCs), despite previously having been a big spender on Intel CPUs.
The programs that matter most to me are either programs I write myself or programs I compile from source with whatever compilation options I choose. Therefore, buying any x86 CPU today that does not support AVX-512 would be unacceptable.
AVX-512 is a much better vector ISA than AVX, and its current names, AVX-512 and AVX10, are actually an insult, because AVX-512 was not an evolution of AVX. The Larrabee New Instructions (the initial name of AVX-512) were developed, and even implemented, slightly earlier than AVX. The first product with AVX, Sandy Bridge, launched in 2011, while the first product with the Larrabee New Instructions had been delivered (in development systems) a year earlier.
AVX should never have existed; it was a creation of Intel's Team A, while Larrabee was a creation of outsiders together with Team C or D. As launched in 2011, AVX was a completely obsolete vector ISA on the day of its launch, but it improved a lot in 2013, when Haswell added many instructions taken from Larrabee, including FMA3 and gather.
Now the Intel CPUs that support AVX-512 are too expensive compared with the AMD alternatives, while the decently priced Intel CPUs lack AVX-512 support, so they have much lower performance than they should in any application that can benefit from array operations. This is masked by legacy applications, which have avoided AVX-512 because most of the installed base lacked support, but I do not care about legacy applications.
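As a concrete illustration (my own minimal sketch, not anyone's production code), this is the usual pattern when you compile your own programs: keep the AVX-512 path behind a runtime dispatch so the same binary still runs on CPUs without it. The intrinsics and the __builtin_cpu_supports check are standard GCC/Clang; the file and function names are made up.

    /* avx512_fma.c -- minimal AVX-512 dispatch sketch (illustrative).
       Build: gcc -O2 avx512_fma.c   (no -mavx512f needed; the target
       attribute enables AVX-512 for just the one function) */
    #include <immintrin.h>
    #include <stddef.h>
    #include <stdio.h>

    /* c[i] += a[i] * b[i], 16 floats per iteration via FMA. */
    __attribute__((target("avx512f")))
    static void fma_avx512(const float *a, const float *b, float *c, size_t n)
    {
        size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            __m512 va = _mm512_loadu_ps(a + i);
            __m512 vb = _mm512_loadu_ps(b + i);
            __m512 vc = _mm512_loadu_ps(c + i);
            _mm512_storeu_ps(c + i, _mm512_fmadd_ps(va, vb, vc));
        }
        for (; i < n; i++)  /* scalar tail */
            c[i] += a[i] * b[i];
    }

    int main(void)
    {
        float a[32], b[32], c[32] = {0};
        for (int i = 0; i < 32; i++) { a[i] = (float)i; b[i] = 2.0f; }
        if (__builtin_cpu_supports("avx512f")) {
            fma_avx512(a, b, c, 32);
            printf("AVX-512 path: c[31] = %.1f\n", c[31]); /* 62.0 */
        } else {
            puts("no AVX-512F; a real program would fall back to AVX2/scalar");
        }
        return 0;
    }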
Currently AMD does not have CPUs optimized for low power levels, i.e. for TDPs of 6-7 W, like Intel Twin Lake/Amston Lake, or of 15 to 17 W, like Intel Lunar Lake. At such low power, either ARM-based or Intel CPUs are suitable, while AMD is the best choice at any size equal to or greater than an Intel NUC or a not-too-thin tablet/notebook. (For the classic Intel NUC size, i.e. a volume of 0.5 liters with active cooling, a CPU TDP of 28 to 35 W is optimal, e.g. an AMD Strix Point CPU; Lunar Lake is too weak a CPU for the NUC size, while Arrow Lake H would have been powerful enough, but unlike AMD it does not support AVX-512.)
So after the Intel Lunar Lake launch I had planned for some time to buy a Lunar Lake computer for a low-power application, being also curious to test the new Lion Cove and Skymont CPU cores. However, I abandoned those plans after it became known that Lunar Lake has a bug that makes MONITOR/MWAIT unreliable. For me this is a critical feature and a great advantage of x86 CPUs; without it working, I am better off getting an ARM-based SBC for that application.
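For reference, the feature bit itself is easy to check; a minimal sketch reading CPUID.01H:ECX bit 3, which advertises MONITOR/MWAIT per the Intel SDM. Note this only shows whether the feature is advertised: the Lunar Lake problem is the feature misbehaving, not being absent, and the instructions themselves are ring-0, reached through the kernel's idle driver.

    /* check_mwait.c -- does the CPU advertise MONITOR/MWAIT?
       (CPUID leaf 1, ECX bit 3) */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID leaf 1 unavailable");
            return 1;
        }
        printf("MONITOR/MWAIT advertised: %s\n",
               (ecx & (1u << 3)) ? "yes" : "no");
        return 0;
    }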
wahern · 16h ago
AMD also segments, even with respect to ECC. Not all Ryzen CPUs support ECC, officially or unofficially. IME, it's easier to find or build a low-power, ECC-supporting server system based on Intel than AMD, though that says more about Intel's robust channel partner network. Nominally, more of AMD's product range supports ECC than Intel's.
adrian_b · 6h ago
While you are right that AMD also disables ECC in some products (e.g. most laptop CPUs) and has far worse software support for ECC in its consumer products than Intel (its EDAC drivers are seldom up to date with all features working), as an individual or small business it is far easier to buy and assemble a workstation or server with ECC memory using AMD CPUs.
The motherboards for Intel CPUs with ECC support are expensive and they can be found in very few places. In Europe, you may need to order such a motherboard from another country.
On the other hand, I can buy an ASRock or ASUS motherboard with ECC support for the AM5 socket from any shop around me at a much lower price.
The availability of ECC-supporting CPUs and UDIMMs is identical for Intel and AMD, but there is a great difference in the availability of motherboards in favor of AMD.
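If you want to verify that ECC is actually active and reporting, rather than merely installed, the kernel's EDAC counters are the place to look. A minimal sketch, assuming the standard EDAC sysfs layout and a single memory controller exposed as mc0:

    /* ecc_check.c -- read Linux EDAC error counters (sketch). */
    #include <stdio.h>

    static long read_count(const char *path)
    {
        long v = -1;
        FILE *f = fopen(path, "r");
        if (f) {
            if (fscanf(f, "%ld", &v) != 1)
                v = -1;
            fclose(f);
        }
        return v;
    }

    int main(void)
    {
        long ce = read_count("/sys/devices/system/edac/mc/mc0/ce_count");
        long ue = read_count("/sys/devices/system/edac/mc/mc0/ue_count");
        if (ce < 0 && ue < 0) {
            puts("no EDAC mc0: driver not loaded or ECC not active");
            return 1;
        }
        printf("corrected: %ld, uncorrected: %ld\n", ce, ue);
        return 0;
    }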
nan60 · 1d ago
Intel's chips have become so absolutely awful in the last few years that I have no desire to buy one, even in laptops where power efficiency is so important. Maybe I'm just yelling at clouds, but the whole P-core and E-core architecture seems off to me (and apparently to Intel too), and having to implement new schedulers for virtually zero performance gain (just power efficiency) is really annoying. Especially as a Linux user, where power efficiency isn't really the priority and battery life tends to suck anyway.
pjmlp · 1d ago
The whole P-core and E-core architecture is everywhere now in the ARM world that people keep praising around here; if anything, Intel is trailing behind.
nan60 · 22h ago
I understand it on ARM, since it's primarily targeted at mobile and other oddball devices, but using it on desktop-class chips just seems odd to me. I'd even understand it on laptop chips, but on desktop ones it just seems like leaving extra performance on the table.
7speter · 1d ago
P and E cores were around on ARM at least a decade before Alder Lake was released. I remember being told to hold out for Haswell because it was rumored to have big and little cores like ARM CPUs, enabling your computer to use minimal power to check for mailbox updates.
darthrupert · 1d ago
I braved it and got the latest X1 Carbon, and I've been rather happy. Battery life with the latest kernels is quite OK (although obviously not MacBook level) and nearly everything seems to work quite nicely.
The only weird part is the slow throttle-up after sleep: the machine seems stuck at 400MHz for half a minute before it actually wakes up.
hoseja · 1d ago
> and may have better single-core burst performance
Is this even still true?
And in any case, there is the 9950X3D.
johnb231 · 1d ago
Intel's older 13th and 14th gen CPUs still have the fastest single core performance. But they are operating at their physical limit and tend to stop working after a few months. Unreliable trash.
https://browser.geekbench.com/processor-benchmarks
https://www.theverge.com/2024/7/26/24206529/intel-13th-14th-...
EDIT: Apple has the fastest single core https://browser.geekbench.com/mac-benchmarks
Apple is still the fastest, though: the Mac Mini M4 Pro comes out higher, with a 3987 Single-Core score: https://browser.geekbench.com/v6/cpu/12175463