I think the best thing that happened to that system is having an Arctic Cooling device on board. These things are as reliable as it gets.
None of the Arctic Cooling fans I've had has ever failed or lost performance over the years. Even their first-generation desktop fan (Breeze), which has run multiple 8-12 hour shifts with me for the last decade, barely shows its age.
a_t48 · 5h ago
A few years ago I was working at a place that needed to do builds targeting the Jetson platform, and was somewhat allergic to tossing it into the cloud due to cost. We ran the numbers and the Altra paid for itself pretty quickly. Great thing, it ripped through our C++ builds and Docker image creation - I think we ended up with a 64 core version (don't remember through whom, but we needed a server form factor). We still ended up moving our release builds to the cloud, due to some dicey internet situations, but for local builds this thing was A+. I hope they're still using it.
dingdingdang · 1h ago
Sounds good, but it also sounds odd that a dicey internet situation caused you to move to the cloud as opposed to local (I guess it could be a dicey uplink for a remote team or similar).
wffurr · 4m ago
Sending an ssh command vs uploading an entire release binary.
amelius · 2h ago
> And the latest one, an Apple MacBook Pro, is nice and fast but has some limits — does not support 64k page size. Which I need for my work.
I wonder where this requirement comes from ...
zozbot234 · 1h ago
Asahi Linux might support 64k pages on Apple Silicon hardware. Might require patching some of the software though, if it's built assuming a default page size.
It should also be possible to patch Linux itself to support different page sizes in different processes/address spaces, which it currently doesn't. It would be quite fiddly (which is why it hasn't happened so far) but it should be technically feasible.
IIRC ARM64 hardware also has some special support (compared to x86 and x86-64) for handling multiple-page "blocks" - that kind of bridges the gap between a smaller and larger page size, opening up further scenarios for better support on the OS side.
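For what it's worth, the usual fix for that class of assumption is to query the page size at runtime instead of hard-coding 4096. A minimal, plain-POSIX C sketch (nothing Asahi-specific; the sizes are just for illustration):

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* Ask the kernel for the real page size (4k, 16k, 64k, ...)
           instead of assuming the x86-style 4096. */
        long page = sysconf(_SC_PAGESIZE);
        if (page < 0) { perror("sysconf"); return 1; }
        printf("page size: %ld bytes\n", page);

        /* Round mapping lengths up to the real page size rather than
           to a hard-coded 4096. */
        size_t want = 100000;
        size_t len = (want + (size_t)page - 1) & ~((size_t)page - 1);
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        printf("mapped %zu bytes at %p\n", len, p);
        munmap(p, len);
        return 0;
    }

Anything that bakes 4096 into length, offset or alignment maths is the typical breakage when moving to 16k or 64k pages.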
dist1ll · 1h ago
OP works for Red Hat, and some of the tests require booting systems with 64k pages.
What surprises me more is why Red Hat doesn't provide them with the proper hardware..
rwmj · 55m ago
Red Hat has dozens of internal aarch64 machines (similar to the one in the article) that can be reserved, but sometimes you just want a machine of your own to play with.
ot · 2h ago
I would guess to develop and test software that will ultimately run on a system with 64k page size.
amelius · 2h ago
Is there a fundamental advantage over other page sizes, other than the convenience of 64k == 2^16?
dan-robertson · 50m ago
The reason to want small pages is that the page is often the smallest unit that the operating system can work with, so bigger pages can be less efficient – you need more RAM for the same number of memory-mapped files, tricks like guard pages or mapping the same memory twice for a ring buffer have a bigger minimum size, etc.
The reason to want pages of exactly 4k is that software is often tuned for this and may even require it, from not being programmed in a sufficiently hardware-agnostic way (similar to why running lots of software on big-endian systems can be hard).
The reasons to want bigger pages are:
- there is more OS overhead tracking tiny pages
- as well as caches for memory, CPUs have caches for the mapping between virtual memory and physical memory, and this mapping is page-size granularity. These caches are very small (as they have to be extremely fast) so bigger pages means memory accesses are more likely to go to pages in the cache, which means faster memory accesses.
- CPU caches are often indexed using the address bits that fall within the page offset (so the lookup can start before address translation finishes), which means the max size of a cache is page-size * associativity. I think it can be harder to increase the latter than the former so bigger pages could allow for bigger caches, which can make some software perform better.
These things you see in practice are:
- x86 supports 2MB and 1GB pages, as well as 4KB pages. Linux can either directly give you pages in these larger sizes (a fixed number are allocated at startup by the OS) or there is a feature called ‘transparent hugepages’ where sufficiently aligned contiguous smaller pages can be merged. This mostly helps with the first two problems (rough sketch at the end of this comment)
- I think the Apple M-series chips use a 16k page size, which might help with the third problem but I don’t really know much about them
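For the transparent-hugepages case, a rough Linux-only C sketch (this assumes THP is enabled on the box; madvise here is only a hint, and the 64 MiB size is arbitrary):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        /* 64 MiB anonymous mapping; with THP enabled the kernel may back
           the 2 MiB-aligned parts of it with huge pages. */
        size_t len = 64UL << 20;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Hint (not a guarantee) that this range should use huge pages. */
        if (madvise(p, len, MADV_HUGEPAGE) != 0)
            perror("madvise(MADV_HUGEPAGE)");

        /* Touch every page so memory is actually allocated; the effect
           shows up as AnonHugePages in /proc/self/smaps. */
        for (size_t i = 0; i < len; i += 4096)
            ((volatile char *)p)[i] = 1;

        munmap(p, len);
        return 0;
    }

The pre-reserved flavour goes through hugetlbfs / MAP_HUGETLB instead and fails outright if no huge pages were set aside.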
ch_123 · 1h ago
64k is the largest page size that the ARM architecture supports. The large page size provides advantages for applications which allocate large amounts of memory.
raverbashing · 1h ago
Yes there are
(as a starting point 4k is a "page size for ants" in 2025 - 4MB might be too much however)
But the bigger the page, the fewer TLB entries you need, and the fewer entries in the OS data structures managing memory, etc.
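Back-of-the-envelope illustration of the TLB point (the 1024-entry figure is made up; real per-core counts vary):

    #include <stdio.h>

    int main(void) {
        /* Hypothetical 1024-entry TLB; the point is how much address
           space a full TLB covers at each page size. */
        const long long entries = 1024;
        const long long page_sizes[] = {4LL << 10, 16LL << 10, 64LL << 10, 2LL << 20};
        for (int i = 0; i < 4; i++)
            printf("%8lld-byte pages -> TLB reach %5lld MiB\n",
                   page_sizes[i], entries * page_sizes[i] >> 20);
        return 0;
    }

Same entry count, 16x the reach going from 4k to 64k pages - that's the whole argument in one multiplication.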
fc417fc802 · 2m ago
4K seems appropriate for embedded applications. Meanwhile 4M seems like it would be plenty small for my desktop. Nearly every process is currently using more than that. Even the lightest is still coming in at a bit over 1M.
fschutze · 4h ago
I realize the Ampere Altra Q features the Armv8.2-A ISA. Does anybody know if there are chips with Armv8.6-A (or above), or even SVE, that one can buy? I did some research but couldn't find any.
adrian_b · 1h ago
Armv8.6-A is almost the same as Armv9.1-A, except that a few features are not mandatory.
There have been no consumer chips with Armv9.1-A, but only with Armv9.0-A and with Armv9.2-A. The only CPU with Armv8.6-A that has ever been announced publicly was the now obsolete Neoverse N2. Neoverse N2 has been skipped by Amazon and I do not know if any other major cloud vendor has used it.
So what you are really looking for are CPUs with Armv9.2-A (i.e. a superset of Armv8.6-A), i.e. with Cortex-A520, Cortex-A720, Cortex-X4, Cortex-A725 or Cortex-X925.
There are many smartphones launched last year or this year with these CPU cores, but apart from them the list of choices is short: either the very cheap Radxa Orion O6 (Cortex-A720 based), which is fine but has immature software, or a very expensive NVIDIA DGX development system (Cortex-X925 based; $4000 from NVIDIA or $3000 from ASUS), or one of the latest Apple computers, which support Armv8.7-A (no SVE, but they do have SME).
For the latest Qualcomm CPUs, I have no idea what ISA is supported by them, because Qualcomm always hides very deeply any technical information about their products.
If all you care about is the CPU, then a mid-range Android smartphone in the $400-$500 price range could be a better development system, especially if its USB Type C connector supports USB 3.0 and DisplayPort, like some Motorola Edge models, allowing you to use an external monitor and a docking station.
If you also care about testing together with some standard desktop/server peripherals, the mini-ITX motherboard of Radxa Orion O6 is more appropriate, but encountering bugs in some of its Linux device drivers is likely, which may slow down the development until they are handled.
nubinetwork · 4h ago
The Radxa Orion O6 claims to be the first Arm v9 system available.
maz1b · 5h ago
I've always wondered why there isn't a bigger market / offering for dedicated servers with Ampere at their heart (apart from Hetzner).
If anyone knows of any, let me know!
jychang · 1h ago
Oracle OCI?
Their Ampere A1 free tier is pretty good: a 4-core ARM web server with 24 GB of RAM for free.
maz1b · 1h ago
Are there any alternatives to a big cloud provider or Hetzner?
ozgrakkurt · 5h ago
A lot of software is built and optimized for x86, and EPYC processors are really good, so it is hard to get into ARM. I don’t think that many companies use it.
adev_ · 4h ago
> A lot of software is built and optimized for x86, and EPYC processors are really good, so it is hard to get into ARM. I don’t think that many companies use it.
That is just not true.
Nowadays, most OSS software and most server-side software will run without any hitch on armv8.
A tremendous amount of work has been done to speed up common software on armv8, partially due to the popularity of mobile as a platform but also due to the emergence of ARM servers (Graviton / Neoverse) in the major cloud providers' infrastructure.
p_l · 3h ago
However, it's hard to get into ARM other than using cloud offerings.
Because those cloud offerings have handled for you the problematic case of ARM generally operating as "closed platform" even when everything is open source.
On a PC server, you usually only hit issues if you want to play with something more exotic on either the software or the hardware side. A bog-standard Linux setup is trivial to integrate.
On ARM, even though there's finally UEFI available, I recall that even a few years ago there were issues with things like interrupt controller support - and that kind of reputation persists and definitely makes it harder for on-prem ARM to percolate.
It also does not help that you need to go for pretty pricey systems to avoid vendor lock-in at the firmware compatibility level - or had to, until quite recently.
rixed · 2h ago
Why is it hard to get a Mac or a Pi?
p_l · 1h ago
The Pi is relatively underpowered (quite underpowered, even), has a proprietary boot system, and similarly isn't exactly good with things you might want in a professional server (there are some boards using compute modules that provide them as add-ons, but it's generally not a given). It's also I/O starved.
A Mac is similarly an issue of a proprietary system with no BMC support. Running one in a rack is always going to be at least a partially half-baked solution. Additionally, you're heavily limited in OS support (for all that I love what Asahi has done, it does not mean you can install, let's say, RHEL on it, even in a virtual machine - because M-series chips do not support the 64kB page size which became the standard on ARM64 installs in the cloud; for example RHEL defaults to it, and it was quite a pain to deal with in a company using MacBooks).
So you end up "shopping" for something that actually matches server hardware, and it gets expensive and sometimes non-trivial, because the ARM server market was (and probably still is) not quite friendly to casually buying a rackmount server with ARM CPUs at affordable prices. Hyperscalers have completely different setups where they can easily tank the complexity costs, because they can bother with customized hardware all the way to custom ASICs that provide management, I/O, and the hypervisor boot and control path (like AWS Nitro).
One option is to find a VAR that actually sells ARM servers and not just appliances that happen to use ARM inside, but that's honestly a level of complexity (and pricing) above what many smaller companies want.
So if you're on a budget, it's either cloud(-ish) solutions, or maybe one of your engineers can be spared to spend a considerable amount of time building a server from parts that will resemble something production quality.
delfinom · 22m ago
Have you ever tried to run a Mac "professionally" in the role of a server?
It's absolute garbage. For the last few years you can't even run daemons on a Mac without having a user actually log in on boot. And that's just the beginning of how bad they are.
And don't get me wrong, I'm not shitting on Macs here, but Apple does not intend for them to be used as servers in the slightest.
Someone · 4h ago
If you use AWS, lots of software can easily be run on Graviton, and lots of companies do that.
https://www.theregister.com/2023/08/08/amazon_arm_servers/:
“Bernstein's report estimates that Graviton represented about 20 percent of AWS CPU instances by mid-2022”
And that’s three years ago. Graviton instances are cheaper than (more or less) equivalent x86 ones on AWS, so I think it’s a safe bet that number has gone up since.
baq · 3h ago
yeah, if you're running a node backend, the changes are cosmetic at best (unless you're running chrome to generate pdfs or whatever). easiest 20% saved ever. if I were Intel or AMD I would have been very afraid of this... years ago.
imtringued · 1h ago
I was scared of ARM taking over in 2017 (e.g. Windows being locked down to just the Microsoft store) and 8 years later literally nothing happened.
zxexz · 5h ago
I don’t think a lot of companies realize they are using it. At three companies now, I’ve witnessed core microservices migrate to ARM seamlessly, due to engineering being under direct pressure to “reduce cloud spend”. The terrifying (and amazing) bit is that moving to ARM was enough to get finance off engineering’s back in all cases.
M0r13n · 2h ago
I am running an ARM64 build of Ubuntu on my MacBook Air using Multipass. I've never had a problem due to missing support/optimisation for ARM - at least I didn't notice any. I even noticed that build times were faster on this virtualised machine than they were natively on my previous Tuxedo laptop, which had an Intel i7 that was a couple of years old. Although I credit this speed mostly to the sheer horsepower of the newest Apple chips.
moffkalast · 3h ago
They're slow and the arch is less compatible? Arm cores in web hosting are typically known as the shit-tier.
I think the main use case for these is some sort of Android build farm, as a CI/CD pipeline with testing of different OS versions and general app building, since they don't have to emulate arm.
dijit · 2h ago
Well, I've run some Ampere Altra ARM machines in my studio so I can speak to this;
A) No, you can't use ARM as an Android build farm, as Android's build tools only work on x86 (go figure).
B) Ampere Altra runs faster for throughput than x86 on the same lithography and clock frequency; I can't imagine how they'd be slower for web, it's not my experience with these machines under test. Maybe virtualisation has issues (I ran bare-metal containers - as you should).
My original intent was to use these machines as build/test clusters for our Go microservices (and I'd run ARM on GCP), but GCP was a bit too slow to roll it out and now we're too far into feature lock for any migration of that.
So I added the machines to the general pool of compute and they run bots, internal webservices etc; with Kubernetes.
The performance is extremely good, only limited by the fact we can't use them as build machines for the game due to the architecture difference - however for storage or heavy compute they really outperform the EPYC Milan machines which are also on a 7nm lithography.
zozbot234 · 1h ago
> No, you can't use ARM as an Android build farm, as Android's build tools only work on x86 (go figure).
Does qemu-user solve that, or are there special requirements due to JIT and the like that qemu-user can't support?
fschutze · 5h ago
I've never bought used computer parts. Are these parts generally reliable for ~2 years when bought used?
magicalhippo · 1h ago
The main failure points in electronics are by far power supplies and batteries.
Non-polymer electrolytic capacitors can dry out, but just about all decent modern motherboards switched to polymer-based ones years ago.
My current NAS is my previous desktop, which I bought in 2015. I tended to keep my desktop on 24/7 due to services, and my NAS as well, so it's been running more or less continuously since then. It's on its second PSU, but apart from that it's chugging along.
I've been using older computer parts like this for a long time, and reliability increased markedly after they switched to polymer caps.
Modern higher-end GPUs, due to their immense power requirements, can have components fail, typically in the voltage regulation. Often this can be fixed relatively cheaply.
If buying a used desktop I'd check that it works, that it looks good inside (no dust bunnies etc.), and that the seller seems legit, and I'd throw a new PSU in there once I bought it.
throw-qqqqq · 2h ago
I haven’t bought new hardware since I was a teenager. Second hand is cheap and good for the environment. I never received a broken part and everything has worked reliably for me.
2-3 years is not a lot. My daily driver laptop is from 2011 and still going strong.
Sure, there are “lemons” out there, but there are also a lot of people who just replace their hardware often.
nisa · 2h ago
I concur. I've been doing this for almost all my technical equipment and mobile phones and have never had a problem. For important/expensive things you can buy from refurbished stores that offer a 1-year warranty in the EU.
amelius · 2h ago
Are you still using the same battery?
throw-qqqqq · 1h ago
Haha yes, but it doesn’t last for more than 20-30 minutes now. It used to be 7-8 hours for the first five-six years, then dropped off.
I also only buy used phones (I don’t have high requirements) and as with laptops, batteries are the “weak link” - as you correctly point out.
A brand new battery for my laptop can be had for ~30-65 USD though, and the battery is easy to replace (doesn’t even require a screwdriver). I never use it untethered anymore, so I don’t bother.
amelius · 22m ago
Ha, ok. I sometimes read that old batteries pose a physical risk, but I thankfully haven't experienced that. Maybe something to keep in mind though.
I'd like to see some numbers on it.
pabs3 · 4h ago
My current computer is from more than 10 years ago, and I found it in a dumpster. Works fine.
cornichon622 · 5h ago
Built a gaming desktop for a friend almost 2 years ago; used GPU and CPU (and maybe a few other things too), and everything's going great. It helps that our local Craigslist offers efficient buyer protection.
Server-side, I also bought used Xeons for an old box and recertified 10TB Exos drives. No issues there either.
The HDDs are a bit of a gamble, but for anything else I can only encourage you to buy used!
throwaway2037 · 2h ago
> It helps that our local Craigslist offers efficient buyer protection.
What does this mean?
avhception · 5h ago
I regularly buy used hardware. It fails when it fails, same as the new stuff. Is there a higher probability? Possibly, but at the small sample sizes I'm at I can't feel the difference. Feels random either way.
theandrewbailey · 2h ago
I work at an e-waste recycling company. People throwing out old but still working servers, desktops, and laptops is pretty common. Companies regularly decommission and throw out their IT assets after some number of years, even if the stuff still works (which most of it still does).
mrheosuper · 5h ago
Those are server-grade parts; it's normal for them to work 10 years continuously.
ekianjo · 5h ago
Used professional hardware (servers, workstations) is made to higher quality standards, so it lasts fairly long.
Cthulhu_ · 3h ago
Plus if they're from a data center, they will have been in a cooled, filtered, and stable space for their lifetime, vs a desktop that may have been in a dusty room getting moved or kicked from time to time.
tasuki · 3h ago
Also they're good for heating your home.
NexRebular · 1h ago
Unless you are running SUN CoolThreads(tm) servers!
burnt-resistor · 6h ago
1341 PLN / 371 USD isn't "cheap" for 25% more cores. That's almost double the price.
Q64-22 on eBay (US) for $150-200 USD / 542-723 PLN:
https://www.ebay.com/itm/365380821650
https://www.ebay.com/itm/365572689742
25% more cores and 36% more clock. It amounts to paying 85% more for 70% more perf. Not too bad.
szszrk · 5h ago
He clearly refers to that and states they did not respond.
Also, CPU was hardly the biggest cost here.
timzaman · 5h ago
Off-topic, but I'm so confused why this is top1 on my HN? Just a pretty normal build?
eqvinox · 5h ago
It's not "your" HN, HN doesn't do algorithmic/per-user ranking. (Ed.: Actually a refreshing breath of wide social cohesion on a platform, IMHO. We have enough platforms that create bubbles for you.)
It's top1 on everyone's HN because a sufficient number of people (including myself) thought it a nice writeup about fat ARM systems.
baq · 5h ago
I haven’t been following hardware for a while, granted, but this is the first time I see a desktop build with an arm64 cpu. Didn’t know you can just… buy one.
For what it's worth, I've been using a Lenovo X13s for some 3 months now. It's not a desktop, and it took years for core components to be supported in mainline Linux, but I do use it as a daily driver now. The only thing that's still not working is the webcam.
szszrk · 5h ago
Normal ARM64 80 core system with $1000 EATX motherboard? How is this typical?
Not that much changed since this:
https://marcin.juszkiewicz.com.pl/2019/10/23/what-is-wrong-w...
tinix · 5h ago
EATX is a pretty standard server motherboard form factor.
It's not even a multiple CPU board...
This is indeed a pretty standard (and weak) ARM server build.
You can get the same family of CPU, the M128-30, with 128 3 GHz cores for under $800 USD.
You can throw two into a Gigabyte MP72-HB0 and fit it into a full tower case easily.
That'd only cost like $3,200 USD for 256 cores.
RAM is cheap, and that board could take 16 DIMMs.
If you used 16 GB DIMMs like OP, that's only 256 GB of RAM, which in a server is not that much... only one gig per core... for like $500 USD.
Maybe for a personal build this seems extravagant but it's nothing special for a server.