It is worth it to buy the fast CPU

140 points | ingve | 254 comments | 8/24/2025, 6:03:15 AM | blog.howardjohn.info

Comments (254)

beezle · 32m ago
Whenever I've built a new desktop I've always gone for near-top performance, with some consideration given to cache and power consumption (remember when peeps cared about that? lol).

From dual Pentium Pros to my current desktop, a Xeon E3-1245 v3 @ 3.40GHz built with 32 GB of top-end RAM in late 2012, which has only recently started to feel a little pokey, I think largely due to CPU security mitigations added to Windows over the years.

So that extra few hundred up front gets me many years extra on the backend.

ocdtrekkie · 29m ago
I think people overestimate the value of a little bump in performance. I recently built a gaming PC with a 9700X. The 9800X3D is drastically more popular, for an 18% performance bump on benchmarks but double the power draw. I rarely peg my CPU, but I am always drawing power.

Higher power draw means it runs hotter, and it stresses the power supply and cooling systems more. I'd rather go a little more modest for a system that's likely to wear out much, much slower.

rafaelmn · 2m ago
Is it really 2x, or is it 2x at max load? Since, as you say, you're not pegging the CPU, it would be interesting to compare power usage on a per-task basis, including duration. Could be that the 3D cache is really adding that much overhead even to an idle CPU.

Anyway, I've never regretted buying a faster CPU (GPU is a different story; I burned some money there on short-lived gains that were marginally relevant), but I did regret saving on one (going with the M4 Air vs the M4 Pro).

avidiax · 10h ago
Employers, even the rich FANG types, are quite penny-wise and pound-foolish when it comes to developer hardware.

Limiting the number and size of monitors. Putting speedbumps (like assessments or doctor's notes) on ergo accessories. Requiring special approval for powerful hardware. Requiring special approval for travel, and setting hotel and airfare caps that haven't been adjusted for inflation.

To be fair, I know plenty of people that would order the highest spec MacBook just to do web development and open 500 chrome tabs. There is abuse. But that abuse is really capped out at a few thousand in laptops, monitors and workstations, even with high-end specs, which is just a small fraction of one year's salary for a developer.

Aurornis · 3h ago
Every well funded startup I’ve worked for went through a period where employees could get nearly anything they asked for: New computers, more monitors, special chairs, standing desks, SaaS software, DoorDash when working late. If engineers said they needed it, they got it.

Then some period of time later they start looking at spending in detail and can’t believe how much is being spent by the 25% or so who abuse the possibility. Then the controls come.

> There is abuse. But that abuse is really capped out at a few thousand in laptops, monitors and workstations, even with high-end specs,

You would think, but in the age of $6,000 fully specced MacBook Pros, $2,000 monitors, $3,000 standing desks, $1,500 iPads with $100 Apple Pencils and $300 keyboard cases, $1,000 chairs, SaaS licenses that add up, and (if allowed) food delivery for “special circumstances” that turns into a regular occurrence, it was common to see individuals incurring expenses in the tens of thousands. It’s hard to believe if you’re a person who moderates their own expenditures.

Some people see a company policy as something meant to be exploited until a hidden limit is reached.

There also starts to be some soft fraud at scales higher than you’d imagine: When someone could get a new laptop without questions, old ones started “getting stolen” at a much higher rate. When we offered food delivery for staying late, a lot of people started staying just late enough for the food delivery to arrive while scrolling on their phones and then walking out the door with their meal.

dcrazy · 2h ago
Is it “soft fraud” when a manager at an investment bank regularly demands unreasonable productivity from their junior analysts, causing them to work late and effectively reduce their compensation rate? Only if the word “abuse” isn’t ambiguous and loaded enough for you!
AtlanticThird · 1h ago
Lying about a laptop being stolen is black and white. I'm not sure how you are trying to say that is ambiguous.

I don't know what the hell you mean by the term unreasonable. Are you under the impression that investment banking analysts do not think they will have to work late before they take the role?

margalabargala · 1h ago
> Lying about a laptop being stolen is black and white. I'm not sure how you are trying to say that is ambiguous.

I've been at startups where there's sometimes late night food served.

I've never been at a startup where there was an epidemic of lying about stolen hardware.

Staying just late enough to order dinner on the company, and theft by the employee of computer hardware plus lying about it, are not in the same category and do not happen with equal frequency. I cannot believe the parent comment presented these as the same, and is being taken seriously.

gregshap · 28m ago
The pay and working hours are extremely well known to incoming jr investment bankers
wmf · 1h ago
Working late is official company policy in investment banking.
dingnuts · 1h ago
Is this meant to be a gotcha question? Yes, unpaid overtime is fraud, and employers commit that kind of fraud probably just as regularly as employees doing the things upthread.

none of it is good lol

jasode · 1h ago
> unpaid overtime is fraud

GP was talking about salaried employees, who are legally exempt from overtime pay. There is no rigid 40-hour ceiling for salaried pay.

Salary compensation is typical for white-collar employees such as analysts in investment banking and private equity, associates at law firms, developers at tech startups, etc.

SoftTalker · 1h ago
The overtime is assumed and included in their 6-figure salaries.
Aeolun · 2h ago
Don’t you think the problem there is that you hired the wrong people?
SteveJS · 36m ago
I was trying to remember a counterexample about good hires and wasted money.

Alex St. John, Microsoft, Windows 95 era: created DirectX annnnd also built an alien spaceship.

I dimly recalled it as a friend in the games division telling me about someone getting a 5 and a 1 review score in close succession.

Facts I could find (yes, I asked an LLM):

5.0 review: Moderately supported. St. John himself hosted a copy of his Jan 10, 1996 Microsoft performance review on his blog (the file listing still exists in archives). It reportedly shows a 5.0 rating, which in that era was the rare top-box mark.

Fired a year later: Factual. In an open letter (published via GameSpot) he states he was escorted out of Microsoft on June 24, 1997, about 18 months after the 5.0 review.

Judgment Day II alien spaceship party: Well documented as a plan. St. John’s own account (quoted in Neowin, Gizmodo, and others) describes an H.R. Giger–designed alien-ship interior in an Alameda air hangar, complete with X-Files cast involvement and a Gates “head reveal” gag.

Sunk cost before cancellation: Supported. St. John says the shutdown came “a couple of weeks” before the 1996 event date, after ~$4.3M had already been spent/committed (≈$1.2M MS budget + ≈$1.1M sponsors + additional sunk costs). Independent summaries repeat this figure (“in excess of $4 million”).

So:
5.0 review — moderate evidence
Fired 1997 — factual
Alien spaceship build planned — factual
≈$4M sunk costs — supported by St. John’s own retrospective and secondary reporting

mort96 · 11m ago
As a company grows, it will undoubtedly hire some "wrong people" along the way.
michaelt · 49m ago
Well partly, yes.

But also, when I tell one of my reports to spec and order himself a PC, there should be several controls in place.

Firstly, I should give clear enough instructions that they know whether they should be spending around $600, $1500, or $6000.

Second, although my reports can freely spend ~$100 no questions asked, expenses in the $1000+ region should require my approval.

Thirdly, there is monitoring of where money is going; spending where the paperwork isn't in order gets flagged and checked. If someone with access to the company amazon account gets an above-ground pool shipped to their home, you can bet there will be questions to be answered.

spyckie2 · 1h ago
Basic statistics. You can find 10 people that will probably not abuse the system but definitely not 100.

It’s like your friend group and the time it takes to choose a place to eat. It’s not your friends, it’s the law of averages.

jayd16 · 1h ago
Maybe so, but that's not something you can really control. You can control the policy, so that's what gets done.
necovek · 51m ago
If $20k is misspent by 1 in 100 employees, that's still $200 per employee per year: peanuts, really.

Just like with "policing", I'd only focus on uncovering and dealing with abusers after the fact, not on everyone — giving most people "benefits" that instead makes them feel valued.

lukan · 1h ago
"$1,000 chairs"

Not an expert here, but from what I heard, that would be a bargain for a good office chair. And having a good chair or not - you literally feel the difference.

master_crab · 3h ago
> There also starts to be some soft fraud at scales higher than you’d imagine: When someone could get a new laptop without questions, old ones started “getting stolen” at a much higher rate. When we offered food delivery for staying late, a lot of people started staying just late enough for the food delivery to arrive while scrolling on their phones and then walking out the door with their meal.

Ehh. Neither of these are soft fraud. The former is outright law-breaking, the latter…is fine. They stayed till they were supposed to.

Aurornis · 3h ago
> the latter…is fine. They stayed till they were supposed to.

This is the soft fraud mentality: if a company offers meal delivery for people who are working late and need to eat at the office, and then people start staying late (without working) and taking the food home to eat, that’s not consistent with the policy.

It was supposed to be a consolation if someone had to (or wanted to, as occurred with a lot of our people who liked to sleep in) stay late to work. It was getting used instead for people to avoid paying out of pocket for their own dinners even though they weren’t doing any more work.

Which is why we can’t have nice things: People see these policies as an opportunity to exploit them rather than use them as intended.

ThrowawayR2 · 32m ago
Good grief, no. They got an extra hour of productive (or semi-productive; after 8 hours most people are, unsurprisingly, kind of worn down) time out of us while waiting for dinner to arrive, plus a bit of team-building as we commiserated over a meal about whatever was causing us to stay late. That more than offsets the cost of the food.

If an employee or team is not putting in the effort desired, that's a separate issue and there are other administrative processes for dealing with that.

humanrebar · 2h ago
Are you saying the mentality is offensive? Or is there a business justification I am missing?

Note that employers do this as well. A classic one is a manager setting a deadline that requires extreme crunches by employees. They're not necessarily compensating anyone more for that. Are the managers within their rights? Technically. The employees could quit. But they're shaving hours, days, and years off of employees without paying for it.

Aurornis · 2h ago
It’s basic expense fraud.

If a company policy says you can expense meals when taking clients out, but sales people started expensing their lunches when eating alone, it’s clearly expense fraud. I think this is obvious to everyone.

Yet when engineers are allowed to expense meals when they’re working late and eating at the office, and people who are neither working late nor eating at the office start expensing their meals, that’s expense fraud too.

These things are really not a gray area. It seems more obvious when we talk about sales people abusing budgets, but there’s a blind spot when we start talking about engineers doing it.

margalabargala · 1h ago
Frankly this sort of thing should be ignored, if not explicitly encouraged, by the company.

Engineers are very highly paid. Many are paid more than $100/hr if you break it down. If a salaried engineer paid the equivalent of $100/hr stays late doing anything, expenses a $25 meal, and during that time you get the equivalent of 20 minutes of work out of them (including intangibles like team bonding via chatting with coworkers or about some bug), then the company comes out ahead.

That you present the above as considered "expense fraud" is fundamentally a penny-wise, pound-foolish way to look at running a company. Like you say, it's not really a gray area. It's a feature not a bug.

alt227 · 56m ago
> Like you say, it's not really a gray area. It's a feature not a bug.

Luckily that comes down to the policy of the individual company and is not enforced by law. I am personally happy to pay engineers more so they can buy this sort of thing themselves and we don't open the company to this sort of abuse. Then it's a known cost, and the engineers can decide for themselves if they want to spend that $30 on a meal or something else.

Nemi · 30m ago
Tragedy of the Commons is a real thing. The go-to solution that most companies use is to remove all privileges for everyone. But really, this is a cultural issue. This is how company culture is lost when a company gets larger.

A better option is for leadership to enforce culture by reinforcing expectations and removing offending employees if need be, to make sure that the culture remains intact. This is a time sink, without a doubt. For leadership to take this on, it has to believe that the unmeasurable benefit of a good company culture outweighs the drag on leadership's efficiency.

Company culture will always be actively eroded in any company, and part of the job of leadership is to enforce culture so that it can be a defining factor in the company's success for as long as possible.

master_crab · 2h ago
> soft fraud mentality

This isn’t about fraud anymore. It’s about how suspiciously managers want to view their employees. That’s a separate issue (but not one directed at employees).

Aurornis · 2h ago
If a company says you have permission to spend money on something for a purpose, but employees are abusing that to spend money on something that clearly violates that stated purpose, that’s into fraud territory.

This is why I call it the soft fraud mentality: When people see some fraudulent spending and decide that it’s fine because they don’t think the policy is important.

Managers didn’t care. It didn’t come out of their budget.

It was the executives who couldn’t ignore all of the people hanging out in the common areas waiting for food to show up and then leaving with it all together, all at once. Then nothing changed after the emails reminding them of the purpose of the policy.

When you look at the large line item cost of daily food delivery and then notice it’s not being used as intended, it gets cut.

master_crab · 2h ago
This might come as a bit of a surprise to you, but most (really all) employees are in it for money. So if you are astonished that people optimize for their financial gain, that’s concerning. That’s why you implement rules.

If you start trying to tease apart the motivations people have even if they are following those rules, you are going to end up more paranoid than Stalin.

Aurornis · 2h ago
> This might come as a bit of a surprise to you

> So if you are astonished that people optimize for their financial gain, that’s concerning.

I’m not “surprised” nor “astonished” nor do you need to be “concerned” for me. That’s unnecessarily condescending.

I’m simply explaining how these generous policies come to an end through abuse.

You are making a point in favor of these policies: Many will see an opportunity for abuse and take it, so employers become more strict.

alt227 · 53m ago
> but most (really all) employees are in it for money

Yes, but some also have a moral conscience and were brought up to not take more than they need.

If you are not one of these types of people, then not taking complete advantage of an offer like free meals probably seems like an alien concept.

I try to hire more people like this; it makes for a much stronger workforce when people are not all out to get whatever they can for themselves and look out for each other's interests more.

d4mi3n · 2h ago
This is disingenuous but soft-fraud is not a term I’d use for it. Fraud is a legal term. You either commit fraud or you do not. There is no “maybe” fraud—you comply with a policy or law or you don’t.

As you mentioned, setting policy that isn’t abused is hard. But abuse isn’t fraud—it’s abuse—and abuse is its own rabbit hole that covers a lot of these maladaptive behaviors you are describing.

Aurornis · 2h ago
It’s called expense fraud.

I call the meal expense abuse “soft fraud” because people kind of know it’s fraud, but they think it’s small enough that it shouldn’t matter. Like the “eh that’s fine” commenter above: They acknowledged that it’s fraud, but also believe it’s fine because it’s not a major fraud.

If someone spends their employer’s money for personal benefit in a way that is not consistent with the policies, that is legally considered expense fraud.

There was a case local to me where someone had a company credit card and was authorized to use it for filling up the gas tank of the company vehicle. They started getting in the habit of filling up their personal vehicle’s gas tank with the card, believing that it wasn’t a big deal. Over the years their expenses weren’t matching the miles on the company vehicle and someone caught on. It went to court and the person was liable for fraud, even though the total dollar amount was low five figures IIRC. The employee tried to argue that they used the personal vehicle for work occasionally too, but personal mileage was expensed separately so using the card to fill up the whole tank was not consistent with policy.

I think people get in trouble when they start bending the rules of the expense policy thinking it’s no big deal. The late night meal policy confounds a lot of people because they project their own thoughts about what they think the policy should be, not what the policy actually is.

varjag · 2h ago
Fraud is also used colloquially and it doesn't seem we're in a court of justice rn.
baq · 3h ago
> individuals incurring expenses in the tens of thousands range

peanuts compared to their 500k TC

Aurornis · 2h ago
Very few companies pay $500K. Even at FAANG a lot of people are compensated less than that.

I do think a lot of this comment section is assuming $500K TC employees at employers with infinite cash to spend, though.

hellisothers · 1h ago
But at the FAANGy companies I’ve worked at, this issue persists: mobile engineers working on 3-year-old computers and seeing new hires compile 2x (or more) faster with their newer machines.
alt227 · 1h ago
If they care that much about compile time, they would work on a desktop instead of a laptop.
bigtechennui · 44m ago
Then the company would issue a desktop and a laptop, since they want engineers to be able to use computers in places other than their desk.
LevGoldstein · 12m ago
...and we're back to trying to convince a penny-wise pound-foolish company to buy twice the computing hardware for every developer.
pengaru · 2h ago
500k is not the average, and anyone at that level+ can get fancy hardware if they want it.
groby_b · 2h ago
One, not everybody gets 500K TC.

Two, several tens of thousands are in the 5%-10% range. Hardly "peanuts". But I suppose you'll be happy to hear "no raise for you, that's just peanuts compared to your TC", right?

incone123 · 2h ago
$3,000 standing desks?? It's some wood, metal, and motors. I got one from IKEA in about 2018 for £500 and it's still my desk today. You can get Chinese ones now for about £150.
Aurornis · 2h ago
The people demanding new top spec MacBook Pros every year aren’t the same people requesting the cheapest Chinese standing desk they can find.
incone123 · 2h ago
I can understand paying more for fast processors and so on but a standing desk just goes up and down. What features do the high end desks have that I am missing out on?
hellisothers · 1h ago
I went with Uplift desks, which are not $150 but certainly sub-$1,000. I think what I was paying for was the stability/solidity of the desk; the electronics and memory presets and stuff are probably commodified.
alt227 · 1h ago
Doesn't matter, some people just want whatever the company will spring for them.
dcrazy · 2h ago
Stability and reliability.
ffsm8 · 1h ago
Stability is a big one, but the feel of the desk itself also drives the price. You're gonna be paying a lot depending on the type of tabletop you get ($100-$1k+ just for the top).
incone123 · 40m ago
Mine is very stable. Top is just some kind of board. It took a bit of damage from my cat's claws but that's not a risk most corporate offices have.
wslh · 32m ago
Breaking news: "Trump tariffs live updates: Trump says US to tariff furniture imports following investigation"<https://finance.yahoo.com/news/live/trump-tariffs-live-updat...>
kev009 · 1h ago
Netflix, at least the Open Connect org, was still open-ended for anything adjacent to whatever NTech provided (your issued laptop and remote-working stuff). It was very easy to get "exotic" hardware. I really don't think anyone abused it. This is an existence proof for the parent comments: it's not a startup, and I don't see engineers screwing the wheels off the bus anywhere I've ever worked.

geor9e · 44m ago
I know a FAANG company whose IT department, for the last few years, has been "out of stock" for SSDs over 250GB. They claim it's a global market issue (it's not). There's constant complaining in the chats from folks who compile locally. The engineers make $300k+, so they just buy a second SSD from Amazon on their credit cards and self-install it without mentioning it to the IT dept. I've never heard a rational explanation for the "shortage" other than chronic incompetence from the team supplying engineers with laptops/desktops. Meanwhile, spinning up a 100TB cloud VM has no friction whatsoever there. It's a cushy place to work tho, so folks just accept the comically dumb aspects everyone knows about.
llbbdd · 14m ago
Always amuses me when I see someone use web development as an example like this. Web dev is very easily in the realm of game dev as far as required specs for your machine; otherwise you're probably not doing much actual web dev. If anything, engineers doing nothing but running little Java or Python servers don't need anything more than a Pi and a two-color external display to do their job.
jjmarr · 1h ago
It's not abuse to open 500 Chrome tabs if they're work-related and increase my productivity.

I am 100x more expensive than the laptop. Anything the laptop can do instead of me is something the laptop should be doing instead of me.

loeg · 3h ago
I think you're maybe underestimating the aggregate cost of totally unconstrained hardware/travel spending across tens or hundreds of thousands of employees, and overestimating the benefits. There need to be some limits or speedbumps to spending, or a handful of careless employees will spend the moon.
adverbly · 3h ago
It's the opposite.

You're underestimating the scope of time lost by losing a few percent in productivity per employee across hundreds of thousands of employees.

You want speed limits not speed bumps. And they should be pretty high limits...

loeg · 3h ago
I don't believe anyone is losing >1% productivity from these measures (at FANG employers).
xmodem · 2h ago
When Apple switched to their own silicon, I was maintaining the build systems at a scaleup.

After I saw the announcement, I immediately knew I needed to try out our workflows on the new architecture. There was just no way that we wouldn't have x86_64 as an implicit dependency all throughout our stack. I raised the issue with my manager and the corporate IT team. They acknowledged the concern but claimed they had enough of a stockpile of new Intel machines that there was no urgency and engineers wouldn't start to see the Apple Silicon machines for at least another 6-12 months.

Eventually I do get allocated a machine for testing. I start working through all the breakages but there's a lot going on at the time and it's not my biggest priority. After all, corporate IT said these wouldn't be allocated to engineers for several more months, right? Less than a week later, my team gets a ticket from a new-starter who has just joined and was allocated an M1 and of course nothing works. Turns out we grew a bit faster than anticipated and that stockpile didn't last as long as planned.

It took a few months before we were able to fix most of the issues. In that time we ended up having to scavenge under-specced machines from people in non-technical roles. The amount of completely avoidable productivity wasted on people swapping machines would have easily reached into the person-years. And of course myself and my team took the blame for not preparing ahead of time.

Budgets and expenditure are visible and easy to measure. Productivity losses due to poor budgetary decisions, however, are invisible and extremely difficult to measure.

alt227 · 48m ago
> I raised the issue with my manager and the corporate IT team.

> And of course myself and my team took the blame for not preparing ahead of time.

If your initial request was not logged so that you could retrieve it in your own defence, then I would say something is very wrong at your company.

Aeolun · 2h ago
Actually, just the time spent compiling, or waiting for other builds to finish, makes investing in the top-level MacBook Pro worth it every 3 years. I think the calculation assumed something like 1-2% of my time was spent compiling, and that I cost something like $100k per year.
BlandDuck · 3h ago
Scaling cuts both ways. You may also be underestimating the aggregate benefits of slight improvements added up across hundreds or thousands of employees.

For a single person, slight improvements added up over regular, e.g., daily or weekly, intervals compound to enormous benefits over time.

XKCD: https://xkcd.com/1205/

Retric · 3h ago
The breakeven rate on developer hardware is based on the value a company extracts, not the salary. Someone making $X/year directly has a great deal of overhead in terms of office space, managers, etc., and on top of that the company only employs them because it gains even more value.

Saving 1 second/employee/day can quickly be worth $10+/employee/year (or even several times that). But you rarely see companies optimizing their internal processes based on that kind of perceived benefit.

Water cooler placement in a cube farm comes to mind as a surprisingly valuable optimization problem.
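(For a rough sense of the arithmetic behind the 1-second figure, a back-of-the-envelope sketch; the working days and fully loaded hourly cost below are assumed round numbers, not values from the comment:)

    # Hypothetical inputs: ~230 working days/year, ~$150/hr fully loaded cost.
    seconds_saved_per_day = 1
    working_days_per_year = 230
    fully_loaded_cost_per_hour = 150.0

    hours_saved_per_year = seconds_saved_per_day * working_days_per_year / 3600
    value_per_employee_per_year = hours_saved_per_year * fully_loaded_cost_per_hour
    print(round(value_per_employee_per_year, 2))  # ~9.58, i.e. roughly $10/employee/year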

corimaith · 3h ago
The cost of a good office chair is comparable to a top tier gaming pc, if not higher.
kec · 3h ago
Not for an enterprise buying (or renting) furniture in bulk it isn’t. The chair will also easily last a decade and be turned over to the next employee if this one leaves… unlike computer hardware which is unlikely to be reused and will historically need to be replaced every 24-36 months even if your dev sticks around anyway.
SoftTalker · 2h ago
> computer hardware which is unlikely to be reused and will historically need to be replaced every 24-36 months

That seems unreasonably short. My work computer is 10 years old (which is admittedly the other extreme, and far past the lifecycle policy, but it does what I need it to do and I just never really think about replacing it).

nicoburns · 34m ago
> My work computer is 10 years old... but it does what I need it to do and I just never really think about replacing it

It depends what you're working on. My work laptop is 5 years old, and it takes ~4 minutes to do a clean compile of a codebase I work on regularly. The laptop I had before that (which would now be around 10 years old) would take ~40 minutes to compile the same codebase. It would be completely untenable for me to do the job I do with that laptop (and indeed I only started working in this area once I got this one).

varjag · 2h ago
Right, the employee with unlimited spend would want to sit in a used chair.
kec · 1h ago
That’s more or less my point from a different angle: unlimited spend isn’t reasonable, and the justification “but $other_thing is way more expensive!” is often incorrect.
oblio · 1h ago
An Aeron chair that's not been whacked with baseball bats looks pretty much the same after many, many years.
loeg · 3h ago
Are there any FANG employers unwilling to provide good office chairs? I think even cheap employers offer these.
Aeolun · 2h ago
I think my employer held a contest to see which of 4 office chairs people liked the most, then they bought the one that everyone hated. I’m not quite sure anymore what kind of reason was given.
thenewwazoo · 2h ago
There are many that won’t even assign desks, much less provide decent chairs. Amazon and LinkedIn are two examples I know from personal experience.
jacobolus · 29m ago
> highest spec MacBook just to do web development and open 500 chrome tabs. There is abuse.

Why is that abuse? Having many open browser tabs is perfectly legitimate.

Arguably they should switch from Chrome to Safari / lobby Google to care about client-side resource use, but getting as much RAM as possible also seems fine.

forgotusername6 · 2h ago
Just to do web development? I regularly go into swap running everything I need on my laptop. Ideally I'd have VS Code, webpack, and Jest running continuously. I'd also occasionally need Playwright. That's all before I open a Chrome tab.
SoftTalker · 2h ago
This explains a lot about why the modern web is the way it is.
thfuran · 1h ago
I do think a lot of software would be much better if all devs were working on hardware that was midrange five years ago and over a flaky WiFi connection.
benlivengood · 1h ago
It's straightforward to measure this: start a stopwatch every time your flow gets interrupted by waiting for compilation or by your laptop swapping to keep the IDE and browser running, and stop it once you reach flow state again.

We managed to just estimate the lost time and management (in a small startup) was happy to give the most affected developers (about 1/3) 48GB or 64GB MacBooks instead of the default 16GB.

At $100/hr minimum (assuming lost work doesn't block anyone else) it doesn't take long for the upgrades to pay off. The most affected devs were waiting an hour a day sometimes.

This applies to CI/CD pipelines too; it's almost always worth increasing worker CPU/RAM while the reduction in time is scaling anywhere close to linearly, especially because most workers are charged by the minute anyway.
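(A minimal sketch of the payback math, with made-up numbers: a hypothetical $1,000 memory-upgrade delta, the $100/hr figure above, and half an hour of waiting eliminated per day:)

    # All inputs are illustrative assumptions, not measured values.
    upgrade_cost = 1000.0        # assumed price delta for a higher-RAM configuration
    cost_per_hour = 100.0        # from the comment above
    hours_saved_per_day = 0.5    # half of the "hour a day" worst case

    days_to_break_even = upgrade_cost / (hours_saved_per_day * cost_per_hour)
    print(days_to_break_even)    # 20.0 working days, i.e. about a month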

screye · 2h ago
This is especially relevant now that Docker has made it easy to maintain local builds of the entire app (FE + BE). Factor in local AI flows and the RAM requirements explode.

I have a Whisper transcription module running at all times on my Mac. Often I'll have a local telemetry service (Langfuse) to monitor the 100s of LLM calls being made by all these models. With AI development it isn't uncommon to have multiple background agents hogging compute. I want each of them to be able to independently build, host, and test their changes. The compute load adds up quickly. And I would never push agent code to a cloud env (not even a preview env) because I don't trust them like that, and neither should you.

Anything below an M4 Pro with 64GB would be too weak for my workflow. On that point, the Mac's unified VRAM is the right approach in 2025. I used Windows/WSL devices my entire life, but their time is up.

This workflow is the first time I have needed multiple screens. Pre-agentic coding, I was happy to work on a 14-inch single-screen machine with standard ThinkPad X1 specs. But the world has changed.

oblio · 1h ago
> On that point, Mac's unified VRAM is the right approach in 2025. I used windows/wsl devices for my entire life, but their time is up.

AMD's Strix Halo can have up to 128GB of unified RAM, I think. The bandwidth is less than half the Mac one, but it's probably going to accelerate.

Windows doesn't inherently care about this part of the hardware architecture.

tgma · 9h ago
FANG is not monolithic. Amazon is famously cheap. So is Apple, in my opinion, based on what I have heard (you get whatever random refurbished hardware is available, not some standardized thing; sometimes with 8GB RAM, sometimes something nicer). Apple is also famously cheap on compensation. Back in the day they proudly said shit to the effect of "we deliberately don't pay you top of the market because you have to love Apple," to which the only valid answer is "go fuck yourself."

Google and Facebook I don't think are cheap for developers. I can speak firsthand for my past Google experience. You have to note that the company has like 200k employees and there needs to be some controls and not all of the company are engineers.

Hardware -> for the vast majority of stuff, you can build with blaze (think bazel) on a build cluster and cache, so local CPU is not as important. Nevertheless, you can easily order other stuff should you need to. Sure, if you go beyond the standard issue, your cost center will be charged and your manager gets an email. I don't think any decent manager would block you. If they do, change teams. Some powerful hardware that needs approval is blanket whitelisted for certain orgs that recognize such need.

Trips -> Google has this interesting model where you have a soft cap for trips, and if you don't hit the cap, you pocket half of the unused trip credit in your account, which you can choose to spend later when you are over the cap or want to get something slightly nicer next time. Also, they have clear and sane policies on mixing personal and corporate travel. I encourage everyone to learn about and deploy things like that in their companies. The caps are usually not unreasonable, but if you do hit them, it is again an email to your management chain, not some big deal. Never seen it blocked. If your request is reasonable and your manager is shrugging about this stuff, that should reflect on them being cheap, not on the company policy.

fmajid · 8h ago
iOS development is still mostly local which is why most of the iOS developers at my previous Big Tech employer got Mac Studios as compiler machines in addition to their MacBook Pros. This requires director approval but is a formality.

I read Google is now issuing Chromebooks instead of proper computers to non-engineers, which has got to be corrosive to productivity and morale.

SoftTalker · 2h ago
If you're not a developer and everything you need for your job runs in a browser, what's wrong with a Chromebook?
walterbell · 2h ago
> Chromebooks ... to non-engineers

"AI" (Plus) Chromebooks?

alt227 · 46m ago
> sometimes with 8GB RAM

Apple has long thought that 8GB of RAM is good enough for anything, and will continue to for some time.

beachtaxidriver · 3h ago
Google used to be so un-cheap they had a dedicated ergo lab room where you could try out different keyboards.

They eventually became so cheap they blanket paused refreshing developer laptops...

toast0 · 1h ago
Yahoo was cheap/stingy/cost-conscious as hell. They still had a well-stocked ergo team, at least for the years I was there. You'd schedule an ergo consult during new hire orientation, and you'd get a properly sized seat, your desk height adjusted if needed, etc. Lots of ergo keyboards, although I didn't see a lot of Kinesis back then.

Proper ergo is a cost-conscious move. It helps keep your employees able to work, which saves on hiring and training. It reduces medical expenses, which affects the bottom line because large companies are usually self-insured; they pay a medical insurance company only to administer the plan, not for insurance. Claims are paid from company money.

walterbell · 2h ago
Some BigCos would benefit from <Brand> version numbers to demarcate changes in corporate leadership, culture and fiscal policy.
likpok · 3h ago
The soft cap thing seems like exactly this kind of penny-foolish behavior though. I’ve seen people spend hours trying to optimize their travel to hit the cap, or dealing with the flight changes, etc., that come from the “expense the flight later” model.

All this at my company would be a call or chat to the travel agent (which, sure, kind of a pain, but they also paid for dedicated agents so wait time was generally good).

MonkeyClub · 33m ago
> Back in the day they proudly said shit to the effect of "we deliberately don't pay you top of the market because you have to love Apple" to which the only valid answer is "go fuck yourself."

So people started slacking off, because "you have to love your employees"?

PartiallyTyped · 9h ago
Not sure what you are talking about re amzn.

I have a pretty high end MacBook Pro, and that pales in comparison to the compute I have access to.

tgma · 9h ago
The OP was talking beyond just compute hardware. Stuff like this: https://www.reddit.com/r/womenintech/comments/1jusbj2/amazon...
PartiallyTyped · 9h ago
That’s fair criticism. I only corrected the hardware aspect of it all.
sincerely · 9h ago
All of OP's posts in that thread are blatantly ChatGPT output.
gruez · 4h ago
Because... em-dashes? As many others have mentioned, iOS/Mac have auto em-dashes, so it's not really a reliable indicator.
brookst · 3h ago
It’s so annoying that we’ve lost a legit and useful typographic convention just because some people think that AI overusing it means that all uses indicate AI.

Sure, I’ve stopped using em-dashes just to avoid the hassle of trying to educate people about a basic logical fallacy, but I reserve the right to be salty about it.

Snarwin · 3h ago
Several things:

1) Em-dashes

2) "It's not X, it's Y" sentence structure

3) Comma-separated list that's exactly 3 items long

gruez · 3h ago
>1) Em-dashes

>3) Comma-separated list that's exactly 3 items long

Proper typography and hamburger paragraphs are canceled now because of AI? So much for what I learned in high school English class.

>2) "It's not X, it's Y" sentence structure

This is a pretty weak point because it's n=1 (you can check OP's comment history and it's not repeated there), and that phrase is far more common in regular prose than some of the more egregious ones (eg. "delve").

AtlasBarfed · 2h ago
You sound like a generated message from a corporate reputation AI defense bot
laidoffamazon · 9h ago
How do you know someone worked at Google?

Don’t worry, they’ll tell you

createaccount99 · 8h ago
Isn't it about equal treatment? You can't buy one person everything they want, just because they have high salary, otherwise the employee next door will get salty.
hamdingers · 4h ago
I previously worked at a company where everyone got a budget of ~$2000. The only requirement was you had to get a mac (to make it easier on IT I assume), the rest was up to you. Some people bought a $2000 macbook pro, some bought a $600 mac mini and used the rest on displays and other peripherals.

Equality doesn't have to mean uniformity.

Aurornis · 3h ago
I saw this tried once and it didn’t work.

Some people would minimize the amount spent on their core hardware so they had money to spend on fun things.

So you’d have to deal with someone whose cheap 8GB computer couldn’t run the complicated integration tests, but who was typing away on a $400 custom keyboard you didn’t even know existed while listening to their AirPods Max.

toast0 · 1h ago
I mean, it looks like someone volunteered to make the product work on low-spec machines. That's needed.

I've been on teams where corporate hardware is all max-spec, 4-5 years ahead of common user hardware, and provided phones are all flagships replaced every two years. The product works great for corporate users, but not for users with earthly budgets. And they wonder how competitors swallow up the market in low-income countries.

hamdingers · 2h ago
That's probably another reason why we were limited to a set menu of computer options.
bobmcnamara · 3h ago
I've often wondered how a personal company budget would work for electrical engineers.

At one place I had a $25 no question spending limit, but sank a few months trying to buy a $5k piece of test equipment because somebody thought maybe some other tool could be repurposed to work, or we used to have one of those but it's so old the bandwidth isn't useful now, or this project is really for some other cost center and I don't work for that cost center.

Turns out I get paid the same either way.

Aeolun · 2h ago
That doesn’t matter. If I’m going to spend 40% of my time alive somewhere, you bet a requirement is that I’m not working on ridiculously outdated hardware. If you are paying me $200k a year to sit around waiting for my PC to boot up, simply because Joe Support who makes $50k would get upset, that’s just a massive waste of money.
dfxm12 · 4h ago
If we're talking about rich faang type companies, no, it's not about equal treatment. These companies can afford whatever hardware is requested. This is probably true of most companies.

Where did this idea about spiting your fellow worker come from?

loeg · 3h ago
I don't think so. I think mostly just keeping spend down in aggregate.
groby_b · 2h ago
"even the rich FANG types"

I think you wanted to say "especially". You're exchanging clearly measurable amounts of money for something extremely nebulous like "developer productivity". As long as the person responsible for spend has a clear line of view on what devs report, buying hardware is (relatively) easy to justify.

Once the hardware comes out of a completely different cost center - a 1% savings for that cost center is promotion-worthy, and you'll never be able to measure a 1% productivity drop in devs. It'll look like free money.

the__alchemist · 1h ago
Tangent: IMO a top-tier CPU is a no-brainer if you play games, run performance-sensitive software (molecular dynamics or w/e), or compile code.

Look at GPU purchasing. It's full of price games, stock problems, scalpers, 3rd-party boards with varying levels of factory overclock, and unreasonable prices. The CPU is a comparative cakewalk: go to Amazon or w/e, and buy the one with the highest numbers in its name.

AnotherGoodName · 1h ago
For games it's generally not worthwhile, since performance is almost entirely based on the GPU these days.

Almost all build guides will say ‘get midrange CPU X over high-end chip Y and put the savings toward a better GPU’.

Consoles in particular are just a decent GPU with a fairly low-end CPU these days. The Xbox One, with a 1.75GHz 8-core AMD CPU from a couple of generations ago, is still playing all the latest games.

the__alchemist · 1h ago
Anecdote: I got a massive performance (FPS) improvement in games after upgrading my CPU recently, with no GPU change.

I think that build guide doesn't currently apply, given what's going on with GPUs. It was valid in the past, and will be valid in the future, I hope!

Hikikomori · 59m ago
Depending on the game there can be a large difference. Ryzen chips with larger caches have a big benefit in single-player games with many units, like Civilization, and in most multiplayer games. It's not so much GHz speed as being able to keep most of the hot-path code and data you need in cache.
enraged_camel · 1h ago
>> For games its generally not worthwhile since the performance is almost entirely based on gpu these days.

It completely depends on the game. The Civilization series, for example, is mostly CPU-bound, which is why turns take longer and longer as the games progress.

AnotherGoodName · 56m ago
Factorio and Stellaris are others I’m aware of.

In Factorio it's an issue when you go way past the end game into the 1000+ hour megabases.

Stellaris is just poorly coded, with lots of n^2 algorithms, and can run slowly on anything once population and fleets grow a bit.

For Civilization the AI does take turns faster with a higher-end CPU, but IMHO it’s also no big deal since you spend most of your time scrolling the map and taking actions (GPU-bound perf).

I think it’s reasonable to state that the exceptions here are very exceptional.

jandrese · 1h ago
It’s not quite that simple. Often the most expensive chips trade off raw clock speed for more cores, which can be counterproductive if your game only uses 4 threads.
saati · 30m ago
The 8-core X3D chips beat the 16-core ones in almost all games, so it's not that simple.
userbinator · 9h ago
I wish developers, and I'm saying this as one myself, were forced to work on a much slower machine, to flush out those who can't write efficient code. Software bloat has already gotten worse by at least an order of magnitude in the past decade.
diggan · 2h ago
> were forced to work on a much slower machine

I feel like that's the wrong approach. It's like telling a music producer to always work with horrible (think car or phone) speakers. True, you'll get a better mix and master if you test it on the speakers you expect others to hear it through, but no one sane recommends defaulting to those for day-to-day work.

Same goes for programming. I'd lose my mind if everything was dog-slow, and I was forced to experience that just because someone thinks I'll make things faster for them if I'm forced to have a slower computer. Instead I'd just stop using my computer if the frustration ended up larger than the benefits and joy I get.

fluoridation · 2h ago
That's actually a good analogy. Bad speakers aren't just slow good speakers. If you try to mix through a tinny phone speaker you'll have no idea what the track will sound like even through halfway acceptable speakers, because you can't hear half of the spectrum properly. Reference monitors are used to have a standard to aim for that will sound good on all but the shittiest sound systems.

Likewise, if you're developing an application where performance is important, setting a hardware target and doing performance testing on that hardware (even if it's different from the machines the developers are using) demonstrably produces good results. For one, it eliminates the "it runs well on my machine" line.

SoftTalker · 2h ago
Although, any good producer is going to listen to mixes in the car (and today, on a phone) to be sure they sound at least decent, since this is how many consumers listen to their music.
diggan · 1h ago
Yes, this is exactly my point :) Just like any good software developer who doesn't know exactly where their software will run: they test on the type of device their users are likely to run it on, or at least one with similar characteristics.
ofcrpls · 2h ago
The car test has been considered a standard by mixing engineers for the past 4 decades
avidiax · 8h ago
Yeah, I recognize this all too well. There is an implicit assumption that all hardware is top-tier, all phones are flagships, all mobile internet is 5G, everyone has regular access to free WiFi, etc.

Engineers and designers should compile on the latest hardware, but the execution environment should be capped at the 10th percentile compute and connectivity at least one rotating day per week.

Employees should be nudged to rotate between Android and IOS on a monthly or so basis. Gate all the corporate software and ideally some perks (e.g. discounted rides as a ride-share employee) so that you have to experience both platforms.
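(One cheap way to approximate that kind of cap when the app under test runs in a container; a sketch only, and the image name, interface, and limits below are placeholder assumptions:)

    # Run the app with roughly "low-end laptop" resources: 2 CPUs, 4 GB RAM, no extra swap.
    docker run --cpus=2 --memory=4g --memory-swap=4g my-app:dev

    # Simulate a slow, lossy network on a Linux host (needs root; eth0 is a placeholder).
    tc qdisc add dev eth0 root netem delay 150ms loss 1%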

jacobgorm · 8h ago
If they get the latest hardware to build on the build itself will become slow too.
geocar · 3h ago
> can't write efficient code. Software bloat has already gotten worse by at least an order of magnitude in the past decade.

Efficiency is a good product goal: benchmarks and targets for improvement are easy to establish and measure, they make users happy, and thinking about how to make things faster is a good way to encourage people to read the code that's there, instead of focusing only on new features (aka code that's not there yet).

However, it doesn't sell very well: your next customer is probably not going to be impressed that your latest version is 20% faster than the last version they also didn't buy. This means that unless you have enough happy customers, you are going to have a hard time convincing yourself that I'm right, and you're going to continue to look for backhanded ways of making things better.

But reading code, and re-reading code is the only way you can really get it in your brain; it's the only way you can see better solutions than the compiler, and it's the only way you remember you have this useful library function you could reuse instead of writing more and more code; It's the only guaranteed way to stop software bloat, and giving your team the task of "making it better" is a great way to make sure they read it.

When you know what's there, your next feature will be smaller too. You might even get bonus features by making the change in the right place, instead of as close to the user as possible.

Management should be able to buy into that if you explain it to them, and if they can't, maybe you should look elsewhere...

> a much slower machine

Giving everyone laptops is also one of those things: They're slow even when they're expensive, and so developers are going to have to work hard to make things fast enough there, which means it'll probably be fine when they put it on the production servers.

I like having a big desktop[1] so my workstation can have lots of versions of my application running, which makes it a lot easier to determine which of my next ideas actually makes things better.

[1]: https://news.ycombinator.com/item?id=44501119

Using the best/fastest tools I can is what makes me faster, but my production hardware (i.e. the tin that runs my business) is low-spec because that's cheaper, and higher-spec doesn't have a measurable impact on revenue. But I measure this, and I make sure I'm always moving forward.

jayd16 · 48m ago
The beatings will continue until the code improves.

I get the sentiment, but taken literally it's counterproductive. If the business cares about perf, put it in the sprint planning. But they don't. You'll just be writing more features with more personal pain.

For what it's worth, console gamedev has solved this. You test your game on the weakest console you're targeting. This usually shakes out as a stable perf floor for PC.

Lerc · 4h ago
Perhaps the better solution would be to have the fast machine but run a pseudo-VM for just the software you are developing, one that uses up all of those extra resources on live analysis. The software runs like it is on a slower machine, but you could potentially gather plenty of info that would enable you to speed up the program for everyone.
guerrilla · 3h ago
Why make it complicated? Incentivize the shit out of it at the cultural level so developers pressure their peers. This has gotten completely out of control.
djmips · 8h ago
They shouldn't work on a slower machine - however they should test on a slower machine. Always.
zh3 · 9h ago
Develop on a fast machine, test and optimise on a slow one?
baq · 2h ago
it's absolutely the wrong approach.

software should be performance tested, but you don't want a situation where the time of a single iteration is dominated by the duration of functional tests and build time. the faster software builds and tests, the quicker solutions get delivered. if giving your developers 64GB of RAM instead of 32GB halves test and build time, you should happily spend that money.

toast0 · 1h ago
Assuming you build desktop software: you can build it on a beastly machine, but run it on a reasonable machine. Maybe local builds for special occasions, but then it's special, so you can wait.

Sure, occasionally run the software on the build machine to make sure it works on beastly machines; but let the developers experience the product on normal machines as the usual.

mft_ · 8h ago
I came here to say exactly this.

If developers are frustrated by compilation times on last-generation hardware, maybe take a critical look at the code and libraries you're compiling.

And as a sibling comment notes, absolutely all testing should be on older hardware, without question, and I'd add with deliberately lower-quality and -speed data connections, too.

sweetjuly · 3h ago
This is one of the things that cheeses me off the most about LLVM. I can't build LLVM on less than 16GB of RAM without it swapping to high heaven (and often it just gets OOM killed anyways). You'd think that LLVM needing >16GB to compile itself would be a signal to take a look at the memory usage of LLVM but, alas :)
jcranmer · 1h ago
The thing that causes you to run out of memory isn't actually anything in LLVM, it's all in ld. If you're building with debugging info, you end up pulling in all of the debug symbols for deduplication purposes during linking, and that easily takes up a few GB. Now link a dozen small programs in parallel (because make -j) and you've got an OOM issue. But the linker isn't part of LLVM itself (unless you're using lld), so there's not much that LLVM can do about it.

(If you're building with ninja, there's a cmake option to limit the parallelism of the link tasks to avoid this issue).
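(Roughly, for anyone hitting this: the LLVM CMake cache exposes a link-job cap under the Ninja generator, and plain CMake has job pools for the same purpose. A sketch from memory; double-check the option names against the CMake/LLVM docs.)

    # LLVM-specific (Ninja generator): cap concurrent link jobs so debug-info-heavy
    # links don't all run at once.
    cmake -G Ninja -DCMAKE_BUILD_TYPE=RelWithDebInfo -DLLVM_PARALLEL_LINK_JOBS=2 ../llvm

    # Generic CMake + Ninja equivalent, via job pools.
    cmake -G Ninja -DCMAKE_JOB_POOLS="link=2" -DCMAKE_JOB_POOL_LINK=link ..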

sitkack · 1h ago
Right, optimize for horrible tools so the result satisfies the bottom 20%. Counterpoint: id Software produced amazingly performant programs using top-of-the-line gear. What you're trying to do is enforce a cultural norm by hobbling the programmer's hardware. If you want fast programs, you need to make that a criterion; slow hardware isn't going to get you there.
loeg · 3h ago
I wish this hell on other developers, too. ;-)
qwertytyyuu · 3h ago
Yeah, but working with Windows, Visual Studio, and corporate security software on an 8GB machine is just pain.
nchmy · 12m ago
What's with the post title being completely incongruent with the article title? Moreover, I'm pretty sure this was not the case when it was first posted...
sgarland · 3h ago
Tangential: TIL you can compile the Linux kernel in < 1 minute (on top-spec hardware). Seems it’s been a while since I’ve done that, because I remember it being more like an hour or more.
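(For anyone who wants to reproduce the measurement on their own hardware, a minimal timing run against the default config looks roughly like this, assuming a checked-out kernel tree and an installed toolchain:)

    make defconfig
    time make -j"$(nproc)"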
bornfreddy · 3h ago
I remember I was blown away by some machine that compiled it in ~45 minutes. Pentium Pro baby! Those were the days.
sgarland · 1h ago
My memory must be faulty, then, because I was mostly building it on an Athlon XP 2000+, which is definitely a few generations newer than a Pentium Pro.

I’m probably thinking of various other packages, since at the time I was all-in on Gentoo. I distinctly remember trying to get distcc running to have the other computers (a Celeron 333 MHz and Pentium III 550 MHz) helping out for overnight builds.

Can’t say that I miss that, because I spent more time configuring, troubleshooting, and building than using, but it did teach me a fair amount about Linux in general, and that’s definitely been worth it.

JonChesterfield · 3h ago
I'd like to know why making Debian packages containing the kernel now takes substantially longer than a clean build of the kernel. That seems deeply wrong and rather reduces the joy of finding the kernel builds so quickly.
dahart · 3h ago
Spinning hard drives were soooo slow! Maybe very roughly an order of magnitude from SSDs and an order of magnitude from multi-core?
2shortplanks · 10h ago
This article skips a few important steps, like how a faster CPU will demonstrably improve developer performance.

I would agree with the idea that faster compile times can significantly improve performance. 30s is long enough for a developer to get distracted and go off and check their email, look at social media, etc. Basically, turning 30s into 3s can keep a developer in flow.

The critical thing we're missing here is how increasing the CPU speed will decrease the compile time. What if the compiler is IO bound? Or memory bound? Removing one bottleneck will get you to the next bottleneck, not necessarily get you all the performance gains you want.

mordae · 10h ago
An IO-bound compiler would be weird. Memory, perhaps, but newer CPUs also tend to be able to communicate with RAM faster, so...

I think just having LSP give you answers 2x faster would be great for staying in flow.

crinkly · 9h ago
The compiler is usually IO-bound on Windows due to NTFS, the small-files-in-MFT issue, and the lock contention problem. If you put everything on a ReFS volume it goes a lot faster.

Applies to git operations as well.

delusional · 9h ago
I wish I was compiler bound. Nowadays, with everything being in the cloud or whatever I'm more likely to be waiting for Microsoft's MFA (forcing me to pick up my phone, the portal to distractions) or getting some time limited permission from PIM.

The days when 30-second pauses for the compiler were the slowest part are long over.

1over137 · 1h ago
You must be a web developer. Doing desktop development, nothing is in the cloud for me. I'm always waiting for my compiler.
necovek · 24m ago
More likely in an enterprise company using MS tooling (AD/Entra/Outlook/Teams/Office...) with "stringent" security settings.

It gets ridiculous quickly, really.

yoz-y · 10h ago
I don’t think that we live in an era where a hardware update can bring you down to 3s from 30s, unless the employer really cheaped out on the initial buy.

Now in the tfa they compare laptop to desktop so I guess the title should be “you should buy two computers”

miiiiiike · 2h ago
Depends on the workload.

I spent a few grand building a new machine with a 24-core CPU. And, while my gcc Docker builds are MUCH faster, the core Angular app still builds a few seconds slower than on my years old MacBook Pro. Even with all of my libraries split into atoms, built with Turbo, and other optimizations.

6-10 seconds to see a CSS change make its way from the editor to the browser is excruciating after a few hours, days, weeks, months, and years.

renewiltord · 2h ago
Web development is crazy. Went from a Java/C codebase to a webdev company using TS. The latter would take minutes to build. The former would build in seconds and you could run a simulated backtest before the web app would be ready.

It blew my mind. Truly this is more complicated than trading software.

thunderfork · 1h ago
A lot of this seems to have gotten a lot better with esbuild for me, at least, and maybe tsgo will be another big speed-up once it's done...
jhanschoo · 9h ago
Important caveat that the author neglects to mention since they are discussing laptop CPUs in the same breath:

The limiting factor on high-end laptops is their thermal envelope. Get the better CPU as long as it is more power efficient. Then get brands that design proper thermal solutions.

krisroadruck · 3h ago
You simply cannot cram enough cooling and power into a laptop to have it equal a high-end desktop CPU of the same generation. There is physically not enough room. Just about the only way to even approach that would be to have liquid cooling ports out the back that you had to plug into an under-desk cooling loop, and I don't think anyone is doing that, because at that point just get a frickin' desktop computer plus all the other conveniences that come with it (discrete peripherals, multiple monitors, et cetera). I honestly do not understand why so many devs seem to insist on doing work on a laptop. My best guess is this is mostly the Apple crowd, because Apple "desktops" are for the most part just the same hardware in a larger box rather than actually a different class of machine. A little better on the thermals, but not the drastic jump you see between laptops and desktops from AMD and Intel.
necovek · 21m ago
If you have to do any travel for work, a lightweight but fast portable machine that is easy to lug around beats any productivity gains from two machines (one much faster) due to the challenge of keeping two devices in sync.
apt-apt-apt-apt · 2h ago
* Me shamefully hiding my no-fan MBA used for development... *
diminish · 9h ago
Multi-core operations like compiling C/C++ could benefit.

Single thread performance of 16-core AMD Ryzen 9 9950X is only 1.8x of my poor and old laptop's 4-core i5 performance. https://www.cpubenchmark.net/compare/6211vs3830vs3947/AMD-Ry...

I'm waiting for >1024-core ARM desktops with >1TB of unified GPU memory, to be able to run some large LLMs.

Ping me when someone builds this :)

zh3 · 9h ago
Yes, just went from an i7-3770 (12 years old!) to a 9900X, as I tend to wait for a doubling of single-core performance before upgrading (got through a lot of PCs in the 386/486 era!). It's actually only 50% faster according to cpubenchmark [0] but is twice as fast in local usage (multithread is reported about 3 times faster).

Also got a Mac Mini M4 recently, and that thing feels slow in comparison to both these systems - likely more of a UI/software thing (I only use the M4 for Xcode) than being down to raw CPU performance.

[0] https://www.cpubenchmark.net/compare/Intel-i9-9900K-vs-Intel...

fmajid · 8h ago
The M4 is amazing hardware held back by a sub-par OS. One of the biggest bottlenecks when compiling software on a Mac is notarization, where every executable you compile causes an HTTP call to Apple. In addition to being a privacy nightmare, this makes the configure step in autoconf-based packages excruciatingly slow.
gentooflux · 8h ago
They added always-connected DRM to software development, neat
fmajid · 7h ago
Exactly. They had promised to make notarization opt-out but reneged.
glitchc · 4h ago
Does this mean that compilation fails without an internet connection? If so, that's horrifying.
pierre-renaux · 32m ago
Yes, of course it does, isn't it nice?

Even better, if you want to automate the whole notarization thing, you don't have a "nice" notarize-this-thing command that blocks until it's notarized and fails if there's an issue; you send a notarization request... and wait, and then you can write a nice for/sleep/check loop in a shell script to figure out whether the notarization finished and whether it did so successfully. Of course, from time to time the error/success message changes, so that script will break every so often; have to keep things interesting.

Xcode does most of this as part of the project build - when it feels like it, that is. But if you want to run this in CI it's a ton of additional fun.

torginus · 3h ago
I jumped ahead about 5 generations of Intel when I got my new laptop, and while the performance wasn't much better, the fact that I went from a 10-pound workstation beast that sounded like a vacuum cleaner to a svelte 13-inch laptop that runs off a tiny USB-C brick and barely spins its fans, while being just as fast, made it worthwhile for me.
kaspar030 · 10h ago
> Top end CPUs are about 3x faster than the comparable top end models 3 years ago

I wish that were true, but the current Ryzen 9950X is maybe 50% faster than the two-generations-older 5950X at compilation workloads.

szatkus · 45m ago
The author used kernel compilation as a benchmark, which is weird, because for most projects the build process isn't as scalable as that (especially in the node.js ecosystem), and even less so after a full build.
tgma · 9h ago
Not even. Probably closer to 30%, and that's if you are doing actual many-core compile workloads on your critical path.
ben-schaaf · 3h ago
Phoronix has actual benchmarks: https://www.phoronix.com/review/amd-ryzen-9950x-9900x/2

It's not 3x, but it's most certainly above 1.3x. Average for compilation seems to be around 1.7-1.8x.

defanor · 9h ago
This compares a new desktop CPU to older laptop ones. There are much more complete benchmarks on more specialized websites [0, 1].

> If you can justify an AI coding subscription, you can justify buying the best tool for the job.

I personally can justify neither, but I'm not seeing how one translates into the other: is a faster CPU supposed to replace such a subscription? I thought those are more about large, closed models, and that GPUs would be more cost-effective as such a replacement anyway. And if it is not a replacement, it is quite a stretch to assume that all those who sufficiently benefit from a subscription would benefit at least as much from a faster CPU.

Besides, usually it is not simply "a faster CPU": sockets and chipsets keep changing, so that would also be a new motherboard, new CPU cooler, likely new memory, which is basically a new computer.

[0] https://www.cpubenchmark.net/

[1] https://www.tomshardware.com/pc-components/cpus

npn · 1h ago
I find it crazy that some people use only a single laptop for their dev work. Meanwhile I have 3 PCs, 5 monitors, and multiple keyboards and mice, and I still think that's not enough.

There are a lot of jobs that should run on a home server running 24/7 instead of abusing your poor laptop. Remote dedicated servers work, but the latency kills your productivity, and it gets pricey if you want a server with a lot of disk space.

callc · 33m ago
I was away from my regular desktop dev PC for multiple months recently and only used a crappy laptop for dev work. I got used to it pretty quickly.

This reminds me of starting to program, so many years ago, on a dual-core plastic MacBook.

Also, I'm very impressed by one of my coworkers working on a 13-inch laptop only. Extremely smart. He's a bigger guy, so I worry about his posture and RSI on such a small laptop.

TL;DR: I think more screen space does not scale anywhere near linearly with productivity.

blueflow · 8h ago
Dunno. I got a Ryzen 7 with 16 cores from 2021 and the modern web still doesn't render smoothly. Maybe its not the hardware?
ofalkaed · 8h ago
Right now I am on my ancient cheap laptop with some 4-core Intel and hard drive noises; the only time it has issues with webpages is when I have too many tabs open for its 4 gigs of RAM. My current laptop, a 16-core Ryzen 7 from about 2021 (X13), has never had an issue, and I have yet to have too many tabs open on it. I think you might be having an OS/browser issue.

As an aside, being on my old laptop with its hard drive, can't believe how slow life was before SSDs. I am enjoying listening to the hard drive work away and I am surprised to realize that I missed it.

dahart · 3h ago
Maybe, but I can’t repro. Do you have a GPU? What browser? What web site? How much RAM do you have, and how much is available? What else is running on your machine? What OS, and is it a work machine with corporate antivirus?
mft_ · 8h ago
As an alternative anecdote, I've got a Ryzen 7 5800X from 2021, and it's still blazingly fast for just about everything I throw at it...
fxtentacle · 9h ago
I wish I could. But most software nowadays is still limited by single core speed and that area hasn’t seen relevant growth in years.

„Public whipping for companies who don’t parallelize their code base“ would probably help more. ;)

Anyway, how many seconds does MS Teams need to boot on a top of the line CPU?

amarcheschi · 9h ago
I'm forced to use Teams and SharePoint at my university as a student and I hate every single interaction with them. I wish a curse upon their creators, and may their descendants never have a smooth user experience with any software they use.

Besides the ridiculously laggy interface, it has some functional bugs as well, such as things just disappearing for a few days and then popping up again.

GloriousMEEPT · 3h ago
You're lucky to not have experienced what came before Teams in most corporate environments.
poink · 10h ago
I generally agree you should buy fast machines, but the difference between my 5950x (bought in mid 2021. I checked) and the latest 9950x is not particularly large on synthetic benchmarks, and the real world difference for a software developer who is often IO bound in their workflow is going to be negligible

If you have a bad machine get a good machine, but you’re not going to get a significant uplift going from a good machine that’s a few years old to the latest shiny

brookst · 3h ago
5950x is such a great platform. I can’t see replacing mine for several years at least.
unethical_ban · 1h ago
I enjoy building PCs, so I've tried to justify upgrading my 5800X to a 9950X3D. But I really absolutely cannot justify it right now. I can play Doom: The Dark Ages at 90 fps in 4K. I don't need it!
the__alchemist · 1h ago
FYI, going from some Ryzen I had from 6 years ago to a 9950X made a huge impact on game frame rate: choppy to smoother-than-I-can-perceive. Plus much faster compile times, and faster code execution when using thread pools. I think it was a 3-series Ryzen, not 5. Totally worth it compared to, say, GPU costs.
Apreche · 8h ago
Too bad it's so hard to get a completely local dev environment these days. It hardly matters what CPU I have since all the intensive stuff happens on another computer.
alt227 · 43m ago
You don't run an IDE with indexing, error checking, and code completion?
DrNosferatu · 9h ago
But single core performance has been stagnant for ages!

Considering ‘Geekbench 6’ scores, at least.

So if it’s not a task massively benefiting from parallelization, buying used is still the best value for money.

bob1029 · 9h ago
Single core performance has not been stagnant. We're about double where we were in 2015 for a range of workloads. Branch prediction, OoO execution, SIMD, etc. make a huge difference.

The clock speed of a core is important and we are hitting physical limits there, but we're also getting more done with each clock cycle than ever before.

zozbot234 · 3h ago
Doubling single-core performance in 10 years amounts to a less than 10% improvement year-over-year. That will feel like "stagnant" if you're on non-vintage hardware. Of course there are improvements elsewhere that partially offset this, but there's no need to upgrade all that often.
DrNosferatu · 8h ago
I certainly will not die on this hill: my comment was motivated by recently comparing single-core Geekbench 6 scores from CPUs 10 years apart.

Care to provide some data?

TiredOfLife · 9h ago
Single core performance has tripled in the last 10 years
ksec · 3h ago
This. I just did a comparison between my MacBook Pro Early 2015 to MacBook Air M4 Early 2025.

Intel Core i5-5287U: single-core maximum wattage ~7-12W, process node 14nm, GB6 single-core ~950

Apple M4: single-core maximum wattage ~4-6W, process node 3nm, GB6 single-core ~3600

Intel 14nm = TSMC 10nm > 7nm > 5nm > 3nm

In 10 years, we got ~3.5x Single Core performance at ~50% Wattage. i.e 7x Performance per Watt with 3 Node Generation improvements.

In terms of Multi Core we got 20x Performance per Watt.

I guess that is not too bad, depending on how you look at it. Had we compared it to x86 Intel or AMD it would have been worse. I hope the M5 has something new.
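Spelling out the arithmetic with the midpoints of the ranges quoted above, which lands on the same ~7x figure:

    single-core perf:  ~3600 / ~950   ≈ 3.8x
    power draw:        ~5 W / ~9.5 W  ≈ 0.53x
    perf per watt:     3.8 / 0.53     ≈ 7x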

PartiallyTyped · 9h ago
I don't think that's true. AMD's X3D chips are evidence that it's not, with lots of benchmarks supporting this.
gnfargbl · 9h ago
OK, I'm convinced. Can someone tell me what to buy, specifically? Needs to run Ubuntu, support 2 x 4K monitors (3 would be nice), have at least 64GB RAM and fit on my desk. Don't particularly care how good the GPU is / is not.

Here's my starting point: gmktec.com/products/amd-ryzen™-ai-max-395-evo-x2-ai-mini-pc. Anything better?

JonChesterfield · 3h ago
Fastest threadripper on the market is usually a good bet. Worth considering mini-pc on vesa mount / in a cable tray + fast machine in another room.

Also, I've got a GMKtec here (a cheaper one playing thin client) and it's going to be scrapped in the near future because the monitor connections keep dropping. Framework makes a 395 Max one; that's tempting as a small single machine.

arp242 · 3h ago
Threadripper is complete overkill for most developers and hella expensive especially at the top end. May also not even be that much faster for many work-loads. The 9950X3D is the "normal top-end" CPU to buy for most people.
JonChesterfield · 1h ago
Whether ~$10k is infeasibly expensive or a bargain depends strongly on what workloads you're running. Single-threaded stuff? Sure, bad idea. Massively parallel test suites backed by way too much C++, where building it all has wound up on the dev critical path? The big machine is much cheaper than rearchitecting the build structure and porting to a non-daft language.

I'm not very enamoured with distcc style build farms (never seem to be as fast as one hopes and fall over a lot) or ccache (picks up stale components) so tend to make the single dev machine about as fast as one can manage, but getting good results out of caching or distribution would be more cash-efficient.

arp242 · 36m ago
Yes, of course it depends, which is why I used "most developers" and not "all developers". What it certainly is not is a good default option for most people, as you suggested.
zozbot234 · 3h ago
Different class of machines, the Threadripper will be heavier on multicore and less bottlenecked by memory bandwidth, which is nice for some workloads (e.g. running large local AIs that aren't going to fit on GPU). The 9950X and 9950X3D may be preferable for workloads where raw single-threaded compute and fast cache access are more important.
mortsnort · 2h ago
The computer you listed is specifically designed for running local AI inference, because it has an APU with lots of integrated RAM. If that isn't your use case then AMD 9000 series should be better.
zozbot234 · 1h ago
Not just local AI inference, plenty of broader workstation workloads will benefit from a modern APU with a lot of VRAM-equivalent.
alt227 · 42m ago
If you want 3x 4k monitors, you need to care how good the GPU is.
fmajid · 8h ago
Beelink GTR9 Pro. It has dual 10G Ethernet interfaces. And get the 128GB RAM version, the RAM is not upgradeable. It isn't quite shipping yet, though.

The absolute best would be a 9005 series Threadripper, but you will easily be pushing $10K+. The mainstream champ is the 9950X but despite being technically a mobile SOC the 395 gets you 90% of the real world performance of a 9950X in a much smaller and power efficient computer:

https://www.phoronix.com/review/amd-ryzen-ai-max-arrow-lake/...

the__alchemist · 4h ago
9950x. If you game, get that in X3D version, or a lower-numbered version in X3D.
mft_ · 8h ago
Huh, that's a really good deal at 1500 USD for the 64GB model, considering the processor it's running. (It's the same one that's in the Framework Desktop that there's been lots of noise about recently - lots of recent reviews on YouTube.)

Get the 128GB model for (currently) 1999 USD and you can play with running big local LLMs too. The 8060 iGPU is roughly equivalent to a mid-level Nvidia laptop GPU, so it's plenty to deal with a normal workload, and some decent gaming or equivalent if needed.

gnfargbl · 8h ago
Yeah, I like the look of the Framework but the (relative) price and lead times are putting me off a little.

There are also these which look similar https://www.bee-link.com/products/beelink-gtr9-pro-amd-ryzen...

arp242 · 3h ago
From what I've been able to tell from interwebz reviews, the Framework one is better/faster, as the GMKtec is thermally throttled more. Dunno about the Beelink.
zh3 · 9h ago
Multi-monitor at 4K tends to need a fast GPU just for the bandwidth, else dragging large windows around can feel quite slow (found that out running 3x 4K monitors on a low-end GPU).
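For a rough sense of the scanout bandwidth involved (assuming 60 Hz and 32-bit color; compositing and window drags add extra copies on top of this):

    one 4K frame:    3840 x 2160 x 4 bytes ≈ 33 MB
    at 60 Hz:        33 MB x 60            ≈ 2 GB/s per monitor
    three monitors:                         ≈ 6 GB/s just to refresh the screens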
JSR_FDED · 10h ago
> Desktop CPUs are about 3x faster than laptop CPUs

Maybe that’s an AMD (or even Intel) thing, but doesn’t hold for Apple silicon.

I wonder if it holds for ARM in general?

wqaatwt · 9h ago
Apple doesn’t really make desktop CPUs, though. Just very good oversized mobile ones.

For AMD/Intel laptop, desktop and server CPUs usually are based on different architectures and don’t have that much overlap.

brookst · 3h ago
What’s the difference between a M4 max and a “real” desktop processor?
Tsiklon · 1h ago
Generally, PCI-E lanes and memory bandwidth tend to be the big differences between mobile and proper desktop workstation processors.

Core count used to be a big difference, but the ARM processors in the Apple machines certainly meet the lower-end workstation parts now. To exceed that, you're spending big, big money to get high core counts in the x86 space.

Proper desktop processors have lots and lots of PCI-E Lanes. The current cream of the crop Threadripper Pro 9000 series have 128 PCI-E 5.0 Lanes. A frankly enormous amount of fast connectivity.

The M2 Ultra, the current closest workstation processor in Apple's lineup (at least in a comparable form factor, in the Mac Pro), has 32 lanes of PCI-E 4.0 connectivity, enhanced by being slotted into a PCI-E switch fabric in the Mac Pro. (This, I suspect, is actually why there hasn't been a rework of the Mac Pro to use the M3 Ultra - they'll ditch the switch fabric for direct wiring on the next one.)

Memory bandwidth is a closer call here. Using the Threadripper Pro 9000 series as an example, we have 8 channels of 6400 MT/s DDR5 ECC. According to Kingston, the bus width of DDR5 is 64 bits, so that gets us ((6400 * 64)/8) = 51,200 MB/s per channel, or 409.6 GB/s when all 8 channels are loaded.

On the M4 Max the reported bandwidth is 546 GB/s - but I'm not so certain how this is calculated, as the maths doesn't quite stack up from the information I have (8533 MT/s and a bus width of 64 bits seems to point towards 68,264 MB/s per channel; the reported speed doesn't neatly slot into those numbers).

In short the memory bandwidth bonus workstation processors traditionally have is met by the M4 Max, but PCI-E Extensibility is not.

In the Mac world, though, that's usually not a problem, as you're not able to load up a Mac Pro with a bunch of RTX Pro 6000s and have it be usable in macOS. You can, however, load your machine with some high-bandwidth NICs or HBAs, I suppose (but I've not seen what's available for this platform).

brookst · 1h ago
The M4 Max's memory bus is 512 bits wide, not 64.
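That resolves the math upthread. A quick check, assuming a 512-bit LPDDR5X interface at 8533 MT/s (treated as the whole bus rather than a single 64-bit channel):

    8533 MT/s x 512 bits / 8 = 8533 x 64 bytes ≈ 546,000 MB/s ≈ 546 GB/s

which lines up with the reported figure; the earlier estimate assumed one 64-bit channel.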
Sayrus · 10h ago
The author is talking about multi-core performance rather than single-core. Apple silicon offers only a low number of cores on desktop chips compared to what Intel or AMD offer. Ampere offers chips that are an order of magnitude faster in multi-core, but they are not exactly "desktop" chips. Still, they are a good data point to say it can be true for ARM if the offering is there.
gloxkiqcza · 9h ago
> Apple silicon only offers a low number of cores on desktop chips compared to what Intel or AMD offers.

* Apple: 32 cores (M3 Ultra)

* AMD: 96 cores (Threadripper PRO 9995WX)

* Intel: 60 cores (W‑9 3595X)

I wouldn’t exactly call that low, but it is lower for sure. On the other hand, the stated AMD and Intel CPUs are borderline server grade and wouldn’t be found in a common developer machine.

brookst · 3h ago
Yeah i9-14900 and 9950x are better comparisons, at 24 and 16 cores respectively.
ezoe · 8h ago
But does your work constantly compile the Linux kernel or encode AES-256 at more than 33 GB/s?
feelamee · 2h ago
More generally: it is worth it to pay for a good developer experience. It's not exactly about the CPU. Since you compared build times - it is worth it to make builds faster. And, happily, you often don't need a new CPU for that.
shmerl · 50m ago
Depends on what you need, but it's totally worth it if you want to spend less time say on compiling stuff.

Video processing, compression, games, etc. Anything computationally heavy directly benefits from it.

syngrog66 · 1h ago
"You should buy a faster CPU" is the post's actual title.

And an evergreen bit of advice. Nothing new to see here, kids, please move along!

jgb1984 · 8h ago
Specifically: buy a good desktop computer. I couldn't imagine working on a laptop several hours per day (even with an external screen + keyboard + mouse you're still stuck with subpar performance).
furkansahin · 8h ago
FWIW, my recent hn submission had a really good discussion on this very same topic.

https://news.ycombinator.com/item?id=44985323

mgaunard · 10h ago
I've seen more and more companies embrace cloud workstations.

It is of course more expensive but that allows them to offer the latest and greatest to their employees without needing all the IT staff to manage a physical installation.

Then your actual physical computer is just a dumb terminal.

hulitu · 9h ago
> I've seen more and more companies embrace cloud workstations.

In which movie ? "Microsoft fried movie" ? Cloud sucks big time. Not all engineers are web developers.

mysteria · 3h ago
There are big tech companies which are slowly moving their staff (from web/desktop dev to ASIC designers to HPC to finance and HR) to VDI, with the only exception being people who need a local GPU. They issue a lightweight laptop with long battery life as a dumb terminal.

The desktop latency has gotten way better over the years and the VMs have enough network bandwidth to do builds on a shared network drive. I've also found it easier to request hardware upgrades for VDIs if I need more vCPUs or memory, and some places let you dispatch jobs to more powerful hosts without loading up your machine.

fmajid · 7h ago
With tools like Blaze/Bazel (Google) or Buck2 (Meta), compilations are performed on a massively parallel server farm, and the hermetic nature of the builds ensures there are no undocumented dependencies to bite you. These are used for nearly everything at Big Tech, not just webdev.
mgaunard · 9h ago
It's for example being rolled out at my current employer, which is one of the biggest electronic trading companies in the world, mostly C++ software engineers, and research in Python. While many people still run their IDE on the dumb terminal (VSCode has pretty good SSH integration), people that use vim or the like work fully remotely through ssh.

I've also seen it elsewhere in the same industry. I've seen AWS workspaces, custom setups with licensed proprietary or open-source tech, fully dedicated instances or kubernetes pods.. All managed in a variety of ways but the idea remains the same: you log into a remote machine to do all of your work, and can't do anything without a reliable low-latency connection.

milesrout · 10h ago
Great, now every operation has 300ms of latency. Kill me
mgaunard · 9h ago
All of the big clouds have regions throughout the world so you should be able to find one less than 100ms away fairly easily.

Then realistically in any company you'll need to interact with services and data in one specific location, so maybe it's better to be colocated there instead.

datadrivenangel · 3h ago
Worse when the VPN they also force on you adds 300ms.
TiredOfLife · 8h ago
I wonder if everyone on HN has just woken from a 20 year coma.
DrNosferatu · 9h ago
You can actually do a lot with a non-congested build server.

But I would never say no to a faster CPU!

Joel_Mckay · 1h ago
Like so many things it depends on the use-case...

If you are gaming... then high-core-count chips like Epyc CPUs can actually perform worse in desktops, and are a waste of money compared to Ryzen 7/Ryzen 9 X3D CPUs. Better to budget for the best motherboard, RAM, and GPU combo supported by a CPU that ranks well in tests of your specific application. In general, a value AMD GPU can perform well if you just play games, but Nvidia RTX cards are the only option for many CUDA applications.

Check your model numbers, as marketers ruined naming conventions:

https://opendata.blender.org/

https://www.cpubenchmark.net/multithread/

https://www.videocardbenchmark.net/high_end_gpus.html

Best of luck, we have been in the "good-enough" computing age for some time =3

Rucadi · 10h ago
I've been struggling with this topic a lot. I feel the slowness every day, and the productivity loss of having slow computers: 30 minutes for something that could take 10 times less... it's horrible.
dahart · 3h ago
It is true, but also funny to think back on how slow computers used to be. Even the run-of-the-mill cheap machines today are like a million times faster than supercomputers from the 70s and 80s. We’ve always had the issue that we have to wait for our computers, even though for desktop personal computers there has been a speedup of like seven or eight orders of magnitude over the last 50 years. It could be better, but that has always been true. The things we ask computers to do grows as fast as the hardware speeds up. Why?

So, in a way, slow computers is always a software problem, not a hardware problem. If we always wrote software to be as performant as possible, and if we only ran things that were within the capability of the machine, we’d never have to wait. But we don’t do that; good optimization takes a lot of developer time, and being willing to wait a few minutes nets me computations that are a couple orders of magnitude larger than what it can do in real time.

To be fair, things have improved on average. Wait times are reduced for most things. Not as fast as hardware has sped up, but it is getting better over time.

einpoklum · 10h ago
This is quite the silly argument.

* "people" generally don't spend their time compiling the Linux kernel, or anything of the sort.

* For most daily uses, current-gen CPUs are only marginally faster than two generations back. Not worth spending a large amount of money every 3 years or so.

* Other aspects of your computer, like memory (capacity mostly) and storage, can also be perf bottlenecks.

* If, as a developer, you're repeatedly compiling a large codebase - what you may really want is a build farm rather than the latest-gen CPU on each developer's individual PC/laptop.

ralferoo · 9h ago
Just because it doesn't match your situation, doesn't make it a silly argument.

Even though I haven't compiled a Linux kernel for over a decade, I still waste a lot of time compiling. On average, each week I have 5-6 half hour compiles, mostly when I'm forced to change base header files in a massive project.

This is CPU bound for sure - I'm typically using just over half my 64GB RAM and my development drives are on RAIDed NVMe.

I'm still on a Ryzen 7 5800X, because that's what my client specified they wanted me to use 3.5 years ago. Even upgrading to the (already 3-year-old) 5950X would be a drop-in replacement and double the core count, so I'd expect about double the performance (although maybe not quite, as there may be increased memory contention). At current prices for that CPU, that upgrade would pay for itself within 1-2 weeks.
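As a rough illustration of that payback claim (the hourly rate and street price here are assumptions for the sake of the estimate, not figures from the thread):

    compile time now:    5-6 half-hour builds/week ≈ 2.5-3 h
    time saved at ~2x:   ≈ 1.25-1.5 h/week
    at, say, $150/h:     ≈ $190-225/week recovered
    5950X at ~$300-400:  pays back in roughly 1.5-2 weeks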

The reason I don't upgrade is policy - my client specified this exact CPU so that my development environment matches their standard setup.

The build farm argument makes sense in an office environment where the majority of developer machines are mostly idle most of the time. It's completely unsuitable for remote working situations where each developer has a single machine and latency and bandwidth to shared resources are poor.

magicalhippo · 9h ago
Why is the client so strict on what you use as a dev machine?
ralferoo · 5h ago
I work in game development. All the developers typically have the same spec machine, chosen at the start of the project to be fairly high end with the expectation that when the project ships it'll be a roughly mid range spec.
derelicta · 9h ago
I wonder what triggered these massive gains in CPU performance? Any major innovation I might have missed?
bsder · 10h ago
Or, perhaps, make it easier to run your stuff on a big machine over -> there.

It doesn't have to be the cloud, but having a couple of ginormous machines in a rack where the fans can run at jet engine levels seems like a no-brainer.

hulitu · 9h ago
> the top end CPU, AMD Ryzen 9 9950X

This is an "office" CPU. Workstation CPUs are called Epyc.

juped · 9h ago
Yeah. I would say, do get a better CPU, but do also research a bit deeper and really get a better CPU. Threadrippers are borderline workstation, too, though, esp. the pro SKUs.
fmajid · 7h ago
Threadrippers are workstation processors and support ECC, Epycs are servers and the 9950X is HEDT (high end desktop).
miohtama · 10h ago
An even better way to improve the quality of your computer sessions is "just use a Mac." Apple is so far ahead of the performance curve.
mgaunard · 10h ago
They have good performance, especially per watt, for a laptop.

Certainly not ahead of the curve when considering server hardware.

nine_k · 9h ago
Not just that; they have a decent GPU and a unified memory architecture, which lets you run many ML models directly on the machine with good performance.

Server hardware is not very portable. Reserving a c7i.large is about $0.14/hour, this would equal the cost of an MBP M3 64GB in about two years.

Apple have made a killer development machine, I say this as a person who does not like Apple and macOS.

cornholio · 9h ago
Apple still has quite atrocious performance per dollar. So it economically makes sense for a top-end developer or designer, but perhaps not for the entire workforce, let alone non-professional users, students, etc.
cycomanic · 8h ago
Funny thing, we just talked about this in a thread 2 days ago. Comments like this lead me to dismiss anything coming from Apple fanboys.

It's not like objective benchmarks disproving these sorts of statements don't exist.