Why Is Web Performance Undervalued?

36 points by B56c · 8/11/2025, 12:11:01 PM · blaines-blog.com

Comments (74)

Apreche · 4h ago
Because it’s not a consideration on the bottom line.

If someone comes to your company and says they want to give you money to buy an advertisement, nobody in power says “no thanks, that will make our website slow.” If someone in marketing says “put this tracking garbage on our site” nobody says “no can do, too slow.” If the designers, or executives looking at the design, are enamored with something really flashy looking, nobody says “no, that will make the website slow.”

The engineers likely do complain that it will make the website slow. I have been that engineer. But they are never in a position of power to overrule other parts of the company. This is especially true if it’s not a tech company. Web performance does not show up on the earnings report.

graemep · 4h ago
> Because it’s not a consideration on the bottom line.

I would say (maybe this is what you mean by "consideration"?) that it has an impact on the bottom line, but this is not obvious and not understood by the people in charge.

mlinhares · 4h ago
It's always the same reason: the business just doesn't hire people qualified to do the job.

If in 2025 you're not a content farm, your business is to get people to buy stuff from you, and you don't have a team tracking every millisecond change in your p99 latency and page load speed across multiple devices, you're just incompetent.

bluGill · 4h ago
Businesses do hire people qualified for the job in general. However they have lots of different jobs with different qualifications needed and so they have lots of different qualifications in play.

I'm not an accountant so if I do something that negatively affects accountants I won't find out - unless what I do shows up in an audit. My company has put things in place so that it is unlikely I would accidentally do something that would show up in an audit (most of them are best practices that every company has). I do have a company credit card, and I can make other purchases on behalf of my company - but if I tried to send my brother in law a million dollars I doubt I could do that (not that I would)

As web engineers, what do you have in place so that if someone who is competent in a different area does something in the web area and breaks things, you will notice and stop them?

graemep · 3h ago
I think part of the reason is that management usually know enough about accounting to hire the right people, ask the experts in the area the right questions, and ensure someone implements best practices.

> if I tried to send my brother in law a million dollars I doubt I could do that (not that I would)

If you did it would almost certainly be noticed and you would face consequences. That is why something more complex than just a transfer (very often something very elaborate) is required for fraud.

Then again, everyone accepts that you need to do things to stop fraud, and that the trade-offs from the necessary precautions (things taking more time and effort, not being able to do some things) are worth it.

mlinhares · 3h ago
Metrics: you have to track both network latency for all critical calls and page load times from multiple devices. There are multiple services out there that let you track this information.
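
If you want to roll it yourself, the browser gives you most of it; a minimal sketch (assuming a hypothetical /rum collection endpoint):

    // Minimal real-user-monitoring sketch (hypothetical /rum endpoint).
    const report = (metric: string, value: number) =>
      navigator.sendBeacon('/rum', JSON.stringify({ metric, value }));

    // Largest Contentful Paint: roughly "when did the page look loaded".
    new PerformanceObserver((list) => {
      const entries = list.getEntries();
      const last = entries[entries.length - 1];
      if (last) report('lcp', last.startTime);
    }).observe({ type: 'largest-contentful-paint', buffered: true });

    // Navigation timing: server latency (TTFB) and full page load time.
    window.addEventListener('load', () => {
      const nav = performance.getEntriesByType('navigation')[0] as PerformanceNavigationTiming;
      report('ttfb', nav.responseStart);
      report('load', nav.loadEventEnd);
    });
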
WJW · 4h ago
Most businesses are indeed incompetent, but that's OK as it is usually not required to be all that competent to still make a lot of profit.
lesuorac · 4h ago
It seems fairly trackable though? Like money spent per second of loading time?

You do run into a weird problem where, as the site gets faster at the p99, the median speed can get worse: people who originally avoided the site over speed start to use it more often, so you get a worse p99 population than before and the old p99 creeps down into the p50. But you also have more users, so that's nice.
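
A toy illustration of that population shift (made-up numbers, just a sketch):

    // Every returning user got faster, but users who used to bounce now stick
    // around and get measured, so p50 and p99 both look worse.
    const percentile = (xs: number[], p: number): number => {
      const sorted = [...xs].sort((a, b) => a - b);
      return sorted[Math.min(sorted.length - 1, Math.floor(p * sorted.length))];
    };

    const before = [100, 120, 150, 200, 900];                      // ms, fast-device users only
    const after = [80, 90, 110, 150, 200, 600, 1500, 2500, 3000];  // same users, faster, plus returners

    console.log(percentile(before, 0.5), percentile(after, 0.5));    // 150 -> 200
    console.log(percentile(before, 0.99), percentile(after, 0.99));  // 900 -> 3000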

graemep · 3h ago
You have more users, hopefully more revenue, and a problem that can be solved by throwing connectivity and hardware at it - and you have already probably reduced your spend on one or both in making the site faster.
tossandthrow · 4h ago
If that is the case, it should be almost trivial to write up a paper that quantifies the cost of poor performance, and executives would love to read it.
danielvaughn · 4h ago
Yep.

Another issue is that you often simply aren’t given the time to make it performant. Deadlines are heavily accelerated for web products. You’re barely allotted time to fix bugs, never mind enhancements.

Most web devs don’t want to make slow sites; they’re just not given the opportunity to make fast ones.

mwcz · 4h ago
They aren't given the opportunity to, that's true. We've also been in an industry-wide performance drought for so many years that many devs don't even realize how fast websites can be.
danielvaughn · 4h ago
Yep. I recently ran an experiment where I tried appending elements to a list in rapid succession. One with near-native JS and another with React. The former was about 40 times faster.
sgarland · 4h ago
TFA mentions this [0] delightful series of articles, in which it’s calculated that every KB of JS sent to clients cost Kroger $100K/yr.

[0]: https://dev.to/tigt/making-the-worlds-fastest-website-and-ot...

donatj · 4h ago
As for the tracking code from on high, holy hell you are right. We got bought by a big company and suddenly we've got giant support panels in the bottom left and JS loading from random domains. We've got no way to keep our content security policy up to date with the changing domains because they're out of our hands.
game_the0ry · 3h ago
> Because it’s not a consideration on the bottom line.

True if you work at non-technical company, like a bank.

fridder · 3h ago
It used to matter a bit more, or at least the initial page load speed did.
codingdave · 4h ago
There is a better approach, as an engineer, to get this type of point across. Don't just reject their solution... offer a better one. If they come to you saying they want tracking on a web site, ask what goal they are trying to achieve. Ask them what costs they are paying for the service they want you to implement. And then see if you can design a server-based system that gives them the info they want, and write up a proposal for it that includes the downsides and long-term hidden costs of their solutions. Whatever they are asking you, follow that pattern - treat them like a customer (which they are), determine their needs, determine their budget, and propose solutions that give a full comparison of the options.

Worst case scenario, they say no. But often you'll at least open a dialogue and get involved in the decision making. You might even get your solution implemented. And you are definitely more likely to be consulted on future decisions (as long as you are professional and polite during the discussions).
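
To make the tracking example concrete: the server-based alternative can start as small as logging on the request path (a minimal Node sketch with a hypothetical pageviews.log; the real proposal would capture whatever the stakeholders actually need):

    // Minimal server-side pageview logging: no third-party JS shipped to visitors.
    import { createServer } from 'node:http';
    import { appendFile } from 'node:fs/promises';

    const server = createServer(async (req, res) => {
      // Record the visit on the request path instead of in the browser.
      await appendFile('pageviews.log', JSON.stringify({
        path: req.url,
        referer: req.headers.referer ?? null,
        ua: req.headers['user-agent'] ?? null,
        at: new Date().toISOString(),
      }) + '\n');

      res.writeHead(200, { 'content-type': 'text/html' });
      res.end('<h1>Hello</h1>');
    });

    server.listen(3000);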

sarchertech · 1h ago
This is 100% the correct way to do things. The tactic of never saying no but proposing better alternatives is the best way to guide stakeholders into making better technical decisions.

However, it requires a lot more mental energy (and can be riskier) than just doing the exact dumb thing the jira ticket asks for, or just saying “this is bad” (and then doing the dumb thing anyway because there’s a deadline).

Because of that, most people don’t do it, and even good engineers won’t have the energy to do it all the time.

This is a huge part of why big companies can’t produce high quality, high performance software consistently.

fsflover · 4h ago
koakuma-chan · 4h ago
What a dystopian world we live in.
moomin · 4h ago
Some anecdata for you: I used to work for a price comparison website. We had pretty good metrics on how long pages took to load and what the drop-off from page to page of the process was. It will shock you not in the least that milliseconds translate into percentages lost pretty quickly. Speed up your sign-up process and that is money in the bank.
jpdb · 4h ago
Web performance is probably/mostly valued as efficiently as it needs to be.

The numbers mentioned in the article are...quite egregious.

> Oh, Just 2.4 Megabytes. Out of a chonky 4 MB payload. Assuming they could rebuild the site to hit Alex Russell's target of 450 KB, that's conservatively $435,000,000 per year. Not too bad. And this is likely a profound underestimation of the real gain

This is not a "profound underestimation." Not by several orders of magnitude. Kroger is not going to save anywhere even remotely close to $435 million dollars by reducing their js bundle size.

Kroger had $3.6-$3.8 billion in allocated capex in the year of 2024. There is no shot javascript bundle size is ~9% of their *total* allocated capex.

I work with a number of companies of similar size and their entire cloud spend isn't $435,000,000 -- and bandwidth (or even networking all up) isn't in their top 10 line items.

A leak showed that Walmart spent $580m a year on Azure: https://www.datacenterdynamics.com/en/news/walmart-spent-580...

These numbers are so insanely inflated, I think the author needs to rethink their entire premise.

StopVibeCoding · 2h ago
it's not just their direct cost, it's also the loss of revenue. the author wasn't arguing that they could save 435 million dollars in server costs.

Instead, they were arguing that in addition to saving maybe a million or two in server costs, they would gain an additional 435 million dollars in revenue because fewer people would leave their website

nchmy · 1h ago
Bizarre that this had to be spelled out...
jerf · 4h ago
It may seem absurd that the app is costing Kroger that much, but my family's experience backs it up.

In the post-COVID era, my wife has become quite accustomed to digital shopping. We actually live closer to a Meijer, which is basically the Midwest's answer to Walmart, except it's decades older. (You may be able to thank Meijer for Super Walmarts; it's Meijer that proved out the concept of attaching a grocery store to a general superstore for Walmart, and it gave Walmart some difficulty penetrating into the Midwest, so they had to add it to compete.) Of course COVID caused a big app rush and at first everybody's app was pretty crappy, so we just stuck with the closest one.

Over time, Meijer's app slowed down pretty badly, so my wife ended up switching to Kroger. I saw a lot of Kroger bags. One of the biggest problems with the Meijer app was that trying to add a second of any item was a synchronous round-trip to a rather busy and slow server, so goodness help you if you wanted, say, 6 bananas. Going from 1 to 6 could literally take 30 seconds on the worst days. And that was just the worst issue; the whole app was generally slow and prone to failure.

But somewhere around two years ago, clearly someone at Meijer got the performance religion and cleaned up their app and website. I still wouldn't call it blazing fast, but I would call it acceptable by modern standards, and it blew away the Kroger app of the time... again, not because it was pushing 120fps with super low latency, but just because it was fairly reasonable to use. Adding five more bananas is now just tapping the button five times, and while I can still kind of see the async requests chasing each other a bit, it pretty much always ends up converging on the correct number in a couple of seconds. So my wife switched back.
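
That "tap the button five times and let the async requests converge" behaviour is basically optimistic UI with a debounced sync; a minimal sketch (hypothetical /cart endpoint, not Meijer's actual code):

    // Optimistic quantity updates with a debounced sync: the UI updates
    // instantly, the server catches up. (Hypothetical endpoint, not Meijer's code.)
    const pendingQty = new Map<string, number>();
    let timer: ReturnType<typeof setTimeout> | undefined;

    function addToCart(itemId: string, qtyEl: HTMLElement): void {
      const qty = (pendingQty.get(itemId) ?? 0) + 1;
      pendingQty.set(itemId, qty);
      qtyEl.textContent = String(qty);   // optimistic: no waiting on the network

      clearTimeout(timer);
      timer = setTimeout(async () => {   // sync once the tapping stops
        const res = await fetch('/cart', {
          method: 'PUT',
          headers: { 'content-type': 'application/json' },
          body: JSON.stringify(Object.fromEntries(pendingQty)),
        });
        const serverCart: Record<string, number> = await res.json();
        for (const [id, q] of Object.entries(serverCart)) pendingQty.set(id, q);
        qtyEl.textContent = String(pendingQty.get(itemId) ?? qty); // converge on the server's answer
      }, 300);
    }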

I don't know what Kroger's current performance is, because now that we don't have a problem we haven't been seeking solutions. So they've lost thousands of dollars of business over the years to Meijer from us.

An anecdote, of course, but I suspect a common one.

I put this out there in the hope that it will push more people into caring a bit more about performance. I think there's a fairly large range where "normal people" will use a sluggish app or website and then quietly wander away, and if you do manage to rope them into a marketing survey they won't necessarily say it's because it's slow; you'll get other rationalizations, because it isn't a fully-conscious reaction and realization for them... but nevertheless, you'll have a very, very leaky funnel, and just reading those surveys may not tell you why.

techdmn · 4h ago
My personal read on this is that everyone is still trying to recreate the "sudden success" of FAANG-like companies in their start-up phases. (Never mind how long it actually took them to become big.) Basically upper management incentivizes "big bets" that might turn into a "moon shot". Those bets are new features. You'll never get rich quick just by optimizing latency. You might get rich slowly, but how is that going to pump the stock this quarter / get me promoted?
JimDabell · 4h ago
It’s about to get worse. It doesn’t matter if you spend weeks optimising your web performance if visitors have to wait several seconds to go through a proof-of-work JavaScript widget, Cloudflare Turnstile, or a CAPTCHA to prove they aren’t AI crawlers before they can even see your site.
alerighi · 4h ago
Because these days JS frameworks are used even for things where a website with a server-side MVC framework like in the old days (in whatever language: PHP, Java, Python, etc.) would be just fine. Maybe just add some stuff like form validation in the frontend whenever needed, with jQuery or even plain JS.

Not to say that React is useless, it has its applications, but 95% of websites just shouldn't need it, and I shouldn't have to download 20+ MB of JS files just to load the homepage of a site.

Another thing to consider: most people who work in tech probably have gigabit or better internet connections. Unfortunately, the users of the website don't have this luxury, and often use either mobile (4G if lucky) connections or slow ADSL connections (fiber still hasn't reached my house, and I have a 13 Mbit ADSL line).

I hate when it takes more than 30 seconds just to load the homepage of a site (I'm looking at you, ClickUp!). It shouldn't be acceptable: just use HTTP for what it was created for, serving hypertext, and serve me hypertext. I would rather continuously load small HTML files (which is fast even on slow connections, because the latency is typically on the order of milliseconds even with ADSL) than download a full JS application each time I access a page.

knallfrosch · 4h ago
You know what web site visitors value more than a fast web site?

An auto-playing TikTok video where an influencer peddles shoes.

rossdavidh · 4h ago
1) among non-tech decision makers, it's hard to keep out excess features and excess content, because that means most people don't get their pet project onto the website. The compromise is "everything goes in", which gives everybody a successful project that they can point to (successful in the sense that it got on the website)

2) among tech decision-makers, i.e. alpha developers and young hotshots, the urge to use what FAANG uses was very strong, because that's how you get the trendy tech on your resume. Basically the same as (1), above, but for developers.

Exacerbating this is that each additional thing is only a small part of the problem: "No single raindrop believes it is responsible for the flood"

robviren · 4h ago
What am I gonna do, leave my bank because the website loaded slow? Get tickets through some other option? Order different food because it took 10 seconds to load a menu? I hate the slow web so much, but it just gets lost in the churn of life. I’d just as soon not buy something because the logo was silly or they have weeds growing in their sidewalk. I respect a well-engineered site but most don’t.
sarchertech · 1h ago
If there’s a competitor I’ll definitely switch. For things like banks the pain would have to be pretty bad.

But for things like impulse purchases from social media ads, I’ll definitely just close the site if it’s even moderately slow.

There are small slowdowns that will change your behavior in little ways that add up. Maybe you have a Walmart pickup order in, and you think oh I’ll add some ice cream for dessert tonight. If you know the process is super slow, you might just remember how slow it is and decide it’s not worth it because you don’t really need ice cream anyway.

There are tons of studies showing that wait times cost money, and that users will drop off if the load time is too long.

bluGill · 4h ago
I have ordered from someone else at times. I've given up on websites and paid 5x the price at a local store because their website was so slow. It is really hard to track the sales you don't make because the customer went elsewhere.
nchmy · 4h ago
> each KB of JavaScript sent to the client was costing the company $100,000 per year. How much is Kroger sending today? 2.4 Megabytes. Out of a chonky 4 MB payload. Assuming they could rebuild the site to hit a target of 450 KB, that's conservatively $435,000,000 per year.

This math isn't mathing for me, no matter how I slice it. Can someone help?

ben_w · 4h ago
I also think they made a mistake, but it doesn't look like a huge one?

4 MB ~= 4000 kB, (4000 kB - 450 kB) * $100,000/kB/year = $355,000,000/year

(With a bit of fiddling to get the same answer as them, I think they may have done this: (4000 kB + 350 kB) * $100,000/kB, though I wouldn't want to guess why this error happened).

nchmy · 2h ago
Hmm. But the JS is only 2.4 MB, so I'm not sure why the 4 MB is being used.

Whatever the case, these are still immense numbers. So large, in fact, that I was skeptical about them. But Kroger has $150 billion in revenue and $2.5 billion in profit. I have to figure the loss is in revenue, not profit - as profit, it would, indeed, be too high.
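
Redoing the arithmetic with just the 2.4 MB of JS, and assuming the same $100,000/kB/year figure still holds:

    (2400 kB - 450 kB) * $100,000/kB/year ≈ $195,000,000/year

Still immense, just not $435M.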

sgarland · 4h ago
Because the idea of DX was invented, and devs now cling to it like it’s the last life preserver on a sinking ship.

“If you normalize your schema, it will be 20x smaller, and you’ll eliminate an entire class of bugs.”

“Hmm, but then we’ll have to write joins in the query, so we’ll pass.”

evantbyrne · 4h ago
Results like these are only possible when engineering leads have completely lost the plot. The checkout taking many minutes longer, if true, is bad enough that I doubt the problem is purely UI bloat. Either that or the benchmark itself is cooked.
zeroCalories · 5h ago
I agree that the problem is product development, but the framing is wrong. Engineers generally have a solid intuition for what will perform well, but when UX designers and PMs with only a vague idea of how these technologies work dream up an idea and give a deadline, and the engineers are then evaluated on fulfilling those metrics, the outcome will be obvious.
torginus · 4h ago
I disagree - it's often engineering's fault. It's been pretty consistent that a decently specced server, running PHP that was written sometime last century, generating static pages, redis cache in front, and serving static content via nginx, beats the everloving pants off whatever fotm microservice SPA monstrosity modern devs tend to come up with.

The most hilarious part is when you go to pirate sites (for stuff like comics, manga or movies), and it's 100x faster and works better than the official paid-for alternative, even though I'm sure the former runs off some dude's former gamer PC in his bedroom.

extraisland · 4h ago
A lot of companies seem to architect their web apps to deal with millions of users. When in reality they may have a couple of hundred hitting the site at once.

This explodes the cost of development and it makes current web development miserable IME.

I am forced to deal with everything being totally overengineered when a Flask app with a PostgreSQL backend could probably do the job on a reasonably priced VPS.

ben_w · 4h ago
If it's done right, "millions" of users is something you can serve on racks of machines specced from the late 00s with all the commensurate sysadmin tools.

It's just that "done right" means the web pages are actually crafted to be what they need, and there's none of the modern extras like 1850 ad tracking agencies all being copied in, nor an ad server injecting just about anything…

sgarland · 4h ago
They cosplay as architecting them that way. IMO, one of the largest negative consequences to EverythingAsAService is that it allows people with little to no experience administering and tuning systems to run complex pieces of software. Sure, you can go spin up a managed Kafka cluster with a few clicks, but that means you can skip reading the docs, which means that you likely have no idea how to use it (or even when you should use it). Case in point for that: the number of people who think Kafka is a message queue.

This is also a huge contributor to the curse of Full Stack Engineering. “Full Stack” should mean that you have significant experience with and knowledge of every system and language you’re interacting with (I’ll give systems administration a pass for the sake of argument). For most web apps, that means frontend and backend, as well as some flavor of RDBMS, some flavor of caching, and likely a message queue. It likely also means you need to understand distributed systems. The number of developers I’ve met who tick all of those boxes is zero. Honestly, as soon as you include an RDBMS, it’s game over for most. Yes, you can get away with horrible things via ORMs, but as soon as you start hitting scaling limits, you’ll discover what people have known for decades: databases are tricky, and chucking everything into JSON columns with UUID PKs isn’t a good idea.

torginus · 3h ago
I used to work at a boring mid-sized finance-y company a while ago. Our stack was full Microsoft - IIS, ASP.NET, MS SQL - plus some appliance doing load balancing/API throttling stuff (not sure, I'm not a sysadmin guy). The whole thing fit in a closet and handled 10k+ requests day and night.

All our code was basically: check user auth, do some stuff with the db (perhaps call out to some external service), serve the request.

I never saw a single request above 50ms, with most being way less. Nobody ever talked about performance.

Then I moved to a fancy startup. EKS, Node.js servers, microservices of every denomination, GraphQL, every fashionable piece of tech circa 2018. I realized Node was shockingly slow, and single-threaded to boot; all that overhead added up to requests with a median latency of about 200ms, with the p99 latency measured in seconds.

Performance engineering was front and center, with dashboards, entire sprints dedicated to optimization, potential optimizations constantly considered, adding fake data and suspense to our React frontend, to hide latency etc. It was insane.

RajT88 · 4h ago
Totally agree. I did a lot of work with an eCom site (a big one; you probably see their brand name daily) for about 5 years and latency mattered a lot to them. Any extra latency was deadly, and they freaked out about latency going up by 100ms.

So then you load their site - unbelievable garbage, tons of pop-up ads for promos, video frames all over the place, high res graphics, megabytes of javascript.

The backend was just as messy, with dozens and dozens of layers that left the latency budget razor thin by the time you reached the database. Think: orders dropping when database request latency hit 10ms at the 99th percentile.

Insanity.

cosmic_cheese · 5h ago
And don’t forget that once the engineers are done, a nice thick layer of analytics junk is slathered on, plus whatever layer allows marketing/sales to make arbitrary changes at will without code. By the time you’re done with all that, even the best engineered web app has become a behemoth.

There are a few cases where the engineering side isn’t helping things though, like how the Spotify desktop app loads a full redundant set of JS dependencies for each pane since they’re each independent iframes, which they do so the teams responsible for the panes never have to interact.

extraisland · 4h ago
> Engineers generally have a solid intuition for what will perform well

I worked for about 15 years as a frontend developer. I've seen very little evidence of this being the case.

I've seen a huge number of developers (backend, frontend, doesn't matter much) do things that are really dumb, e.g. repeatedly looking up values that don't change often, or not trying to minimise roundtrips.
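
The kind of thing I mean, sketched (hypothetical /config and /products endpoints):

    // Cache a value that rarely changes instead of refetching it on every call,
    // and batch lookups instead of firing one request per id.
    interface Config { currency: string; locale: string }

    let configPromise: Promise<Config> | undefined;

    function getConfig(): Promise<Config> {
      // One roundtrip for the whole session instead of one per caller.
      configPromise ??= fetch('/config').then(r => r.json());
      return configPromise;
    }

    async function getProducts(ids: string[]): Promise<unknown[]> {
      // One batched request instead of ids.length separate ones.
      const res = await fetch(`/products?ids=${ids.join(',')}`);
      return res.json();
    }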

ben_w · 4h ago
It's not just that. I've seen plenty of technical talks where people are showing off (and a few jobs where we were required to use) stuff that's several layers of abstraction more complex than it needs to be.

Right now, I'm converting some C++ game code that is very obviously originally meant for a 68k Mac (with resource forks etc.) into vanilla JS. It's marginally easier to work with than SwiftUI + VIPER, and I'm saying that as someone who has been working on iOS apps since the first retina iPod came out, and who has only 14 months of experience with C++ and about that much, perhaps a bit less, total experience with JS since getting one of those “learn foo in 24 hours” books from WHSmith with pocket money in the late 90s.

gchamonlive · 4h ago
Maybe it's culture. Software is hard and the less unknowns you have in development the better. Frontend tools have exploded in complexity[1], but they get the job done. There are great alternatives nowadays, like phoenix liveview, but they would require full rewrites and possibly a change in language and software architecture that most teams just can't do, maybe because they are already heavily invested in whatever framework they are using or they can't afford to shift to elixir (or other languages, paradigms...).

So they stay in these marshlands of bloated UI frameworks, and they need to push updates and new features, which makes the problem worse.

We seem to try to explain everything in software in technical terms, but sometimes, at the end of the day, culture and communication play a larger role, I think. Software is built by humans after all.

[1]: https://news.ycombinator.com/item?id=34218003

extraisland · 4h ago
I totally disagree with his conclusion that companies want their website to be faster. They only care about performance issues if it causes a problem. If it does not, they couldn't care less.
bluGill · 4h ago
The problem is the problems are hard to measure. How do you know that "Joe" bought from someone else because the competitors website just "felt better". We know from UX research how speed affects how things feel. We know that customers will go elsewhere at times. We also know not everyone who visits your site will buy. I know statisticians have tools to measure the loss, but I'm not able to tell you how they do it or how accurate they are (they can give you a range).
azangru · 4h ago
> I totally disagree with his conclusion that companies want their website to be faster

But what do web developers want? What do web designers want? Some developers pride themselves on being craftsmen. They would write tests. They would design architectures. Why wouldn't they want the websites they are building to be faster?

extraisland · 4h ago
Management doesn't care about it being crafted nicely. They want a ROI. Often it isn't easy for them to see the benefit of something being more performant, or looking better. It doesn't matter to them as often they won't ever use these systems. It just needs to function acceptably.

A huge number of places are not data-driven. Therefore it is difficult to show in any way that improving service speed will improve ROI.

So even if you are a craftsmen, your colleagues aren't. They will never care, they have no incentive to, because management doesn't care.

I've totally given up on it, even though I can write fast JS code. I just don't get rewarded for it. In fact, it has been a detriment to my career.

azangru · 4h ago
Management are management. They do managementy things. They do not develop. We, developers, do. For some things, we hold ourselves up to certain standards. Why not for site performance?

> I've totally given up on it, even though I can write fast JS code. I just don't get rewarded for it. In fact, it has been a detriment to my career.

Do you write tests? They also are something that doesn't directly bring money.

extraisland · 3h ago
> Management are management. They do managementy things. They do not develop. We, developers, do. For some things, we hold ourselves up to certain standards. Why not for site performance?

I've just explained why. What part didn't you understand?

> Do you write tests? They also are something that doesn't directly bring money

I do (I like to know my code works). That doesn't mean other people will.

Much like site performance, unless there is an emphasis on quality, many developers won't bother writing tests.

I've had people copy and paste tests, then jig the code around so they got the green tick in the IDE. The feature didn't work at all. The test was complete nonsense. I have colleagues that put up PRs where the code doesn't even compile.

threetonesun · 4h ago
As a web developer who also uses web sites I care less about speed than I do usability. Most of the time I'm on a 1Gbps+ connection; all I want is for your site flows to make sense and for any actions I take to be reliable and to clearly handle errors. For things that are truly critical I want 99% of my UI to be precached by a native application, so we're only talking data (and yes, keep that data small).

There are lots of good reasons to make your website faster, but given the number of sites I've seen that fall over and die if you block Google Analytics, I don't feel that it's the biggest issue most websites have.

azangru · 3h ago
> As a web developer who also uses web sites I care less about speed than I do usability. Most of the time I'm on a 1Gbps+ connection

Sure, I get it. The same argument can be applied to web accessibility. Most frontend developers are young and healthy. Should they care about accessibility of the sites they build?

threetonesun · 2h ago
It’s not the same argument at all. Accessibility is important. What I’m saying is if you want it to be fast offload the UI to a native app, don’t even bother me with a web page. If it’s critical serve it in plain text or simple HTML. Either of those are both fast and accessible.

The idea that most websites should broadly work for people even on a 2G signal is absurd. Some should. However I’m not going to try to configure a BMW and email dealers from the middle of the woods, and I’m sure they know their target audience is not either.

extraisland · 20m ago
> It’s not the same argument at all. Accessibility is important. What I’m saying is if you want it to be fast offload the UI to a native app, don’t even bother me with a web page. If it’s critical serve it in plain text or simple HTML. Either of those are both fast and accessible.

A web page and a native app both suffer from the same issue: they frequently need to talk to a server somewhere. Sure, with a native app you aren't downloading the UI/logic each time, but it still often needs to talk to a server.

> The idea that most websites should broadly work for people even on a 2G signal is absurd.

I worked at a large company and we did optimise for some random guy in Spain on a crappy 2G/3G signal (this was a real customer). It was a good test case for how the app responded to poor bandwidth and signal. As a result, the application behaved well when the signal was poor.

Large companies such as Google pour huge resources into optimising; that's why YouTube (both their app and their mobile site) will work on a flaky connection on a train going through the countryside while something like kick.com won't.

Often it isn't the bandwidth that is the issue; it is the latency between requests and the stability of the signal. Sometimes a request fails, or the phone goes to sleep, which can suspend the browser thread. This affects higher-bandwidth connections such as 4G and 5G too.

If the website, web app, or even native app is coded poorly, you will often get into a state where you have to reload it.
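
The sort of defensive handling I mean (a sketch, not any particular site's code):

    // A timeout so requests don't hang forever on a dead radio, plus a couple of
    // retries with backoff, instead of leaving the app stuck until a full reload.
    async function fetchWithRetry(url: string, retries = 2, timeoutMs = 5000): Promise<Response> {
      for (let attempt = 0; ; attempt++) {
        try {
          const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
          if (res.ok) return res;
          throw new Error(`HTTP ${res.status}`);
        } catch (err) {
          if (attempt >= retries) throw err;
          await new Promise(resolve => setTimeout(resolve, 500 * 2 ** attempt)); // back off, then retry
        }
      }
    }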

Also, an app download can be relatively large compared to a web page. If you just want to check train times, bus times, the closing time of a shop, or similar, it will take longer to use the app because you need to download the whole thing first.

> However I’m not going to try to configure a BMW and email dealers from the middle of the woods, and I’m sure they know their target audience is not either.

Things like this do happen. I've bought vehicles from farmhouses in the middle of nowhere in the UK. Bank transfers and road tax I have literally done in someone's garden.

gjsman-1000 · 4h ago
“conservatively $435,000,000 per year”

But how much would it cost to have a few senior engineers fix it, and ensure zero mistakes or missing functionality while doing it?

Even if I were CTO of Kroger… nope. I’m not doing it. I’m not spending months of engineer effort to save $435K, unless there’s proof of greater savings.

EDIT: Yes, I terribly misread that this is million, not thousand, which makes a lot more sense even though I do not believe, even for a second, this actually costs Kroger $435M a year.

jerf · 4h ago
Vastly, vastly less than that. Even as expensive as engineers are.

And "zero" mistakes is not part of the goal. It clearly wasn't in the requirements during initial development, why would they add it afterwards?

gjsman-1000 · 4h ago
Because if there is even one mistake, the profit losses from that mistake could easily eat up the savings.

Imagine having to tell management your optimization to save $400K cost $100K in engineer time and caused a $700K outage where the “Add to cart” button sometimes didn’t work. Great job. You’re possibly fired.

(Edit: Due to my misreading, add a few digits to the outage cost.)

jerf · 4h ago
That's not how you do risk analysis. You're proving too much [1]; you're proving that no company should ever change any working system because something could go wrong.

[1]: https://en.wikipedia.org/wiki/Proving_too_much

gjsman-1000 · 4h ago
I don’t care if you can justify it in an academic sense; your company’s boardroom is going to say:

- Savings $400K

- Direct expenditures $100K

- Mistake expenditures $700K

- Net loss $400K; immediate loss $800K

And that’s it. You’re fired, replaced with someone who is better at not fixing things that ain’t broken; who wouldn’t have made this mistake in the first place. And heaven help you if your code is deployed before the Nintendo Switch 2 launch (or another major launch) when you made this mistake; or if you just ruined another company’s launch and your company’s contract with them to support it. Pointing to a Wikipedia article, musing about how risk analysis should be done, isn’t going to save your skin.

bluGill · 4h ago
If mistakes cost that much you need something in place to prevent them anyway. Because I guarantee that eventually you will need some change made (tax laws change?) and you then get that same risk.
ozgrakkurt · 2h ago
Bean counter logic example A.

The article's fault is trying to appeal to this logic at all.

Reality is you don’t know how to estimate this, I don’t know either and I doubt anyone really knows.

This kind of logic is used to push things in the direction that the person wants; it really doesn’t seem genuine.

gwd · 4h ago
> I’m not spending months of engineer effort to save $435K

Would you do it to save 1000x that much? Because you're missing 3 zeroes.

gjsman-1000 · 4h ago
That would make a lot more sense (pardon my misreading)…

… but at the same time, I don’t believe for a second this actually causes a $435M loss. There are way too many assumptions - for example, if people are ordering groceries, they are way more tolerant of delays for needs than, say, the latest TV deal.

The loss that it causes on conversion rate for a brand new company trying to get leads does not track with buying oatmeal.

StopVibeCoding · 4h ago
for a company the size of Kroger, $435M is an extremely conservative loss. If someone did proper research and determined that it's 3x, maybe 4x that much, I would believe them.
tossandthrow · 5h ago
It is not undervalued, it is just priced.

The author does not seem to understand the concept of premature optimization.

ceejayoz · 5h ago
I have gigabit internet at home, and am in Australia for a month.

It isn't premature. It's very much needed.

gchamonlive · 5h ago
What do you mean by priced? And how did you come to the conclusion that that concept is alien to the author? Do you mean to say that premature optimization by itself explains the sluggishness of websites? I think you could be a bit more clear in your comment.
Ekaros · 5h ago
Well, in the case of web technologies there is, in general, a price for any user-facing optimization... which these companies do not want to pay.