Why Everybody Is Losing Money On AI

62 points by speckx on 9/5/2025, 4:55:46 PM | wheresyoured.at ↗ | 70 comments

Comments (70)

candiddevmike · 2h ago
This is a really well-written article and contains references to back up the claims made. This part was mind-blowing though:

> Cursor sends 100% of their revenue to Anthropic, who then takes that money and puts it into building out Claude Code, a competitor to Cursor. Cursor is Anthropic's largest customer. Cursor is deeply unprofitable, and was that way even before Anthropic chose to add "Service Tiers," jacking up the prices for enterprise apps like Cursor.

boron1006 · 1h ago
I don’t think it’s necessarily bad to be unprofitable, but it's definitely weird to be sending 100% of your revenue to what is essentially your main competitor.
dmonitor · 1h ago
It's even weirder from Anthropic's standpoint. Your #1 customer is buying all your product to resell it at a loss.
pimeys · 1h ago
Interesting to see what will happen if Cursor goes down...
xnx · 1h ago
People will move to one of the Cursor alternatives that are as good or better?
simonw · 1h ago
> Please don't waste your breath saying "costs will come down." They haven't been, and they're not going to.

Cost to run a million tokens through GPT-3 Da-Vinci in 2022: $60

Cost to run a million tokens through GPT-5 today: $1.25
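
A quick back-of-envelope on that drop (a sketch; the list prices are as quoted above, and the implied per-year factor is my own arithmetic):

    # Implied rate of decline between the two list prices above
    old_price = 60.00   # USD per 1M tokens, GPT-3 Da-Vinci, 2022
    new_price = 1.25    # USD per 1M tokens, GPT-5, 2025
    drop = old_price / new_price   # ~48x cheaper overall
    annual = drop ** (1 / 3)       # over roughly three years
    print(f"{drop:.0f}x overall, ~{annual:.1f}x per year")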

jjfoooo4 · 50m ago
I think what we'll eventually see is frontier models getting priced dramatically more expensive (or rate limited), and more people getting pickier about what they send to frontier models vs cheaper, less powerful ones. This is already happening to some extent, with Opus being opt-in and much more restricted than Sonnet within Claude Code.

An unknown to me: are the less powerful models cheaper to serve, proportional to how much less capable they are than frontier models? One possible explanation for why e.g. OpenAI was eager to retire GPT-4 is that those older models are still money losers.

simonw · 6m ago
Everything I've seen makes me suspect that models have continually got more efficient to serve.

The strongest evidence is that the models I can run on my own laptop got massively better over the last three years, despite me keeping the same M2 64GB machine without upgrading it.

Compare original LLaMA from 2023 to gpt-oss-20b from this year - same hardware, huge difference.

The next clue is the continuing drop in API prices - at least prior to the reasoning rush of the last few months.

One more clue: OpenAI's o3 had an 80% price drop a few months ago, which I believe was due to them finding further efficiencies in serving that model at the same quality.

My hunch is that there are still efficiencies to be wrung out here. We'll be able to tell if that stops holding: API prices will stop falling over time.

cwmma · 1h ago
Yes, but due to reasoning models the same query uses VASTLY more tokens today than a couple of years ago.
simonw · 1h ago
Sure, if you enable reasoning for your prompt. A lot of prompts don't need that.
mewpmewp2 · 1h ago
I'm not making any claims as to whether AI will become profitable or when, but if there's a new tech that has high potential or is highly desirable, I think it's expected that initially money will be lost.

Simply because, strategically, if there's high long-term potential, it initially makes sense to put more money in than you get out.

Not saying that AI is this, but if you determined that you had a golden goose that would lay 10 trillion USD worth of eggs once it turned 10 years old, how much would you pay for it at auction, and what would you have to show for it for the initial 9 years?

Now what if the golden goose scaled to 10 trillion each year, linearly? In the first years, people of sound mind would overpay relative to what it makes.

pphysch · 1h ago
The issue is we're moving past the "initially" phase and people are starting to suspect that the $10T golden goose is mythical. GPT-5??
mewpmewp2 · 1h ago
I think it's more complex than that. You need to get really specific to calculate the potential value. It's entirely possible that there are 30 use cases where it's very valuable and 80 use cases where it's not, and it's unclear how these use cases are going to balance out in the future. To calculate whether all of this is over- or undervalued would require analyzing and understanding all of those use cases and their impact very carefully. There's a lot of direct and indirect value; presumably no one is currently capable of calculating all of that with anywhere near accuracy, so people are making intuition-based guesses on whatever data they can get hold of, but again - not so clear.

I personally think there are many levels of innovation still to come in robotics, and in APIs/frameworks/coding languages/infra built specifically for LLMs, to provide easier and more foolproof ways to code and do other things. It's far from played out; a lot of potential is still untapped.

patall · 1h ago
Then why don't they raise prices? If AI developers were only worth 200k a year, nobody would pay X times those salaries and development would be cheaper. Similarly, if none of the AI coding companies had free offerings, they would have less inference cost or more revenue. Yet they feel they need to offer those, likely because of competition. The article paints it as if the big companies are a factor of 2 away from profitability. Would absolutely nobody use AI if their tokens were double the price? I highly doubt that.
therein · 1h ago
Meanwhile, I stopped using AI months ago. Have no subscriptions to any of the AI services anymore. Life goes on, quality of life is pretty good, haven't suffered in any way nor do I feel like I'm missing out.
mewpmewp2 · 1h ago
Yeah, but I have also dreamed of living blissfully self-sufficient in the woods. It doesn't mean there aren't capitalists out there looking to produce and sell more, and people out there looking to buy.
watwut · 1h ago
The difference is that the choice to live out in the woods costs you. The choice not to have a phone costs you. The choice not to pay for AI at this point... does not cost you, unless you're in a special situation.
FL33TW00D · 1h ago
"Please don't waste your breath saying "costs will come down." They haven't been, and they're not going to."

Yes, every new technology has always stayed exorbitantly priced in perpetuity.

cwmma · 1h ago
He isn't saying they won't ever come down; he's saying they won't be coming down any time soon, due to structural factors he discusses in the article.
cmrdporcupine · 42m ago
There has to be a name for the fallacy where people in our profession imagine that everything in technology follows Moore's law -- even when it doesn't.

We're standing here with a kind of survivorship bias because of all the technologies we use daily that did come down in cost and make it. Plenty did not. We just forget about them.

leptons · 1h ago
Computers do get more powerful, but the price for a decent system has been about the same for a long time. $1500 got me a good system 10 years ago, but I'd still be paying $1500 for a good system today.
warkdarrior · 1h ago
The first mobile phone, the Motorola DynaTAC 8000X, was launched in 1984 for $3,995 (more than $12k in 2025 dollars). So we should expect a 12x cost reduction in LLMs over 40 years.
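
Spelled out (a rough sketch; the inflation multiplier and the ~$1k modern flagship price are my assumptions):

    # The DynaTAC analogy in numbers
    dynatac_1984 = 3995      # launch price, USD
    cpi_factor = 3.05        # rough 1984 -> 2025 inflation multiplier
    modern_phone = 1000      # assumed flagship price today, USD
    real_price = dynatac_1984 * cpi_factor    # ~$12k in 2025 dollars
    print(f"{real_price / modern_phone:.0f}x cheaper over ~40 years")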
floren · 57m ago
Adjusted for inflation, the Model T cost about $25k. A new car doesn't cost $2k today. Are LLMs phones, or cars?
km3r · 1h ago
> At this point, it's becoming obvious that it is not profitable to provide model inference, despite Sam Altman recently saying that OpenAI was.

Except the author's own provided data says it cost them $2B in inference to generate $4B in revenue. Yes, training costs push it negative, but this is tech growth 101: take losses now to grow faster, for a larger potential upside in the future.

ASinclair · 1h ago
Training costs keep exploding and several companies are providing frontier models. They'll have to continue shoveling tons of money into training just to stay in place with respect to the competition. So you can't just ignore training costs.
palata · 1h ago
> Yes training costs push it negative

But training will have to continue forever, right? Otherwise the LLM will be stuck with outdated information...

serjester · 1h ago
The big labs have 50+% margins on serving the models; the training is where they lose money. But every new model boosts OpenAI's revenue growth, which is unheard of at their size (300+% YoY). Therefore it's completely reasonable to keep doubling down and making bigger bets.

Most people miss that they have almost a billion free users waiting to be monetized. Google makes $400B a year, and it's crazy to think OpenAI can't achieve some percentage of that. Why would you slow down and let Google catch up for the sake of short-term profitability?

jjfoooo4 · 59m ago
The article claims otherwise:

> In fact, even if you remove the cost of training models from OpenAI's 2024 revenues (provided by The Information), OpenAI would still have lost $2.2 billion fucking dollars.

gdbsjjdn · 1h ago
I think Ed hits on an interesting point about the new user who spends $4 on a TODO file. Current LLM users are very enthusiastic about finding different models for different use cases and evaluating the cost-benefit of those models. But the average end user doesn't give a shit. If LLMs are going to "eat the world" they need to either be a lot better in the median case (bad prompts, bad model selection) or they need to be so cost-effective that you can farm out your query to an ensemble and choose the result dialogue-tree-style.
exe34 · 1h ago
> If LLMs are going to "eat the world" they need to either be a lot better in the median case (bad prompts, bad model selection) or they need to be so cost-effective that you can farm out your query to an ensemble and choose the result dialogue-tree-style.

LLMs have been around for two years. It took decades before the PC really took hold.

gdbsjjdn · 1h ago
Segways existed for a long time and they never took off. Zeppelins too. Not every technology automatically gets good just because time passes.
palata · 1h ago
> LLMs have been around for two years. it took decades before the PC really took hold.

But virtually everybody has been using LLMs already. How long would it have taken for the PC if everybody had had the opportunity to use one for more than a year?

wood_spirit · 1h ago
The IBM PC was an overnight success. It came less than a decade after the first “PCs” and was the hockey-stick moment. I remember x86 clones being seemingly everywhere within just a year or two.
BobbyJo · 1h ago
1) ChatGPT is nearly 3 years old, and LLMs were around before that.

2) Yes, they still have some time to fit the market better, but that doesn't change what they'll need to do to get there.

bbreier · 1h ago
Does controversy cause articles to slide on HN? I noticed that this had more points in less time than several articles ranked above it, which surprises me a bit

e.g. at the time of writing, a post about MentraOS has 11 points in 1 hour compared to this article's 51 in 53 minutes, but this is ranked 58th to Mentra's 6th

bbreier · 1h ago
It has dropped to 100th while the MentraOS post remains at 5th. Is HN pushing negative PR for AI down the ranks?
wood_spirit · 1h ago
Some people must be flagging it

Dang, can we chat about collaborative filtering bubbles please?

Jimmc414 · 1h ago
Cursor burning cash to subsidize Anthropic's losses to subsidize Amazon's compute investments is their problem, not mine.

The people writing all of these "AI is unprofitable" pieces are doing financial journalism similar to analyzing the dot-com bubble by looking at pets.com's burn rate. The infra overspend was real, as were the bankruptcies, but it was existentially foolish for a business to ignore the behavioral shift that was taking place.

I have to constantly remind myself to stop arguing and evangelizing about AI. There is a growing crowd who insist that AI is a waste of money and that AI cannot do things I'm already doing on a daily basis.

Every minute spent explaining to AI skeptics is a minute not spent actually capitalizing on the asymmetry. They don't want to hear it anyway and I have little incentive to convince anyone otherwise.

The companies bleeding money to serve AI below cost won't last, but that's all the more reason to use them now while they're cheap.

uludag · 50m ago
I think the main fear is that these products will become so enshittified and ingrained everywhere that, looking back, we'll wish we hadn't depended so much on the technology. For example, the Overton window around social media has shifted so far that it's pretty normal to hear views that social media is a net negative to society and we'd be better off without it.

Obviously the goal of these companies is to generate as much profit as possible as soon as possible. They will turn the tables eventually. The asymmetry will go in the opposite direction, maybe to the same extent that one takes advantage of the current asymmetry.

Jimmc414 · 6m ago
I don't disagree with anything you've said.
vb-8448 · 1h ago
Nice read, but I'd add an objection here: even if models don't improve any more and they raise the standard subscription to $100/month, I'd still buy it (as would a lot of other people, I guess) because I'd extract far more value from it.
patall · 1h ago
That's also what I do not get. The companies are unprofitable because of competition, not because what they do cannot be profitable.
leptons · 1h ago
If it costs more to produce a result than a customer is willing to pay, then the company will either be unprofitable (sell at a loss) or just close up shop. The cost of running LLMs is much higher than what customers are likely to want to pay, and that has nothing to do with competition from other LLM companies; it's a result of the high cost of cutting-edge hardware, infrastructure, and the massive amount of electricity they consume - so much electricity that tech companies are now building power plants to power them (which are very expensive to build). It's a massive cost, all for the hope that people will continue to accept AI slop.
BobbyJo · 1h ago
Does that get them to the TAM they need to justify current valuations though? I'd guess not.
vb-8448 · 1h ago
Obviously not, but my point is that they aren't losing money because it's intrinsically non-profitable at mass scale, but because of the market in this specific period.
panosv · 1h ago
What about Google? Does anyone have any insight into their unit economics, since they own the models and the infrastructure (including custom TPUs)? Are they doing better, or are they in the same money-losing business?
seanalltogether · 1h ago
It feels like Google should be able to come up with a revenue figure for AI search results, right? How many people do a search but don't click on any links because they just read the AI blurb, while advertisers are still charged for being visible on the page?
soneca · 1h ago
I think the most interesting part is that, in AI, software does not have zero marginal cost anymore. You can’t build once and scale to billions just by investing in infrastructure.

Still, companies like OpenAI and Twitter are doing just that. Thus losing money.

Will AI evolve to behave like regular software again, or will the business model of AI tech become closer to that of traditional non-tech companies?

What will the Walmart of AI look like?

Does SaaS with very high prices and very thin margins even work as a scalable business model?

m_a_g · 1h ago
The cost could be reduced immediately and drastically if OpenAI or Anthropic chose to do so.

By simply stopping the training of new models, profitability could be achieved the same day.

The existing models already cover substantial use cases, and there are numerous unexplored improvements beyond the LLM itself, tailored to specific use cases.

ten_hands · 1h ago
This only works if all the AI companies collude to stop training at the same time, since the company that trains the last model will have a massive market advantage. That not only seems extremely unlikely but is almost certainly illegal.
antiloper · 1h ago
Current frontier models are not good enough because they still suffer from major hallucinations, sycophancy, and context drift. So there has to be at least one more training cycle (and I have no reason to believe it will be the last; GPT-5 demonstrates that transformer architectures are hitting diminishing returns).
palata · 1h ago
> By simply stopping the training of new models, profitability can be achieved on the same day.

But then they stop being up-to-date with... the world, right?

crooked-v · 1h ago
Ah, but see, those existing use cases allow for merely finite profit, instead of the infinitely growing profit that late-stage capitalism demands.
bionhoward · 1h ago
Is this really surprising given how VC-funded capitalism works? Spend money to build amazing technology and gain market share, then eventually flip into extraction mode.

Yes, a pullback will kill some weaker companies, but not the ones with sufficient true fans. Plus, we’re talking about a wide-ranging technological revolution with unknown long-term limits and economics; you don’t just give up because you’re afraid to spend some money.

I don’t want to pay Anthropic, because I don’t trust them, but I will absolutely pay Cursor, because I trust them, and I doubt I’m alone. My Cursor usage goes to GPT-5, too, so it’s definitely not 100% Anthropic, even if I’m the only idiot using GPT-5 on Cursor.

It’s fun to innovate. Making money is a happy byproduct of value creation. Isn’t the price of success always paid in advance, anyway? Why would winning AI tech companies pack it up and stop crushing it over the long term just because they’re afraid to lose someone else’s money in the short term? Wouldn’t capitulation guarantee losses more so than continued effort?

ciconia · 1h ago
> total revenue: $4B
> compute for training models: -$3B
> compute for running models: -$2B
> employee salaries: -$700M

Though not really representative of what users of said models experience financially, at this point the question has to be asked: if AI compute is 7x more expensive than developer salaries, what's the point? I thought the whole idea was to save money on human resources...
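
Back-of-envelope on those figures (all numbers are the article's estimates, in USD billions):

    # Sanity check of the ~7x compute-to-salaries ratio above
    revenue, training, inference, salaries = 4.0, 3.0, 2.0, 0.7
    compute = training + inference
    print(f"compute / salaries: {compute / salaries:.1f}x")    # ~7.1x
    print(f"net result: {revenue - compute - salaries:+.1f}B") # -1.7B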

sejje · 1h ago
Someday (probably), a model will be trained that is better than a human at coding, and it will only need to be trained once.

It can then be used indefinitely for the cost of inference, which is cheap and will continue getting cheaper.

dpritchett · 1h ago
This sounds a whole lot like Pascal's wager (or Roko's basilisk, if you prefer) for trillionaires.
golergka · 1h ago
> OpenAI spent 50% of its revenue on inference compute costs alone

This means that they operate existing models with a very healthy 50% profit margin; that's excellent unit economics, actually. Losing money by investing more into R&D than you make is not the same as burning it by selling a dollar for 90 cents.

toddmorey · 1h ago
50% margins would actually be low and concerning for a SaaS business. What makes software an attractive business is how well it scales. The standard for SaaS has been margins of 80% or higher.

Almost all SaaS products require lengthy and generous free trials, and boy, AI compute throws a hand grenade into that PLG strategy. It’s just new economics that the industry will have to figure out.

habinero · 1h ago
Revenue minus compute is not profit lol. You still have to pay salaries and rent on your buildings, which are usually the biggest expenses.
hall0ween · 1h ago
I’m confused. If 50% of the revenue goes to inference, that means the other 50% goes into research?
toddmorey · 1h ago
My understanding is it means half of what a subscriber pays is spent just on the compute required as you chat with the models. Which leaves the other half to be divided up among salaries, marketing, R&D, etc.
umbauk · 1h ago
I mean, we know everyone is losing money on AI. I thought from the title it was going to explain why. As in, why are they choosing to lose all that money?
GuB-42 · 1h ago
The obvious reason is that they think it will pay off in the future. Google didn't start profitable, it is now one of the most profitable companies in the world.

Those who invest in money-losing AI believe that it will be the next Google and that profits will come.

Or alternatively, they hope to sell to the greater fool.

candiddevmike · 1h ago
Irrational exuberance
tristor · 1h ago
> As in, why are they choosing to lose all that money?

Executive egos and market hype

hall0ween · 1h ago
The market hype is real. Check-signers at businesses expect LLMs to have abilities that AI CEOs talk about in their interviews and conferences but that don’t exist (and are nowhere near existing).
smeeth · 1h ago
Another day, another person not getting discounted cash flow.

Models trained in 2025 don’t ship until 2026/7. That means the $3bn in 2025 training costs shows up as an expense now, while the revenue comes later. Treating that as a straight loss is just confused.

OAI’s projected $5bn 2025 loss is mostly training spend. If you don’t separate that out and set it against future revenues, you’re misreading the business.

And yes, inference gross margins are positive. No idea why the author pretends they aren’t.
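
A toy illustration of the timing mismatch (all numbers hypothetical, purely to show the accounting):

    # DCF view: 2025's training expense vs the revenue it enables later
    training_2025 = 3.0                # USD billions, expensed now
    future_revenue = [2.5, 2.5, 2.5]   # hypothetical 2026-2028 cash flows
    r = 0.10                           # assumed discount rate
    pv = sum(c / (1 + r) ** (t + 1) for t, c in enumerate(future_revenue))
    print(f"PV of future revenue: {pv:.1f}B vs expense: {training_2025:.1f}B")
    # The 2025 P&L shows a 3.0B loss; the investment view nets roughly +3.2B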

eu · 1h ago
so the bubble will burst at some point…
deeviant · 1h ago
I don't understand these posts. Do people not understand how venture capital works?

The majority of these companies know they are burning money, but more than that, they knew they would be losing money at this point and beyond. That is the play. The thesis is: AI will dominate nearly everything in the near future, and the play is to own a piece of that. Investors are willing to risk their investment for a chance at a piece of the pie.

Posts that flail around yelling about companies 'losing money', without addressing the central premise, are just wasting words.

In short: do you think AI is not going to dominate nearly everything? Great, talk about that. If you do believe it is, then talk about something other than the completely reasonable and expected state of investors and companies fighting for a piece of the pie.

As a somewhat related tangent, people seem not to understand the likely trajectory of model training/inference costs:

* Models will reach a 'good enough' point where further training will be mostly focused on adding recent data. (For specific market segments; I'm not saying we'll have a universal model anytime soon, but we'll soon have one that is 'good enough' at C++, and we might already be there.)

* Model architecture and infrastructure will improve and adapt. I work for a company that was among the first to use deep learning to control real-time kinetic processes in production scenarios. Our first production hardware was an Nvidia Jetson; we had a 200ms time budget for inference, and our first model took over 2,000ms! We released our product running under 200ms *using the same hardware* - the only differences were improvements in the cuDNN library, some driver updates, and some domain-specific improvements to our YOLO implementation. Long story short: yes, inference costs are huge, but they are also massively disruptable.

* Hardware will adapt. The Nvidia cash machine will continue; right now Nvidia hardware is optimized for a balance between training and inference, whereas the newer TPUs are tilted more towards inference. I would be surprised if other hardware companies don't force Nvidia to offer a more inference-focused solution with 2-3x cost savings at some point in the next 5 years. And for all I know, perhaps a hardware startup will disrupt Nvidia; it would be one of the most lucrative hardware plays on the planet.

Focusing on inference cost is a dead end for understanding the trajectory of AI; understanding the *capability* of AI is the answer to understanding its place in the future.