I asked it to make a drawing of the US with every state numbered from biggest to smallest with 1 being the largest.
Maine was #89 (That is not a typo.) and Oregon was #1.
OpenAI as a company simply cannot exist without a constant influx of investor money. Burning money on every request is not a viable business model. Companies built on OpenAI/Anthropic are similarly deeply unprofitable businesses.
OpenAI needs to convert to a for-profit to get any more of the funding that SoftBank promised (it's also unclear how SoftBank itself would raise it) or to get significant cash from anyone else. Microsoft can block this and probably will.
It all reminds me of that Paddy's Dollars bit from It's Always Sunny.
"We have no money and no inventory... there's still something we can do... that's still a business somehow..."
infecto · 1h ago
Isn't it old news that the full for-profit conversion is not happening, and that they renegotiated the terms so the currently proposed PBC works as a solution because it meets the economic terms?
I have no idea if OpenAI succeeds or not, but I find arguments like yours difficult to understand. Most businesses are not using these systems to draw a map. Maybe the release of 5 is lackluster, but it does not change that there is some value in these tools today and that, ignoring R&D (which is definitely a huge cost), they run at a profit.
dylan604 · 41m ago
> ignoring R&D (which is definitely a huge cost) they run at a profit.
how can you say such a hand-wavy comment with a straight face? you can't just ignore a huge cost for a company and suddenly they are profitable. that's Enron-level moronic. without constant R&D, the company gets beat by competitors that continue R&D. the product is not "good enough" to not continue improving.
if i ignored my major costs in my finances, i could retire, but i can't go to the grocery store and walk out with a basket of food while telling them that i'm ignoring this major cost in my life.
get real
infecto · 26m ago
I don't know why so many take these discussions with such a high emotional level. Has the ability to constructively discuss a topic been lost? I know you usually respond with high emotion and brashness, but at least try to be constructive.
It’s a valid point and that’s the biggest question when it comes to the medium to long term business plan. Those R&D costs are an important part of it. My point is that since runtime is profitable there is a lot more runway to figure out how to tweak R&D spend in such a way that it becomes a viable business for the long term.
There are a lot of questions that they need to answer to get to pure profitability, but they are also the fastest-growing company in history by MAU count, with a product that you can see has a chance at becoming profitable from all sides. They may fail or become sidelined, but the hyperbole and lack of critical discussion here is disappointing.
emccue · 59m ago
The entire US stock market is propped up by big tech companies spending massively on Data Centers and GPUs for AI. OpenAI is valued higher than Netflix.
A company that can pull in single digit billions in revenue for hundreds of billions in expenses just doesn't make sense.
> Most businesses are not using these systems t̶o̶ ̶d̶r̶a̶w̶ ̶a̶ ̶m̶a̶p̶.̶
FTFY
And no - while it might be obvious from the outside that it probably won't happen, the continued existence of the business is still predicated on conversion to a for-profit. They don't just need the amount of money they've already "raised", they need to keep getting more money forever.
infecto · 55m ago
FTFY? Cute, but you’re arguing against a strawman. My point wasn’t that companies are using GPT to draw maps, it’s that dismissing the tech based on one goofy output ignores the far more common, revenue-generating use cases already in production.
As for “single-digit billions in revenue vs. hundreds of billions in expenses,” that’s just bad math. You’re conflating the total AI capex from hyperscalers with OpenAI’s own P&L. Yes, training is capital-intensive, but the marginal cost to serve (especially at scale) is much lower, and plenty of deployments are already profitable on an operating basis when you strip out R&D burn.
The funding structure question is fair, and the for-profit conversion path matters, but pretending the whole business is propped up solely by infinite investor charity is just wrong.
emccue · 52m ago
Microsoft AI Revenue In 2025: $13 billion, with $10 billion from OpenAI, sold "at a heavily discounted rate that essentially only covers costs for operating the servers."
Capital Expenditures in 2025: $80 billion
---
Amazon AI Revenue In 2025: $5 billion
Capital Expenditures in 2025: $105 billion
---
Google AI Revenue: $7.7 Billion (at most)
Capital Expenditures in 2025: $75 Billion
---
Meta AI Revenue: $2bn to $3bn
Capital Expenditures In 2025: $72 Billion
---
The math is bad, but it's not "bad math."
(Numbers from here: https://www.wheresyoured.at/the-haters-gui/)
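As a rough back-of-the-envelope check on that gap, here is a minimal sketch using only the figures quoted above (all are the linked post's 2025 estimates, not official filings; the Meta number is taken as the midpoint of the stated range):

```python
# Back-of-the-envelope: how much of each company's quoted 2025 AI capex
# is covered by its quoted 2025 AI revenue. Figures ($bn) are from the
# post linked above; Meta revenue is my midpoint of the quoted $2-3bn range.
figures = {
    "Microsoft": (13.0, 80.0),   # (AI revenue, capex)
    "Amazon":    (5.0, 105.0),
    "Google":    (7.7, 75.0),
    "Meta":      (2.5, 72.0),
}

for company, (revenue, capex) in figures.items():
    print(f"{company}: ${revenue}bn revenue vs ${capex}bn capex "
          f"-> revenue covers {revenue / capex:.0%} of capex")
```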
Take the emotional level down a notch. You seemed to miss the point. Hyperscaler spend does not equate to OpenAI P&L.
anon191928 · 1h ago
Burning money worked for Uber. As long as they can IPO or get cheap debt from government friends, any valuation can work. Uber lost double-digit billions as an app with no edge or anything. It never made sense beyond $1 billion.
gtirloni · 1h ago
> no edge or anythin
I wouldn't say they had no edge. They had a huge advantage over traditional taxi companies. You can argue that a local Uber-like app could be easily implemented; that's where the investors came in, to flood the markets and ensure others couldn't compete easily.
The situation is in no way similar to OpenAI's. OpenAI truly has no edge over Anthropic and others. AGI is not magically emerging from LLMs, and they don't seem to have an alternative (nobody does, but they promised it and got the big bucks, so now it's their problem).
emccue · 1h ago
Uber's whole schtick is taking what was an already profitable business model (taxis) and running it with lower overhead/easier access.
That money they burned was on customer acquisition, building infrastructure, etc. The unit economics of paying to be driven to the airport or Benihanas was always net positive.
They weren't losing money on every customer, even paying ones. There just isn't a business model here.
xnx · 1h ago
> burning money worked for Uber.
TBD. Some people did well while Uber gave money away, but Uber is not net profitable over its lifetime.
Analemma_ · 54m ago
Uber raised something like $50 billion in debt and equity before it went public, but after 15 years of losing money, it has finally started making profits… just in time for Waymo to arrive and eat its lunch. Of course, Uber could themselves get into the self-driving game, but their entire profit story to investors relies on pushing costs away from them onto drivers; it vanishes entirely if they have to maintain their own fleet.
Uber is profitable on a cash basis, but if you’re a public investor, you got fleeced by the early-stage venture money and debtholders. I don’t think it will ever pay back what it raised.
xnx · 51m ago
Agree.
> Uber could themselves get into the self-driving game
They tried. Made a little progress, killed someone, and gave up (rightfully so).
dvfjsdhgfv · 1h ago
The way they do this in Europe is that an entrepreneur buys a fleet of cars and then gets visas for a number of folks from Bangladesh and other areas who do not own any of these cars and drive them in turns (they also sleep like 10 to an apartment, but that's a different story). The owner gets the money and distributes it to the actual drivers. Uber says they are innocent as they are not in an employer-employee relationship with any of these drivers.
This model worked for the fleet owners so far because the Saudis gave enough money that (1) the customers were happy, and (2) the cash from each ride could be divided between owners and drivers in a way the drivers complained about only to a certain extent.
But the last two years (the only profitable ones) have been much worse, both for the drivers and the fleet owners. There is still sunk cost in there, but once the cars get old enough they will need to think hard about whether to buy/lease the next batches.
riku_iki · 1h ago
Uber had limited, underpowered competition, so they could win the starvation game.
OpenAI competes with Google, which can drop $50B/yr into AI hype for a very long time.
sillyfluke · 1h ago
The $1.5M bonus-to-tech-staff announcement prior to the GPT-5 release makes even more sense now. They knew it would be difficult to manage public expectations and wanted to counter the short-term (in the best case) drop in morale in the company.
riku_iki · 56m ago
I think that $1.5M bonus is likely stock at the $500B valuation. There are other rumors that they want outsiders to be allowed to buy stock at the $500B valuation.
dvfjsdhgfv · 1h ago
"...while Uber has achieved profitability, some analyses suggest that a substantial portion of these profits may come from an increased revenue share at the expense of drivers' earnings".
So let's imagine it's 2040 and OpenAI is finally profitable. Now, Uber did this by increasing prices, firing some staff, and paying smaller wages to drivers, all while having a near-monopoly in certain areas. What realistic measures would OpenAI need to take in order to compete with, say, Google? Because I just wish them good luck with that.
Telemakhos · 1h ago
I had it create a map "in the style of a history textbook." It came up with something that looks worse than I imagined: https://pasteboard.co/3zGy5ti4hHuT.jpg
sillyfluke · 1h ago
Does it get the non-drawing written text list version right at least?
emccue · 1h ago
It regurgitates the list in text form, which is almost certainly in the training data.
But this company is valued more than Netflix. The bar should not be this low.
sillyfluke · 1h ago
Yeah, I was just curious how deep the abyss was in this instance.
hodgehog11 · 2h ago
Yes, GPT-5 is more of an iteration than anything else, and to me this says more about OpenAI than the rest of the industry. However, I think the majority of the improvements over the past year have been difficult to quantify using benchmarks. Users often talk about how certain models "feel" smarter on their particular tasks, and we won't know if the same is true for GPT-5 until people use it for a while.
The "GPT-5 will show AGI" hype was always a ridiculously high bar for OpenAI, and I would argue that the quest for that elusive AGI threshold has been an unnecessary curse on machine learning and AI development in general. Who cares? Do we really want to replace humans? We should want better and more reliable tools (like Claude Code) to assist people, and maybe cover some of the stuff nobody wants to do. This desire for "AGI" is delivering less value and causing us to put focus on creative tasks that humans actually want to do, putting added stress on the job market.
The one really bad sign in the launch, at least to me, was that the developers were openly admitting that they now trust GPT-5 to develop their software MORE than themselves ("more often than not, we defer to what GPT-5 says"). Why would you be proud of this?
9rx · 52m ago
> Do we really want to replace humans?
AGI doesn't really replace humans, it merely provides a unified model that can be hooked up to carry out any number of tasks. Fundamentally no different than how we already write bespoke firmware for every appliance, except instead of needing specialized code for each case, you can simply use the same program for everything. To that extent, software developers have always been trying to replace humans — so the answer from the HN crowd is a resounding yes!
> We should want better and more reliable tools
Which is what AGI enables. AGI isn't a sentience that rises up to destroy us. There may be some future where technology does that, but that's not what we call AGI. As before, it is no different than us writing bespoke software for every situation, except instead of needing a different program for every situation, you have one program that can be installed into a number of situations. Need a controller for your washing machine? Install the AGI software. Need a controller for your car's engine? Install the same AGI software!
It will replace the need to write a lot of new software, but I suppose that is ultimately okay. Technology replaced the loom operator, and while it may have been devastating to those who lost their loom operator jobs, is anyone today upset about not having to operate a loom? We found even more interesting work to do.
hodgehog11 · 24m ago
> Which is what AGI enables.
I appreciate the well-crafted response, but respectfully disagree with this sentiment, and I think it's a subtle point. Remember the no free lunch theorems: no general program will be the best at all tasks. Competent LLMs provide an excellent prior from which a compelling program for a particular task can be obtained by finetuning. But this is not what OpenAI, Google, and Anthropic (to a lesser extent) are interested in, as they don't really facilitate it. It's never been a priority.
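(For readers unfamiliar with the reference: the classic Wolpert-Macready no-free-lunch result for search/optimization, sketched below in my own paraphrase rather than quoted from the comment, says that averaged over all possible objective functions every algorithm performs identically, which is the sense in which no general program can be best at everything.)

```latex
% No-free-lunch for search (Wolpert & Macready, 1997), informal sketch:
% for any two algorithms a_1, a_2, any sample size m, and performance
% histogram d_m^y of observed objective values,
\sum_{f} P\!\left(d_m^{y} \mid f, m, a_1\right)
  = \sum_{f} P\!\left(d_m^{y} \mid f, m, a_2\right)
% where the sum ranges over all possible objective functions f, so no
% algorithm outperforms another when averaged across every task.
```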
They want to create a digital entity for the purpose of supremacy. Aside from DeepMind, these groups really don't care about how this tech can assist in problems that need solving, like drug discovery or climate prediction or discovery of new materials (e.g. batteries) or automation of hell jobs. They only care about code assistance to accelerate their own progress. I talk to their researchers at conferences and it frustrates me to no end. They want to show off how "human-like" their model is, how it resembles humans in creative writing and painting, how it beats humans on fun math and coding competitions that were designed for humans with a limited capacity to memorize, how it provides "better" medical opinions than a trained physician. That last use case is pushing governments to outlaw LLMs for medicine entirely.
A lab that claims to push toward AGI is not interested in assisting mankind toward a brighter future. They want to be the first for bragging rights, hype, VC funding, and control.
9rx · 2m ago
> no general program will be the best at all tasks.
Perhaps I wasn't entirely clear, but AGI isn't expected to be the best at all tasks. The bar is only as compared to a human, which also isn't the best at all tasks.
But you are right that nobody knows how to make them good at even some tasks. Hence why everyone is so concerned about LLMs writing code. After all, if you had "true" AGI, what would you need code for? It is well understood that AGI isn't going to happen. What many are banking on, however, is that AGI can be simulated if LLMs can pull off being good at one task (coding).
> They want to be the first for bragging rights, hype, VC funding, and control.
That's the motivation for trying to create AGI (at least pretending to), but not AGI itself.
usefulcat · 56m ago
> Why would you be proud of this?
Isn't it obvious? They have a huge vested interest in getting people to believe that it's very useful, capable, etc.
bluefirebrand · 1h ago
> Do we really want to replace humans?
Unfortunately for a substantial number of people the answer to this question seems to be a resounding "yes"
gtirloni · 1h ago
With those people being business owners, investors, etc, 100% of the time.
The other 99% would like automation to make their lives easier. Who wouldn't want the promised tech utopia? Unfortunately, that's not happening so it's understandable that people are more concerned than joyous about AI.
NoMoreNicksLeft · 57m ago
>With those people being business owners, investors, etc, 100% of the time.
How can one run a business by replacing humans, if no humans are left with enough income to buy your products?
I suspect that the desire to "replace humans" runs far deeper than just shortsighted business wants.
bluefirebrand · 51m ago
> How can one run a business by replacing humans, if no humans are left with enough income to buy your products?
If you control all of the wealth and resources and you have fully automated all of the production of everything you could ever want, then why would you need other humans to buy anything?
mutkach · 3h ago
The focus now is not the model, but the Product - "here we improve the usability by removing the choice between models", "here is a better voice for TTS", "here is a nice interface for previewing HTML".
Only about 5 minutes of the whole presentation are dedicated to enterprise usage (the COO in an interview sort of indirectly confirms that they haven't figured it out yet).
And they are cutting the costs already (opaque routing between models for non-API users is a clear sign of that). The term "AGI" is dropped, no more exponential scaling bullshit - just incremental changes over time and only in a select few domains.
Actually, it is a rather welcome sign, and not concerning at all, that this technology is maturing and crystallizing around this point. We will charitably forget and forgive all the insane claims made by Sam Altman in previous years. He can also forget about cutting ties with Microsoft for that same reason.
baggachipz · 1h ago
> ...when they need to find any way possible to squeeze paid subscribers out of their (money losing) free user base.
Also note that they're losing money on their paid subscribers.
ndr_ · 1h ago
Some of the problems with GPT-5 in ChatGPT could actually be due to the new model that is in place to route requests to the actual GPT-5 models. There are four models in the GPT-5 family, and I could reproduce the faulty "blueberry" test result only with the "gpt-5-chat" (aka "gpt-5-main") model through the API. This model is there to answer (near) instantly, and it falls into the non-thinking category of LLMs. The "blueberry" test represents exactly what non-thinking models are particularly bad at (and what OpenAI set out to solve with o1). The other, thinking models in the family, including gpt-5-nano, solve this correctly.
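A minimal sketch of how one might reproduce this comparison through the API, assuming the standard `openai` Python client and the model identifiers named above (adjust them to whatever your account actually exposes; the prompt is just one example of the letter-counting class of question):

```python
# Sketch: run the same letter-counting question against two GPT-5 family
# models and compare answers. Assumes OPENAI_API_KEY is set and that the
# model names below ("gpt-5-chat", "gpt-5-nano") are available to you.
from openai import OpenAI

client = OpenAI()

PROMPT = 'How many times does the letter "b" appear in the word "blueberry"?'

for model in ["gpt-5-chat", "gpt-5-nano"]:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"{model}: {response.choices[0].message.content}")
```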
profstasiak · 1h ago
so can we please stop talking about AGI until counting letters in a word is no longer hard?
Havoc · 2h ago
The messaging is all over the place anyway. Not so long ago OAI was talking about faster iterations and warning people not to expect huge leaps (a position that makes sense, imo). Yet people talk about AGI in a serious manner?
gtirloni · 1h ago
> We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents "join the workforce" and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.
"Reflections" by Sam Altman, January 2025 - https://blog.samaltman.com/reflections
Yes a quote from a meme post on Reddit. No doubt he has been overselling the future for a while but why use a quote with the wrong context?
Analemma_ · 44m ago
See the sibling comment from AlexandrB. Altman and tons of other hype men in tech do this thing where they make outrageous promises, then retcon as “just jokes” whichever ones don’t come true, so that they can never be disproven. It’s a swindle made all the more irritating by the enablers like you who go “why did you take the joke seriously?” to get cred on the internet while helping the scam continue.
Or to put it another way, do you think Altman denounced all the hype (and subsequent investment dollars) he got because of the “AGI achieved internally” post? Did he say to anyone “hey, that was a meme post, don’t take it seriously”? Or did he milk it for all it was worth before only later quietly climbing down from that when it was no longer paying dividends. Again, duplicitous and disingenuous behavior.
infecto · 23m ago
Again, out of all the quotes to call out the weakest one was picked.
nocoiner · 12m ago
“Hidden, poorly internally labeled fiat@ account”
coldpie · 1h ago
I don't think anyone serious is talking about AGI from LLMs, no.
I find this pattern in tech hype really frustrating. Someone in a leadership role in a major tech company/VC promises something outrageous. Time passes and the promise never materializes. People then retcon the idea that "everybody knew that wasn't going to happen". Well, either "everybody" doesn't include Elon Musk[1], Sam Altman, or Marc Andreessen[2], or these people are liars. No one seems to be held to their track record of being right or wrong; instead people just latch on to the next outrageous promise as if the previous one was fulfilled.
[1] https://electrek.co/2025/03/18/elon-musk-biggest-lie-tesla-v...
[2] https://dailyhodl.com/2022/06/01/billionaire-and-tech-pionee...
> Elon Musk[1], Sam Altman, or Marc Andreessen[2] ... these people are liars.
Bingo. These people are salesmen & marketers. Lying to sell a product (including gathering funding & pumping company stock) is literally the job description. If they weren't good at it, they wouldn't hold the positions they do.
beart · 39m ago
Being a good salesperson does not require lying. It's not the job that's doing the lying, but the person lying to you.
phist_mcgee · 1h ago
And yet Altman talks about AGI being imminent, but his company has only ever produced LLMs.
parineum · 1h ago
Now why would the CEO of an AI company say something like that!?
AstroBen · 1h ago
Is it a given that they need to unrealistically hype everything? To me it just seems like he's killing any and all credibility he had
Probably a bad long term strategy?
I mean other non-AI companies use hype too sure.. but it's maybe a little sprinkle of 1.1x on top aimed to highlight their best features. Here we're going full on 100x of reality
coldpie · 21m ago
> To me it just seems like he's killing any and all credibility he had. Probably a bad long term strategy?
He's already got more money than God and there's an infinite supply of suckers who think wealth and skill/intelligence are correlated for him to keep feeding off of (see also Goop and Tesla, incredibly successful companies also run by wealthy liars). Sam Altman will be just fine.
parineum · 24m ago
It's not a given but Altman is a public figure for a reason while I don't know the names of any of the other CEOs off the top of my head. He talks a lot and when he talks, it's about AI. Even talking about the dangers of AI is hype because it implies it's an important topic to discuss now because it's imminent.
phist_mcgee · 1h ago
I do know that AGI has a different meaning internally to what we think it means:
https://www.fanaticalfuturist.com/2025/01/microsoft-and-open...
Massive grain of salt though.
Given the difference between GPT-3 and GPT-4, a fair numbering for "GPT-5" is probably "GPT-4.2".
bbstats · 1h ago
Gemini 3.0 is gonna cook
FergusArgyll · 1h ago
Maybe the brain drain was real? We'll find out from Gemini 3, I guess.
anonzzzies · 1h ago
For sure, but not for that reason; there is currently no one with a plan for how to go from current LLMs to a better model. It's all "more focused training", "better prompting", "agentic", "smarter lookups", "better tooling". But fundamentally, this model is simply shagged out; it'll get a little better with the above, but the jump everyone is waiting for cannot happen without a new model invention.
FergusArgyll · 1h ago
My point is: maybe we can't prove that until DeepMind gives us their best shot.
anonzzzies · 1h ago
But isn't DeepMind, in these infinite-money AI times, already giving it their best shot?
rvz · 1h ago
Am I right to say that "AGI" was just...cancelled again?
Did we just get scammed right in front of our eyes with an overhyped release and what is now an underwhelming model if the point was that GPT-5 was supposed to be trustworthy enough for serious use-cases and it can't even count or reason about letters?
So much for the "AGI has been achieved internally" nonsense with the VCs and paid shills on X/Twitter bullshitting about the model before the release.
mutkach · 1h ago
Not only is "AGI" cancelled, but they also sort of admitted that the so-called "scaling" "laws" don't work anymore. Scaling inference kinda still works, but it is obviously bounded by context size and needle-in-a-haystack diminishing accuracy. So the promise of even steadily moving towards AGI is dubious at best.
baobun · 1h ago
How were you still eating that?
kanak8278 · 1h ago
The funniest part of the demo was the colored chats. And that's behind a paywall. I was like, are they becoming Instagram?
cwrichardkim · 3h ago
> They admitted that they were, and I am not lying about this, paywalling chat colors. […] This is a feature that a company adds when they are out of ideas
This observation + sherlocking cursor suggests that perhaps sherlocking is the ideation strategy. Curious to see if they’re subsidizing token costs specifically to farm and Sherlock ideas
dentemple · 1h ago
Yeah, I agree with the OP here. After all this time, being able to change the chat colors has some real we-reached-the-bottom-of-the-backlog energy, and they're just now implementing the ideas the PMs previously didn't consider important enough.
It hardly feels like a next generation release.
As a related anecdote (not saying that this is industry standard, just pointing out my own experience), the startup I work for launched their app four years ago, and, for all four of those years, we've had "Implement a Dark Mode design" sitting at the bottom of our own backlog. Higher priority feature requests are always pre-empting it.
msabalau · 1h ago
The core product failure here is overhyping incremental improvement, eroding trust.
PMs operating at this level ought to be bringing in some low-cost UX improvements alongside major features. That simply isn't a sign that they've run out of backlog. (That said, it is rather pathetic to paywall this.)
A moment's consideration ought to show that OpenAI has plenty of significant work that they can be doing, even if the core model never gets any better than this.
davydm · 3h ago
Issues like this are why I don't use AI agents for code. I don't want to sift through the bullshit confidently spewed out by the model.
It doesn't understand anything. It can't possibly "understand my codebase". It can only predict tokens, and it can only be useful if the pattern has been seen before. Even then, it will produce buggy replicas, which I've pointed out during demos. I disabled the AI helpers in my IDEs because the slop they produce is not high-quality code: often wrong, often missing what I wanted to achieve, often subtly buggy. I don't have the patience to deal with that, and I don't want to waste the time on it.
Time is another aspect of this conversation, with people claiming time wins but the data not backing it up, possibly due to a number of factors intrinsic to our squishy evolved brains. If you're interested, go find Gurwinder's article on social media and time - I think the same forces are at work in the AI-faithful.
mrits · 1h ago
There is a threshold it needs to clear for every developer to make it worth their time. For me, that has already been met. Your comment makes me think that you don't believe it will start producing higher-quality code than you anytime soon.
I think most of us are in the camp that even though we don't need AI right now we believe we will not be valuable in the near future without being highly proficient with the tooling.
bluefirebrand · 53m ago
> even though we don't need AI right now we believe we will not be valuable in the near future
This reads to me like you don't think you're valuable right now either
VeejayRampay · 1h ago
The whole event was shit, but we're all past the point where we can just say that, because the technology is now so entrenched that it's become unavoidable, so everything now has to jump through hoops to justify its existence and its greatness.