GPT-5 is a joke. Will it matter?

35 points by dbalatero · 8/13/2025, 2:30:30 AM · bloodinthemachine.com · 49 comments

827a · 3h ago
GPT-5 is a product that represents the needs of its users. They have 700M weekly active users, the vast majority of whom just want a better Google, therapist, life coach, or programming mate; not some superintelligent being that can solve the Riemann hypothesis.

The reason why true singularity-level ASI will never happen isn't because the technology cannot support it (though, to be clear: it can't). It's because no one actually wants it. The capital markets will take care of the rest.

mlyle · 3h ago
> The reason why true singularity-level ASI will never happen isn't because the technology cannot support it (though, to be clear: it can't). It's because no one actually wants it.

Plenty of people want something capable of doing tasks well beyond what GPT-5 can do, something as capable as a proficient human.

If you can do that cheaper, faster, or with more availability than said skilled human, there is definitely a market for it.

827a · 3h ago
The issue is, what you're describing actually doesn't require intelligence, beyond a point. A mathematician who can solve the Riemann hypothesis isn't likely to be all that good of an L8 AWS Engineer; but what AWS would pay a lot of money for is a digital L8 AWS Engineer, not a digital Terence Tao. The deep reflection that AI frontier labs have to do, soon, is: if it's not intellect, what are we missing?
klipt · 3h ago
I'm sure plenty of cancer patients would pay good money for a digital genius doctor who can custom design a drug to target their cancer.
827a · 1h ago
Again, this doesn't require intelligence, beyond a point. Breakthrough medical therapies aren't invented by 180 IQ Einsteinian hyper-intellectuals.

My point in being pedantic about this is just to point out that an extreme amount of value could be generated by these systems if they only attained the capability to do the things 110 IQ humans already do today. Instead of optimizing toward that goal, the frontier labs seem obsessed with optimizing toward some other weird goal of hyper-intelligence that has very unclear marketability. When pressed on this, leaders at these companies will say "it's transferable"; that hyper-intellectuality will suddenly unlock realistic 110 IQ use-cases; but we're increasingly seeing, quite clearly to me, that this isn't the case. And there are very good first-principles reasons to believe that it won't be.

Gud · 11m ago
No, because Einstein-level intelligence is rare.

But a 180 IQ intelligence that doesn't sleep, doesn't stop, and has instant access to the world's knowledge will absolutely be able to revolutionise the sciences.

Avicebron · 3h ago
embodiment?
esalman · 3h ago
Nobody will pay them a 1.5M bonus for developing ASI though.

As much as we hate on meta, open models are the answer.

aurareturn · 3h ago
I agree with your first part. There are a ton of people who want to use LLMs to solve the Riemann hypothesis, but those people will need a different model with vastly more compute than regular old ChatGPT.

ASI isn't about what people want or don't want. It's an AGI that is able to self-improve at a rapid rate, and we haven't built something that can self-improve continuously yet. Whether people want it is beside the point. Personally, I do want ASI.

techpineapple · 3h ago
> just want a better google, therapist, life coach, or programming mate; not some superintelligent being that can solve the Riemann hypothesis

Except it seems like it's a worse google, therapist, life coach and programming mate, with the personality of someone who spends all their time trying to solve the Riemann hypothesis.

827a · 3h ago
That's probably true for the time being, though I'd consider that more a simple miscalculation on their part than the grand expression of market dynamics that I asserted. They're course-correcting on this and are planning to add more warmth back to its responses.
mlyle · 3h ago
To me, GPT-5 feels slightly better. And rumor is the cost for OpenAI to provide it is much less.
georgemcbay · 3h ago
Any of the LLMs (including Google's own) is a better "google" than traditional Google search. Not really for any technical reason, as much as that Google was perfectly willing to cede the search war to garbage SEO sites as long as they were driving eyeballs to AdSense.

But the rest I agree with.

EcommerceFlow · 3h ago
1) Sam said only 7% of Plus users were using thinking models. This auto-router is probably one of the biggest innovations for "normie use" ever.

2) Maybe I'm biased because I'm using GPT5-Pro for my coding, but so far it's been quite good. Normal thinking mode isn't substantially better than o3 IMO, but that's a limitation of data/search.

leshokunin · 37m ago
I use it several hours a day. 5 is definitely slower. I'm not certain the quality has improved. I do hate that it keeps saying it'll think even longer and take up to a minute to do stuff.
Uehreka · 3h ago
I used to say that the dumbest conversation about AI was about whether it was “actually intelligent”, but I was wrong: It’s the conversation about whether it’s “overhyped”.

Like, I don’t care how much Sam Altman hyped up GPT-5, or how many dumb people fell for it. I figured it was most likely GPT-5 would be a good improvement over GPT-4, but not a huge improvement, because, idk, that’s mostly how these things go? The product and price are fine, the hype is annoying but you can just ignore that.

If you feel it’s hard to ignore it, then stop going on LinkedIn.

All I want to know is which of these things are useful, what they’re useful for, and what’s the best way to use them in terms of price/performance/privacy tradeoffs.

zdragnar · 2h ago
The problem with the hype is the market distortion for startups looking for funding.

I don't know if it has improved at all lately, but for a while it seemed like every startup I could find was the same variation on "healthcare note-taking AI".

ec109685 · 3h ago
How can you say this about a product that has 700M MAUs:

> As is true with a good many tech companies, especially the giants, in the AI age, OpenAI’s products are no longer primarily aimed at consumers but at investors

dse1982 · 1h ago
Because the users pay an unrealistically low price. You aim for money, and right now the money comes from the investors, not the users. Would 700M people use it if they had to pay a realistic price? I doubt it.
ho_lee_phuk · 3h ago
It is a good product. But probably not a great one.
PaulStatezny · 2h ago
I've read plenty of criticism about ChatGPT 5, but as a Plus user I'm surprised nobody has brought this up:

Speed.

ChatGPT 5 Thinking is So. Much. Slower. than o4-mini and o4-mini-high. Like between 5 and 10 times slower. Am I the only one experiencing this? I understand they were "mini" models, but those were the current-gen thinking models available to Pro. Is GPT 5 Thinking supposed to be beefier and more effective? Because the output feels no better.

djfobbz · 3h ago
GPT-4 seemed snappier and faster... my experience with GPT-5 has been nothing short of poor. It takes too much time thinking. I'd rather have a generally good answer really fast than a very good one after 25-35s.
duxup · 2h ago
Agreed and I feel like I get the same output with a longer wait.
jbellis · 3h ago
GPT-5 is the best model now for writing code by a significant margin.

https://brokk.ai/power-rankings

lvl155 · 3h ago
I don’t know how you can say that with a straight face. It’s simply not the best and by a wide margin. No one doing any significant agentic workflow would consider GPT-5 over Sonnet 4. Not even close.
petesergeant · 3h ago
The big, big mistake here was routing, imo. I wanna choose my intelligence level for a question, not have the machine make that decision for me. Sam Altman over-hyping stuff is not new. I have had GPT-5 do some very impressive work for me, I've also had it fall on its ass many times.

It's also surprising and bad that gpt-5-nano can't handle the work gpt-4o-mini could (and is somehow slower?), but they'd really painted themselves into a corner with the version numbers.

SchemaLoad · 3h ago
Not ideal for you maybe but for the average user it's very much an improvement.
petesergeant · 51m ago
> for the average user it's very much an improvement

what are you basing that on?

charcircuit · 3h ago
I think most people will just use whatever the default is instead of manually adjusting it for each query. I think it makes sense to adapt to the user's query, since that can more efficiently allocate resources for higher-quality results that can be returned to the user faster.
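Conceptually, the router just has to estimate how hard a query is and pick a tier. Here's a minimal sketch of that idea; the heuristics and model names are made-up placeholders, since OpenAI hasn't published how its actual router works:

    # Made-up placeholder routing logic; OpenAI's real router is not public.
    def estimate_complexity(query: str) -> float:
        """Crude proxy for how much reasoning a query needs."""
        signals = ["prove", "debug", "step by step", "optimize", "derive"]
        score = sum(s in query.lower() for s in signals)
        score += len(query) / 500  # longer prompts tend to need more work
        return score

    def route(query: str) -> str:
        """Send most everyday queries down the cheap, low-latency path."""
        if estimate_complexity(query) >= 1.0:
            return "reasoning-model"  # slower, costlier, deliberate
        return "fast-model"

    print(route("what's the capital of France?"))             # fast-model
    print(route("prove this invariant holds, step by step"))  # reasoning-model

The point is that the default path stays cheap and fast, and only the queries that look like they need deliberate reasoning pay the latency and compute cost.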
therein · 3h ago
Is it just me, or is anyone else worried about what will happen when the industry realizes LLMs are just not going to be the path to AGI? We are building nuclear power plants and massive datacenters that make our old datacenters look like toys.

We are investing literal hundreds of billions into something that looks more likely to flop than to succeed.

What scares me the most is that we are being steered into a sunk cost fallacy. Industry will continue to claim it is just around the corner, more and more infrastructure will be built, and even groundwater is being rationed in certain places because AI datacenters apparently deserve priority.

Are we being forced into a situation where we have invested too much in this to come face to face with the fact that it doesn't work and makes everything worse?

What is this capacity being built for? It no longer makes any sense.

vpribish · 1h ago
so i don't see the path to AGI either, certainly not with LLMs. but there is some very useful stuff to be done with deep learning, and LLMs are a pretty amazing advance for search, translation, communication and user interfaces. but it's not the next industrial revolution.

i don't see the sunk-cost fallacy angle, just the sunk costs. the capital allocators will absolutely shut off the spigot when they see that it isn't going to yield. yeah, there could be some dark data centers. not the end of the world, just another ai winter at worst - maybe a dot-com crash... whatever.

the world is way bigger than the tech echo chamber

shubhamjain · 2h ago
Even if AI progress stops entirely and we are stuck with these models, it would take at least a few years to use them to their full potential. What gets lost in this AGI debate is how impressive these models are without even reaching it. And the places where they have been applied are just the tip of the iceberg.

It's just like the dot-com bubble: everyone was pumped that the Internet was going to take over the world. And even though the bubble popped, the Internet did eventually take over the world.

distalx · 3h ago
Probably a valid sunk-cost-fallacy worry, but it makes me wonder what will happen to the applications and systems being built on top of LLMs. If we face limitations or setbacks, will these innovations survive, or could we see a backlash against all thinking machines, reminiscent of Isaac Asimov's cautionary tales?
lostmsu · 28m ago
I'm more worried about what will happen when people in denial are faced with AI replacing their jobs. I used to hire outsourced frontend devs of _relatively_ poor quality (think just below avg. Wipro) who nonetheless completed a decent number of projects. I will never do that again, unless I have to work with audio or video, which modern LLMs can't test on their own.
andrewflnr · 3h ago
Well, we can for sure put the nuke plants to good use. Maybe for desalination to replace the water supplies.
aurareturn · 3h ago

> Is it just me or is anyone else worried what will happen when the industry realizes LLMs are just not going to be the path to AGI? We are building nuclear power plants, massive datacenters that make our old datacenters look like toys.

Nothing will happen. LLMs even at GPT-5 intelligence, but scaled up significantly with higher context size, faster inference speed, and lower cost, would change the world.
SchemaLoad · 3h ago
In what way? So far almost all the change has been in spamming social media and customer support.
aurareturn · 2h ago
There are many ideas (society-enhancing ones) that I have but can't pursue because context sizes are too small, inference is too slow, or prices are too high.

For example, I want to use an LLM system to track every promise a politician ever makes and see whether he or she actually kept it. No one is going to give me $1 billion to do this. But I think it would enhance society.

I don't need an AGI to do this idea. I think current LLMs are already more than good enough. But LLM prices are still way too high to do this cost-effectively. When inference is relatively as cheap as serving up a web page, LLMs will be ubiquitous.
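To make it concrete, the core of it is just one extraction pass per transcript. A minimal sketch, assuming the OpenAI Python client; the prompt, model name, and output format are my illustrative choices, not a real implementation:

    # One LLM extraction pass per transcript; cost scales linearly with the
    # size of the corpus, which is why token price dominates this idea.
    # Model name, prompt, and JSON schema are assumptions for illustration.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPT = (
        "Extract every concrete promise the speaker makes in this transcript. "
        "Return a JSON array of objects with keys 'promise' and 'quote'. "
        "Return [] if there are none."
    )

    def extract_promises(transcript: str) -> list[dict]:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any cheap model would do
            messages=[
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": transcript},
            ],
        )
        # A robust version would validate the output and retry on bad JSON.
        return json.loads(resp.choices[0].message.content)

    # A second pass could then compare each stored promise against voting
    # records or later news coverage to judge whether it was kept.

Run that over every speech and press release for a whole legislature, and it's the token bill, not the model quality, that kills the project today.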

dse1982 · 1h ago
Out of pure curiosity: do you really think that would make any significant difference to voter behavior? My understanding is that one of the biggest misunderstandings of the last decade-plus was that center and leftist politicians assumed pointing out the lies of the relevant politicians, and how their policies actually go against their voters' interests, would keep people from voting against those interests. That was the whole point of the fact-checking push in mainstream media during that time, only for it to be mostly abandoned again, because people are just not as interested in truth and facts as we like to assume. Not that they aren't at all, but not as much as we tend to think.

Please don't get me wrong, I am not trying to be sarcastic here. I would love to see a perspective – just any perspective – on how to get out of the current political situation. Not just in the US; in many other countries the same playbook is being followed by authoritarians with just as much success. So if you have material or some reasoning at hand for why more information for the population would make a difference to voting behaviour, I would be super interested. Thanks in advance!

aurareturn · 1h ago
I don't know if it will make any difference. But this sort of large scale data categorization is possible only with an LLM. Previously, you probably needed dozens of people working full time to do this. Now you just have a few GPUs do it.

I have a lot more ideas that are gated by low context size, inference speed, and price per token.

The bottom line is that we don't need AGI to change the world. Just more and cheaper compute.

Ekaros · 28m ago
I have been toying with the idea of making a candidate chooser based on what members of parliament actually voted for. That data set is pretty limited and readily available: just get the vote record and identify key bills. No LLM needed.
SchemaLoad · 3h ago
We are likely in for the biggest market crash in history. Even if LLMs are super useful and will be with us going forward, that doesn't mean they will be profitable enough to justify the spending.

Open source and last year's models will be good enough for 99% of people. Very few are going to pay billions to use the absolute best model there is.

Fade_Dance · 1h ago
Datacenter build-outs are being financed by mega caps with fortress balance sheets and free cash flows that are coming in well above expectations. They are still able to do things like route hundreds of billions in buybacks.

And regardless of being great investments or not, all of those companies have a burning desire for accelerated depreciation to lower their effective tax rate, which data center spend offers.

The more bubbly names will likely come down to earth eventually, but the growth stock sell-off we saw in '22, due to the termination of the zero interest rate environment, will probably dwarf it in scale. That was a true dot-com bubble 2.0, with vaporware EV companies worth 10 billion on nothing more than a render, fake meat worth 10 billion, web chat worth 100 billion, 10 billion dollar treadmill companies... space tourism, LIDAR... So many of those names have literally gone down 90 to 99%. It's odd to me that we don't look at that for what it was: a true dot-com bubble 2.0. The AI-related stuff looks tame in comparison to what we saw just a few years ago.

vpribish · 1h ago
nah, you're being hyperbolic. run the actual numbers: US GDP alone is ~28T, and AI investment in 2024 was like 250B - that's under 1% of one year's GDP. it will barely leave a dent
techpineapple · 3h ago
> We are likely in for the biggest market crash in history

I don't think there's the kind of systemic risk you had in, say, 2008, is there? But I do think there is likely to be a "correction," to put it lightly.

tootie · 3h ago
I honestly wonder how much they even believe their own hype. Altman is a world class circus barker for AI. I seriously doubt his sincerity about anything. Obviously Zuck is putting his money where his mouth is but idk how a data center the size of Manhattan is a means to any useful end.

I'm just imagining in 2030 when there is an absolutely massive secondary market for used GPUs.

Fade_Dance · 1h ago
I suspect (but obviously can't confirm, although this is a thought I've had before) that they are now much more of a "normal" company than they were before. I doubt the workforce feels like a god-tier supergroup either - after all there are many places you can work as an AI researcher now, and no doubt many of them left to make a startup or take that hundred million dollar zuck paycheck.

Which leaves Altman frankly looking increasingly awkward with his revolutionary proclamations. His company just isn't as cool as it used to be. Jobs-era Apple was arguably a cooler company.

georgemcbay · 3h ago
> Is it just me or is anyone else worried what will happen when the industry realizes LLMs are just not going to be the path to AGI?

Yes, some people are concerned; see for example a recent Hank Green YouTube video:

https://www.youtube.com/watch?v=VZMFp-mEWoM

I'm probably more concerned than he is, in that a large market correction would be bad, but even scarier are the growing (in number and power) techno-religious folks who really think we are on the cusp of creating an electronic god and are willing to trash what's left of the planet in an attempt to birth that electronic god they think will magically save us.

techpineapple · 3h ago
> Are we being forced into a situation where we are invested too much in this to come face to face with it doesn't work and it makes everything worse?

Yes, but don't worry, they're getting close to the government so they'll get bailed out.

> What is this capacity being built for? It no longer makes any sense.

Give more power to Silicon Valley billionaires.