GPT5 Is Horrible

17 druskacik 6 8/8/2025, 8:25:34 AM old.reddit.com ↗

Comments (6)

dtagames · 13m ago
It's not the model you're unhappy with; it's the new router function, which attempts to choose the model (using another LLM and RAG).

Those decisions are steered by costs, so it will choose the cheapest (worst) model unless compelled to do otherwise.
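The cost-steered routing described above can be sketched in a few lines. This is purely illustrative (OpenAI hasn't published its router), and the model names, capability scores, and the keyword-based `estimate_difficulty` stand-in are all hypothetical: a real system would use an LLM classifier, but the selection logic — pick the cheapest model that clears the difficulty bar — would look roughly like this.

```python
# Hypothetical sketch of cost-steered model routing, NOT OpenAI's
# actual router. A difficulty estimate gates which models qualify;
# among those, the cheapest wins.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    capability: float      # rough quality score, 0-1 (made up)
    cost_per_mtok: float   # dollars per million tokens (made up)

MODELS = [
    Model("small-fast", capability=0.6, cost_per_mtok=1.0),
    Model("mid-reasoning", capability=0.8, cost_per_mtok=10.0),
    Model("large-pro", capability=0.95, cost_per_mtok=60.0),
]

def estimate_difficulty(query: str) -> float:
    """Stand-in for an LLM-based difficulty classifier."""
    hard_markers = ("prove", "debug", "optimize", "step by step")
    score = 0.5 + 0.15 * sum(m in query.lower() for m in hard_markers)
    return min(score, 1.0)

def route(query: str) -> Model:
    needed = estimate_difficulty(query)
    # Cheapest model that clears the bar; fall back to the best one.
    eligible = [m for m in MODELS if m.capability >= needed]
    return min(eligible, key=lambda m: m.cost_per_mtok) if eligible else MODELS[-1]

print(route("what's the capital of France?").name)        # → small-fast
print(route("prove this lemma step by step").name)        # → mid-reasoning
```

The point of the sketch is the incentive structure: unless the difficulty estimate forces an upgrade, the cheapest (worst) model wins every tie, which matches the behavior the comment is complaining about.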

Cursor quietly added this type of routing months ago, referring to it only as "automatic" model selection. At the same time, they moved that product to price tiers much more in line with those announced for GPT-5.

JimmyBuckets · 2h ago
The reviews on reddit are overwhelmingly negative. There's no way their QA/UX team didn't catch this drop in performance. The cost savings must be substantial to push this through in spite of this.
A_D_E_P_T · 5h ago
For my use cases it doesn't appear to be any better than o3/o3-pro.

Also, GPT-5 pro is a lot slower than o3-pro. My two most recent queries took 17 and 18 minutes, whereas o3-pro would probably have taken 4-5.

Surprisingly, at generic writing-prose tasks (e.g. compose a poem in the style of X), GPT-5 is still noticeably inferior to DeepSeek and Kimi.

Honestly, I'm tempted to cancel my $200/month subscription.

This probably means that we're in the "slow AI / stagnation" timeline, so at least we're not going to get paperclipped by 2027.

NitpickLawyer · 4h ago
> For my use cases it doesn't appear to be any better than o3/o3-pro.

It doesn't have to be better, it has to be "as close to those as possible", while being cost efficient to run and serve at scale.

> This probably means that we're in the "slow AI / stagnation" timeline

I'd say we're more in the "ok, we got the capabilities in huge models, now we need to make them smaller, faster and scalable" timeline. If they capture ~80-90% of the capabilities of their strongest models while cutting costs a lot (they've gone from $40-60/Mtok to $10/Mtok), then they start to approach a break-even point and can slowly make money from serving tokens.

There's also a move towards specialised models (code w/ Claude, long context w/ Gemini, etc), and oAI seem to have gone in this direction as well. They've said for a long time that GPT-5 would be a "systems" update and not necessarily a core model update. That is, they have a routing model that takes a query and routes it to the best model for the task. Once devs figure out how to use this to their advantage, the "vibes" will improve.

efilife · 1h ago
https://old.reddit.com/r/ChatGPT/comments/1mkd4l3/gpt5_is_ho...

I had no idea this many people were so attached to an LLM. This sounds absolutely terrible.

porridgeraisin · 1h ago
Jeez... that's... interesting, to say the least.