It feels slower, but if the quality is better, so that one response does the job instead of multiple follow-up questions, it’s still faster overall. It’s also still orders of magnitude faster than doing the research manually.
I’m reminded of that Louis CK joke about people being upset about the WiFi not working on their airplane.
labrador · 3h ago
GPT-5 is better as a conversational thinking partner than GPT-4o. Its answers are more concise, focused, and informative. The conversation flows. GPT-5 feels more mature than GPT-4o, with less juvenile "glazing."
I can't speak to other uses such as coding, but as a sounding board GPT-5 is better than GPT-4o, which was already pretty good. GPT-5's personality has definitely shifted to a more professional tone which I like.
I do understand why people are missing the more sycophantic personality of GPT-4o, but I'm not one of them.
saulpw · 2h ago
That sounds 10% better, not 10x better. That's close enough to 'peaked'.
al_borland · 58m ago
By definition, if something is still getting 10% better each year it hasn’t yet peaked. Not even close.
labrador · 2h ago
Agreed. Sam Altman definitely over-hyped GPT-5. It's not so much more capable that it deserves a major version number bump.
torginus · 1h ago
I still think it's a solid achievement, but weirdly positioned. It's their new poverty-spec model, available to everyone, and likely not too large.
It's decently good at coding and math, beating the current SOTA, Opus 4.1, by a small margin while being much cheaper and faster to run, which hints at a much smaller model.
However, it's no better at trivia or at writing emails or essays, which is what regular people who use ChatGPT through the website actually care about, making this launch come off as awkward.
3836293648 · 2h ago
Surely a major version bump says more about the internals than the capabilities
labrador · 2h ago
I see your point from a software engineering perspective, but unfortunately that's not how the public sees it. The common perception is that we are making leaps towards AGI. I never thought AGI was close, so I'm not disappointed, but a lot of people seem to be. On the other hand, I've seen comments like "I guess my fears of a destructive super-intelligence were over-blown."
hoppp · 57m ago
They're going to release new models the way Apple releases iPhones: same stuff, little tweaks and improvements.
kjkjadksj · 2h ago
People seem to make this exact comment on here at every gpt release. I wonder what gpt we ought to actually be on? 1.4.6?
labrador · 2h ago
In retrospect I would have named it as follows:
GPT-4 -> GPT-4 Home
GPT-5 -> GPT-4 Enterprise
Because my impression after using GPT-5 is that it is designed mainly to satisfy the needs of Microsoft. Microsoft has no interest in making AI therapists or AI companions, probably because of the legal liability. Also, that's outside their core business.
pseudo_meta · 1h ago
The API is noticeably slower for me, sometimes up to 10x slower.
Upon some digging, it seems that part of the slowdown is due to the gpt-5 models doing some reasoning by default (reasoning effort "medium"), even for the nano and mini models. Setting the reasoning effort to "minimal" improves the speed a lot.
However, to be able to set the reasoning effort you have to switch to the new Responses API, which wasn't a lot of work, but was more than just changing a URL.
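For reference, a minimal sketch of that change with the OpenAI Python SDK; the model name and prompt are placeholders, not my actual workload:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Responses API call with reasoning effort dialed down; the default
    # ("medium") is what was adding the extra latency for me.
    response = client.responses.create(
        model="gpt-5-mini",
        input="Classify the following ticket as spam or not spam: ...",
        reasoning={"effort": "minimal"},
    )
    print(response.output_text)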
hirvi74 · 1h ago
I'm noticing significant differences already.
Code seems to work on the first try more often for me too.
Perhaps my favorite change so far is the difference in verbosity. Some of the responses I receive when asking trivial questions are now merely a handful of sentences instead of a dissertation. However, dissertation mode comes back when appropriate, which is also nice.
Edit: Slightly tangential, but I forgot to ask, do any of you all have access to the $200/month plan? If so, how does that model compare to GPT-5?
gooodvibes · 2h ago
Not having the choice to use the old models is a horrible user experience. Taking 4o away so soon was a crime.
I don’t feel like I got anything new, I feel like something got taken away.
hirvi74 · 1h ago
4o and perhaps a few of the other older models are coming back. Altman has already said so.
darepublic · 2h ago
They took away o3 on plus for this :(
Buttons840 · 47m ago
o3 was surprisingly good at research. I once saw it spend 6 full minutes researching something before giving an answer, and I wasn't using the "research" or "deep think" or whatever it's called, o3 just decided on its own to do that much research.
binarymax · 3h ago
My primary use case for LLMs is running jobs at scale over an API, not chat. Yes, it's very slow, and it is annoying. Getting a response from GPT-5-mini for <Classify these 50 tokens as true or false> takes 5 seconds, compared to about a second with GPT-4o.
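If anyone wants to reproduce the timing, this is roughly what I mean, sketched with the OpenAI Python SDK over Chat Completions; the prompt and statement are made-up stand-ins for my real classification job:

    import time

    from openai import OpenAI

    client = OpenAI()

    def classify(model: str, statement: str) -> str:
        """Ask the model for a true/false label and print the wall-clock latency."""
        start = time.perf_counter()
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": f"Answer only 'true' or 'false': {statement}"}],
        )
        print(f"{model}: {time.perf_counter() - start:.2f}s")
        return resp.choices[0].message.content

    classify("gpt-4o", "The Eiffel Tower is in Paris.")
    classify("gpt-5-mini", "The Eiffel Tower is in Paris.")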
hoppp · 50m ago
If it's 5 seconds, maybe you are better off renting a GPU server and running the inference where the data is, without the round trips, and you can use gpt-oss.
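A sketch of what that could look like, assuming gpt-oss is served behind an OpenAI-compatible endpoint (vLLM and Ollama both expose one); the URL and model tag below are whatever your server uses:

    from openai import OpenAI

    # Same client, pointed at the local server instead of api.openai.com.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

    resp = client.chat.completions.create(
        model="gpt-oss-20b",  # whatever name your server registers the model under
        messages=[{"role": "user", "content": "Answer only 'true' or 'false': the sky is green."}],
    )
    print(resp.choices[0].message.content)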
jscheel · 2h ago
Doing quite a bit of that as well, but I’ve held off moving anything to gpt-5 yet. Guessing it’s a capacity issue right now.
mikert89 · 3h ago
Have you used the Pro version? It's incredible
iwontberude · 3h ago
This is intended to be a discussion thread speculating about why ChatGPT 5 is so slow and why it seems to be no better than previous versions.