Between Opus and GPT-5, it's not clear there's a substantial difference in software development expertise. The metric I can't seem to get past in my attempts to use these systems is context awareness over long-running tasks. Producing a very complex, context-exceeding objective is a daily (maybe hourly) occurrence for me. All I care about is how these systems manage context and stay on track over extended periods of time.
What eval is tracking that? It seems like it's potentially the most important metric for real-world software engineering, not one-shot vibe prayers.
RobinL · 11m ago
Totally agree. At the moment I find that frontier LLMs are able to solve most of the problems I throw at them given enough context. Most of my time is spent working out what context they're missing when they fail. So the thing that would help me most is a much more focused ability to gather context.
For my use cases, this mostly means helping the model home in on relevant code files, issues, discussions, and PRs. I'm hopeful that GPT-5 will be a step forward in this regard that isn't fully captured in the benchmark results. It's certainly promising that it can achieve similar results more cheaply than e.g. Opus.
nadis · 21m ago
It's pretty vague, but the OP had this callout:
>"GPT‑5 is the strongest coding model we’ve ever released. It outperforms o3 across coding benchmarks and real-world use cases, and has been fine-tuned to shine in agentic coding products like Cursor, Windsurf, GitHub Copilot, and Codex CLI. GPT‑5 impressed our alpha testers, setting records on many of their private internal evals."
swader999 · 1h ago
If GPT-5 truly has a 400k context, that might be all it needs to meaningfully surpass Opus.
andrewmutz · 58m ago
Having a large context window is very different from being able to effectively use a lot of context.
To get great results, it's still very important to manage context well. It doesn't matter that the model allows a very large context window; you can't just throw in the kitchen sink and expect good results.
dimal · 1h ago
Even with large contexts there are diminishing returns. Being able to stuff more tokens into context doesn't mean the model can use them effectively. As far as I can tell, they always reach a point at which more information makes things worse.
simonw · 1h ago
It's 272,000 input tokens and 128,000 output tokens (which is where the 400k figure comes from: 272k + 128k = 400k).
Byamarro · 58m ago
The real question is its tendency toward context rot, not the size of its context :)
LLMs can supposedly load three Bibles into their context, but they forget what they were about to do after reading 600 LoC of locale files.
AS04 · 1h ago
400k context with 100% on the fiction livebench would make GPT-5 indisputably the best model IMHO. I don't think it will achieve that, though, sadly.
tekacs · 49m ago
Coupled with the humongous price difference...
logicchains · 37m ago
>Between Opus and GPT-5, it's not clear there's a substantial difference in software development expertise.
If there's no substantial difference in software development expertise then GPT-5 absolutely blows Opus out of the water due to being almost 10x cheaper.
realusername · 1h ago
Personally, I think I'll wait for another 10x improvement for coding, because the way it's currently going, they clearly need it.
fsloth · 1h ago
From my experience, when used through an IDE such as Cursor, the current-gen Claude model enables impressive speedruns over commodity tasks. My context is a CAD application I've been writing as a hobby. I worked in that field for a decade, so I have a pretty good feel for how long I would expect tasks to take. I'm using mostly the same software stack as at my previous job, and I'm definitely getting stuff done much faster on holiday at home than I did at that previous work. Of course the codebase is also a lot smaller, there's intrinsic motivation, etc., but still.
realusername · 10m ago
I've done pretty much the same as you (Cursor/Claude) for our large Rails/React codebase at work, and the experience has been horrific so far; I reverted back to VS Code.
42lux · 1h ago
How often do you have to build the simple scaffolding though?
bdangubic · 1h ago
> context awareness over long-running tasks
Don't have long-running tasks, LLMs or not. Break the problem down into small manageable chunks and then assemble it. Neither humans nor LLMs are good at long-running tasks.
bastawhiz · 1h ago
> neither humans nor llms are good at long-running tasks.
That's a wild comparison to make. I can easily work for an hour. Cursor can hardly work for a continuous pomodoro. "Long-running" is not a fixed size.
echelon · 57m ago
Humans can error correct.
LLMs multiply errors over time.
beoberha · 1h ago
A series of small manageable chunks becomes a long running task :)
If LLMs are going to act as agents, they need to maintain context across these chunks.
vaenaes · 1h ago
You're holding it wrong
risho · 1h ago
Over the last week or so I have put probably close to 70 hours into playing around with Cursor, Claude Code, and a few other tools (it's become my new obsession). I've been blown away by how good and reliable it is now. That said, in my experience the only models that actually work in any sort of reliable way are Claude models. I don't care what any benchmark says, because the only thing that actually matters is actual use. I'm really hoping this new GPT model actually works for this use case, because competition is great and the price is also great.
rcarr · 18m ago
I think some of this might come down to stack as well. I watched a t3.gg video[1] recently about Convex[2] and how the nature of it leads to the AI getting it right first time more often. I've been playing around with it the last few days and I think I agree with him.
I think the dev workflow is going to fundamentally change. To maximise productivity out of this, you need multiple AIs working in parallel, so rather than jumping straight into coding we're going to end up writing a bunch of tickets in a PM tool (Linear[3] looks like it's winning the race atm), then working out (or using the AI to work out) which ones can run in parallel without causing merge conflicts, then pulling multiple tickets into your IDE/terminal and cycling through the tabs, jumping in as needed.
Atm I'm still not really doing this, but I know I need to make the switch, and I'm thinking Warp[4] might be best suited for this kind of workflow, with the occasional switch over to an IDE when you need to jump in and make some edits.
[1]: https://www.youtube.com/watch?v=gZ4Tdwz1L7k
[2]: https://www.convex.dev/
[3]: https://linear.app/
[4]: https://www.warp.dev/
Sure sounds interesting but... Where on earth do you actually find the time to sit through a 1.5 hour yt video?!
throwaway_2898 · 1h ago
How much of the product were you able to build to say it was good/reliable? IME, 70 hours can get you to a PoC that "works", but building beyond the initial set of features — like, say, a first draft of all the APIs — is different. Does it do well once you start layering features?
neuronexmachina · 28m ago
> That said the reality is in my experience the only models that actually work in any sort of reliable way are claude models.
Anecdotally, the tool updates in the latest Cursor (1.4) seem to have made tool usage in models like Gemini much more reliable. Previously it would struggle to make simple file edits, but now the edits work pretty much every time.
zarzavat · 26m ago
The magic is the prompting/tool use/finetuning.
I find that OpenAI's reasoning models write better code and are better at raw problem solving, but Claude Code is a much more useful product, even if the model itself is weaker.
ralfd · 1h ago
Just replying to ask you next week what your assessment of GPT-5 is.
Centigonal · 1h ago
Ditto here, except I'm using Roo, and it's Claude and Gemini 2.5 Pro that work for me.
pamelafox · 1h ago
I am testing out gpt-5-mini for a RAG scenario, and I'm impressed so far.
I used gpt-5-mini with reasoning_effort="minimal", and that model finally resisted a hallucination that every other model generated.
Screenshot in post here: https://bsky.app/profile/pamelafox.bsky.social/post/3lvtdyvb...
I'll run formal evaluations next.
GPT4: Collaborating with engineering, sales, marketing, finance, external partners, suppliers and customers to ensure …… etc
GPT5: I don't know.
Upon speaking these words, AI was enlightened.
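For reference, the shape of the call pamelafox describes — a sketch assuming the OpenAI Python SDK's Chat Completions interface; the system prompt and sources are invented for illustration:

    from openai import OpenAI

    client = OpenAI()

    # Sketch: gpt-5-mini with minimal reasoning effort for a grounded RAG answer.
    # The prompts below are made up; only the model name and reasoning_effort
    # keyword come from the parent comment.
    response = client.chat.completions.create(
        model="gpt-5-mini",
        reasoning_effort="minimal",
        messages=[
            {"role": "system", "content": "Answer only from the provided sources. "
                                          "If they don't contain the answer, say 'I don't know.'"},
            {"role": "user", "content": "Sources:\n<retrieved chunks here>\n\nQuestion: ..."},
        ],
    )
    print(response.choices[0].message.content)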
potatolicious · 51m ago
This feels like honestly the biggest gain/difference. I work on things that do a lot of tool calling, and the model hallucinating fake tools is a huge problem. Worse, sometimes the model will hallucinate a response directly without ever generating the tool call.
The new training rewards that suppress hallucinations and tool-skipping hopefully push us in the right direction.
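Until then, a cheap guard helps regardless of model: validate every proposed call against the tool registry before executing it. A minimal sketch — the registry and tool names here are made up:

    # Reject any tool call whose name isn't registered, before executing it.
    # Real agents would also validate the arguments against each tool's schema.
    REGISTERED_TOOLS = {"search_docs", "read_file", "run_tests"}

    def validate_tool_call(name: str, args: dict) -> None:
        if name not in REGISTERED_TOOLS:
            raise ValueError(f"Model requested unknown tool: {name!r}")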
jumploops · 1h ago
If the model is as good as the benchmarks say, the pricing is fantastic:
Input: $1.25 / 1M tokens (cached: $0.125 / 1M tokens)
Output: $10 / 1M tokens
For context, Claude Opus 4.1 is $15 / 1M for input tokens and $75/1M for output tokens.
The big question remains: how well does it handle tools? (i.e. compared to Claude Code)
Initial demos look good, but it performs worse than o3 on Tau2-bench airline, so the jury is still out.
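To make the price gap concrete, a quick back-of-the-envelope comparison using the rates above (the workload numbers are made up):

    # Hypothetical workload: 1M input tokens, 100k output tokens.
    def cost(input_toks, output_toks, in_price, out_price):
        return input_toks / 1e6 * in_price + output_toks / 1e6 * out_price

    gpt5 = cost(1_000_000, 100_000, 1.25, 10.0)   # $1.25 + $1.00 = $2.25
    opus = cost(1_000_000, 100_000, 15.0, 75.0)   # $15.00 + $7.50 = $22.50
    print(f"GPT-5: ${gpt5:.2f}  Opus 4.1: ${opus:.2f}  ratio: {opus / gpt5:.0f}x")  # 10x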
addaon · 1h ago
> Output: $10 / 1M tokens
It's interesting that they're using flat token pricing for a "model" that is explicitly made of (at least) two underlying models, one with much lower compute costs than the other, and with the user's ability to at least influence (via prompt) if not choose which model is being used. I have to assume this pricing is based on a predicted split between how often the underlying models get used; I wonder if that will hold up, if users will instead try to rouse the better model into action more than expected, or if the pricing is so padded that it doesn't matter.
mkozlows · 1h ago
That's how the browser-based ChatGPT works, but not the API.
simianwords · 1h ago
> that is explicitly made of (at least) two underlying models
what do you mean?
addaon · 44m ago
From https://openai.com/index/gpt-5-system-card/:
> a smart and fast model that answers most questions, a deeper reasoning model for harder problems, and a real-time router that quickly decides which model to use based on conversation type, complexity, tool needs, and explicit intent (for example, if you say “think hard about this” in the prompt).
In the API, there’s no router. Developers just pick whether they use the reasoning model or non-thinking ChatGPT model.
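A toy sketch of the routing idea the system card describes — everything here (the heuristic, the thresholds) is invented for illustration; OpenAI hasn't published how its router actually works:

    # Purely illustrative router: pick a model per request based on explicit
    # intent and a crude complexity heuristic. Not OpenAI's actual logic.
    def route(prompt: str) -> str:
        explicit_intent = "think hard" in prompt.lower()
        looks_complex = len(prompt) > 4000 or "prove" in prompt.lower()
        return "gpt-5-thinking" if explicit_intent or looks_complex else "gpt-5-main"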
croemer · 1h ago
> GPT‑5 also excels at long-running agentic tasks—achieving SOTA results on τ2-bench telecom (96.7%), a tool-calling benchmark released just 2 months ago.
Yes, but it does worse than o3 on the airline version of that benchmark. The prose is totally cherry-picked.
Fogest · 1h ago
How does the cost compare, though? From my understanding, o3 is pretty expensive to run. Is GPT-5 less costly? If so, and the performance is close to o3, then it may still be a good improvement.
low_tech_punk · 1h ago
I find it strange that GPT-5 is cheaper than GPT-4.1 on input tokens and only slightly more expensive on output tokens. Is that marketing, or does it actually reflect the underlying compute costs?
AS04 · 1h ago
Very likely an actual reflection. That's probably their real achievement here, and the key reason they're actually publishing it as GPT-5: more or less the best (or near it) at everything while being one model, and substantially cheaper than the competition.
mehmetoguzderin · 1h ago
Context-free grammar and regex support are exciting. I wonder whether, and how, they differ from the Lark-like CFG of llguidance, which powers the JSON schema support of the OpenAI API [^1].
[^1]: https://github.com/guidance-ai/llguidance/blob/f4592cc0c783a...
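For anyone who hasn't seen llguidance's Lark-like syntax, it looks roughly like this — a made-up toy arithmetic grammar, not taken from the GPT-5 docs:

    # A sketch of a Lark-like grammar, as a Python string you might pass to a
    # constrained-decoding library. The grammar itself is a toy example.
    arithmetic_grammar = r"""
    start: expr
    expr: term (("+" | "-") term)*
    term: NUMBER
    NUMBER: /[0-9]+/
    """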
Yeah, that was the only exciting part of the announcement for me haha. Can't wait to play around with it.
I'm already running into a bunch of issues with the structured output APIs from other companies like Google, and OpenAI has been doing a great job on this front.
Looks like they're trying to lock us into using the Responses API for all the good stuff.
jngiam1 · 9m ago
I was a little bummed that there wasn't more about better MCP support in ChatGPT, hopefully soon.
A human attempted to solve it before, yet it was never merged...
With all the great coding models OpenAI has access to, their SDK team still feels too small for the job.
catigula · 1h ago
I thought we were going to have AGI by now.
RS-232 · 1h ago
No shot. LLMs are simple text predictors and they are too stupid to get us to real AGI.
To achieve AGI, we will need to be capable of high fidelity whole brain simulations that model the brain's entire physical, chemical, and biological behavior. We won't have that kind of computational power until quantum computers are mature.
evantbyrne · 45m ago
It will be interesting to see if humans can manage to bioengineer human-level general intelligence into another species before computers.
machiaweliczny · 41m ago
I call bullshit. No need for simulation. Can be achieved via RL with some twist
bopbopbop7 · 31m ago
“some twist” is doing a lot of heavy lifting in that statement.
IAmGraydon · 24m ago
Not going to happen any time soon, if ever. LLMs are extremely useful, but the intelligence part is an illusion that nearly everyone appears to have fallen for.
henriquegodoy · 1h ago
I don't think there's much difference between Opus 4.1 and GPT-5, probably just the context size. Waiting for Gemini 3.0.
low_tech_punk · 1h ago
Tried using the gpt-5 family with the Responses API and got the error "gpt-5 does not exist or you don't have access to it". I guess they are not rolling out in lockstep with the livestream and blog article?
low_tech_punk · 45m ago
Can confirm that they are rolling out. It's working for me.
diggan · 1h ago
Seems they're doing the rollout over time; I'm not seeing it anywhere yet.
EDIT: It's out now
zaronymous1 · 34m ago
Can anyone explain why they've removed parameter controls for temperature and top-p in reasoning models, including gpt-5? It strikes me that this makes it harder to build small tasks that require high levels of consistency, and in the API I really value the ability to set certain tasks to a low temperature.
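For what it's worth, the reasoning models reportedly reject the old sampling knobs outright rather than silently ignoring them. A sketch of what that looks like with the OpenAI Python SDK — the exact error text is an assumption:

    from openai import OpenAI

    client = OpenAI()

    try:
        client.chat.completions.create(
            model="gpt-5",
            temperature=0.0,  # sampling params are unsupported on reasoning models
            messages=[{"role": "user", "content": "Extract the ISO date: March 5th, 2024"}],
        )
    except Exception as err:
        # Expect something like: "Unsupported parameter: 'temperature' is not
        # supported with this model." (wording illustrative)
        print(err)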
low_tech_punk · 1h ago
The ability to specify a context-free grammar as an output constraint? This blows my mind. How do you control the autoregressive sampling to guarantee syntactically correct output?
evnc · 1h ago
I assume they're doing "structured generation" or "guided generation", which has been possible for a while if you control the LLM itself, e.g. running an OSS model [0][1]. It's cool to see a major API provider offer it, though.
The basic idea is: at each auto-regressive step (each token generation), instead of letting the model generate a probability distribution over "all tokens in the entire vocab it's ever seen" (the default), only allow the model to generate a probability distribution over "this specific set of tokens I provide". And that set can change from one sampling step to the next, according to a given grammar. E.g. if you're using a JSON grammar, and you've just generated a `{`, you can provide the model a choice of only which tokens are valid JSON immediately after a `{`, etc.
[0] https://github.com/dottxt-ai/outlines
[1] https://github.com/guidance-ai/guidance
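A minimal sketch of that loop, assuming a HuggingFace-style causal LM and a hypothetical grammar-automaton object — the `grammar` API here is invented; libraries like outlines/llguidance provide the real equivalents:

    import torch

    def constrained_generate(model, tokenizer, prompt, grammar, max_new_tokens=256):
        """Greedy decoding where tokens that would violate the grammar are
        masked out before each step. `grammar` is a hypothetical automaton
        with initial_state/allowed_token_ids/advance/is_final methods."""
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        state = grammar.initial_state()
        for _ in range(max_new_tokens):
            logits = model(ids).logits[0, -1]             # next-token logits
            mask = torch.full_like(logits, float("-inf"))
            mask[grammar.allowed_token_ids(state)] = 0.0  # keep only legal tokens
            next_id = int(torch.argmax(logits + mask))
            ids = torch.cat([ids, torch.tensor([[next_id]])], dim=-1)
            state = grammar.advance(state, next_id)
            if grammar.is_final(state) or next_id == tokenizer.eos_token_id:
                break
        return tokenizer.decode(ids[0], skip_special_tokens=True)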
You sample only from tokens that could possibly result in a valid production for the grammar. It's an inference-only thing.
low_tech_punk · 1h ago
ah, thanks!
timhigins · 1h ago
I opened up the developer playground and the model selection dropdown showed GPT-5 and then it disappeared. Also I don't see it in ChatGPT Pro. What's up?
Fogest · 1h ago
It's probably being throttled due to high usage.
IAmGraydon · 23m ago
Not showing in my Pro account either. As someone else mentioned, I’m sure it’s throttling due to high use right now.
jaflo · 58m ago
I just wish their realtime audio pricing would go down but it looks like GPT-5 does not have support for that so we’re stuck with the old models.
6thbit · 1h ago
Seems they have quietly increased the context window up to 400,000
https://platform.openai.com/docs/models/gpt-5
So, at least twice as large a context as those.
I wonder how good it is compared to Claude Sonnet 4, and when it's coming to GitHub Copilot.
I almost exclusively wrote and released https://github.com/andrewmcwattersandco/git-fetch-file yesterday with GPT 4o and Claude Sonnet 4, and the latter's agentic behavior was quite nice. I barely had to guide it, and was able to quickly verify its output.
ivape · 4m ago
Musk after the GPT-5 launch: "OpenAI is going to eat Microsoft alive"
https://x.com/elonmusk/status/1953509998233104649
Anyone know why he said that?
What the fuck?
Nobody else saw the Cursor CEO looking through the GPT-5-generated code, mindlessly scrolling, saying "this looks roughly correct, I would love to merge that" LOL
You can't make this up
siva7 · 2m ago
Amazing time to be alive, if only for this clown show.
isoprophlex · 4m ago
This is the ideal software engineer. You may not like it, but this is what peak software engineering looks like. /s