OpenAI's new GPT-5 models announced early by GitHub

62 bkolobara 52 8/7/2025, 8:06:48 AM theverge.com ↗

Comments (52)

deepdarkforest · 2h ago
> It handles complex coding tasks with minimal prompting...

I find it interesting how marketers are trying to make minimal prompting a good thing, a direction to optimize for. Even if I'm talking to a senior engineer, I try to be as specific as possible to avoid ambiguities etc. Pushing the models to just do what they think is best is a weird direction. There are so many subtle things/understandings of the architecture that are just in my head or a colleague's head. Meanwhile, I found that a very good workflow is asking Claude Code to come back with clarifying questions and then a plan, before just starting to execute.

ls-a · 2h ago
This works well with managers. They think that if the task title in Jira is a one-liner, then it's that simple to implement.
consp · 1h ago
Usually it's exactly the opposite: more often than not you get a vague one-liner because the constraints and requirements are missing. Infinite hours it is ...
ls-a · 1h ago
Then you have engineers that will agree with the manager on everything
KronisLV · 1h ago
> Meanwhile, i found that a very good workflow is asking claude code to come back with clarifying questions and then a plan, before just starting to execute.

RooCode supports various modes https://docs.roocode.com/basic-usage/using-modes

For example, you can first use the Ask mode to explore the codebase and answer your questions, as well as have it ask its own questions about what you want to do. Then you can switch over to the Code mode for the actual implementation; in the other modes the model itself will ask you to switch, because it's not allowed to change files in Ask mode.

I think that approach works pretty well, especially when you document what needs to be done in a separate Markdown file or something along those lines, which can then be referenced if you have to clear the context, e.g. for a new refactoring task on what's been implemented.

> I find it interesting how marketers are trying to make minimal prompting a good thing, a direction to optimize.

This seems like a good thing, though. You're still allowed to be as specific as you want to, but the baseline is a bit better.

igleria · 1h ago
> I find it interesting how marketers are trying to make minimal prompting a good thing

They do that because IMHO the average person seems to prefer something to be easy, rather than correct.

c048 · 1h ago
This is why I don't listen at all to the fearmongers that say programmers will disappear. At most, our jobs will slightly change.

There will always be people who describe a problem, and you'll always need people to figure out what's actually wrong.

croes · 1h ago
The problem isn’t the AI but the management that believes the PR. It doesn’t matter whether AI can actually replace developers; what matters is whether management thinks it can.
ACCount36 · 1h ago
What makes you look at existing AI systems and then say "oh, this totally isn't capable of describing a problem or figuring out what's actually wrong"? Let alone "this wouldn't EVER be capable of that"?
satyrun · 2m ago
At this point it is just straight denial.

Like when a relationship is obviously over. Some people enjoy the fleeting final moments, while others delude themselves that they just have to get over the hump and things will go back to normal.

benterix · 1h ago
> What makes you look at existing AI systems and then say "oh, this totally isn't capable of describing a problem or figuring out what's actually wrong"?

I wouldn't say they're completely incapable.

* They can spot (and fix) low hanging fruit instantly

* They will also "fix" things that were left in there for a reason and break things completely

* Even if the code base fits entirely in their context window, along with the complete company knowledge base, including Slack conversations etc., the proposed solutions sometimes take a very strange turn, in spite of being correct 57.8% of the time.

ACCount36 · 1h ago
That's about right. And this kind of performance wouldn't be concerning if AI performance didn't keep going up over time.

Today's AI systems are the worst they'll ever be. If AI is already capable of doing something, you should expect it to become more capable of it in the future.

binary132 · 59m ago
why is “the worst they’ll ever be” such a popular meme with the AI inevitabilist crowd and how do we make their brains able to work again?
ACCount36 · 45m ago
It's popular because it's true.

By now, the main reason people expect AI progress to halt is cope. People say "AI progress is going to stop, any minute now, just you wait" because the alternative makes them very, very uncomfortable.

IsTom · 32m ago
We're somewhere on an S-curve and you can't really determine on which part by just looking at the past progress.
croes · 1h ago
That’s not how it works. There are already cases where the fix of one problem made a previous existing capability worse.
ACCount36 · 50m ago
That's exactly how it works. Every input of AI performance improves over time, and so do the outcomes.

Can you damage existing capabilities by overly specializing an AI in something? Yes. Would you expect that damage to stick around forever? No.

OpenAI damaged o3's truthfulness by frying it with too much careless RL. But Anthropic's Opus 4 proves that you can get similar task performance gains without sacrificing truthfulness. And then OpenAI comes back swinging with an algorithmic approach to train their AIs for better truthfulness specifically.

croes · 1h ago
Turn the question around: "oh, this totally is capable of describing a problem and figuring out what's actually wrong".

Even a broken clock is right twice a day.

The question is reliability.

What worked today may not work tomorrow and vice versa.

nojito · 1h ago
Because people are overprompting and creating crazy elaborate harnesses. My prompts are maybe 1 - 2 sentences.

There is a definite skill gap between folks who are using these tools effectively and those who are not.

Ratelman · 2h ago
Interesting/unfortunate/expected that GPT-5 isn't touted as AGI or some other outlandish claim. It's just improved reasoning etc. I know it's not the actual announcement and it's just a single page accidentally released, but it at least seems more grounded...? Have to wait and see what the actual announcement entails.
throwaway525753 · 1h ago
At this point it's pretty obvious that the easy scaling gains have been made already and AI labs are scrounging for tricks to milk out extra performance from their huge matrix product blobs:

- Reasoning, which is just very long inference coupled with RL

- Tool use, aka an LLM with glue code to call programs based on its output

- "Agents", aka LLMs with tools in a loop

Those are pretty neat tricks, mind you, and not at all trivial to get actionable results from, engineering-wise. But the days of the qualitative intelligence leaps from GPT-2 to 3, or 3 to 4, are over. Sure, benchmarks do get saturated, but at incredible cost, forcing AI researchers to make up new "dimensions of scaling" as the ones they were previously banking on stall. And meanwhile it's all your basic next-token-prediction blob running it all, just with a few optimizing tricks.
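(To make the "LLMs with tools in a loop" framing concrete: it can be sketched roughly like below. This is a toy illustration with a stubbed-out model and an invented text protocol, not any vendor's actual API.)

```python
# Toy sketch of the "agent = LLM + tools in a loop" pattern.
# `fake_llm` stands in for a real model call; real agents parse
# structured tool-call output from the model instead of this
# made-up "CALL ..." / "ANSWER ..." convention.

def calculator(expression: str) -> str:
    """A 'tool' the agent can invoke (eval with builtins stripped)."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm(history):
    """Stub model: requests a tool once, then produces a final answer."""
    if not any(msg.startswith("tool:") for msg in history):
        return "CALL calculator 2 + 3"
    result = history[-1].removeprefix("tool:")
    return f"ANSWER The result is {result}"

def run_agent(task: str) -> str:
    history = [f"user:{task}"]
    for _ in range(10):  # the loop, with a step budget
        reply = fake_llm(history)
        if reply.startswith("CALL "):
            _, name, args = reply.split(" ", 2)
            # Run the requested tool and feed the output back to the model
            history.append("tool:" + TOOLS[name](args))
        else:
            return reply.removeprefix("ANSWER ")
    return "step budget exhausted"

print(run_agent("what is 2 + 3?"))  # → The result is 5
```

The point of the sketch is that the "agent" itself is just glue code: the model decides, the loop executes tools and feeds results back until the model stops asking.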

My hunch is that there won't be a wondrous, life-changing AGI (poorly defined anyway), just consolidation of existing gains (distillation, small language models, MoE, quality datasets, etc.) and a search for new dimensions and sources of data (biological data and 'sense-data' for robotics come to mind).

binary132 · 55m ago
This is the worst they’ll ever be! It’s not just going to be an ever slower asymptotic improvement that never quite manages to reach escape velocity but keeps costing orders of magnitude more to research, train, and operate….
nialv7 · 2h ago
I wonder whether the markets will crash if GPT-5 flops, because it might be the model that cements the idea that, yes, we have hit a wall.
qsort · 2h ago
I'm the first to call out ridiculous behavior by AI companies, but short of something massively below expectations this can't be bad for OpenAI. GPT-5 is going to be positioned as a product for the general public first and foremost. Not everyone cares about coding benchmarks.
benterix · 1h ago
> massively below expectations

Well, the problem is that the expectations are already massive, mostly thanks to sama's strategy of attracting VC.

nialv7 · 1h ago
llama 4 basically (arguably) destroyed Meta's LLM lab, and it wasn't even that bad of a model.
ben_w · 1h ago
OpenAI's announcements are generally a lot more grounded than the hype surrounding them and their stuff.

e.g. if you look at Altman's blog post about "superintelligence in a few thousand days", what he actually wrote doesn't even disagree with LeCun (famously a naysayer) about the timeline.

Imustaskforhelp · 2h ago
Yeah, I guess it wouldn't be that big but it will have a lot of hype around it.

I doubt it can even beat Opus 4.1.

bkolobara · 3h ago
The actual announcement (now deleted on GitHub's blog): https://archive.is/IoMEg
billytrend · 2h ago
Did they photoshop the screenshot from https://github.blog/changelog/2025-05-19-github-models-built... ? Other than the model id, it’s identical.
ukblewis · 1h ago
I get that it looks suspicious, but here’s the archive link: https://archive.is/2025.08.07-035308/https://github.blog/cha...
nxobject · 1h ago
Is the announcement implying that "mainline" GPT-5 is now a reasoning model?

> gpt-5: Designed for logic and multi-step tasks.

blixt · 1h ago
I think the promise back when all the separate reasoning / multimodal models were out was that GPT-5 would be the model to bring it all together (which mostly comes down to audio/video I think since o3/o4 do images really well).
om8 · 1h ago
Of course it is. GPT-5 is one of the most anticipated things in AI right now. To live up to the hype, it needs to be a reasoning model.
ed_mercer · 1h ago
Damn interns!
therodeoen · 3h ago
they are comparing it to llama 4 and cohere v2 in the image…
fnord77 · 2h ago
sama posted a picture of the death star yesterday