OpenAI delays launch of open-weight model

137 points by martinald | 7/12/2025, 1:07:33 AM | twitter.com

Comments (100)

Y_Y · 2h ago
My hobby: monetizing cynicism.

I go on Polymarket and find things that would make me happy or optimistic about society and tech, and then bet a couple of dollars (of some shitcoin) against them.

e.g. OpenAI releasing an open weights model before September is trading at 81% at time of writing - https://polymarket.com/event/will-openai-release-an-open-sou...

Last month I was up about ten bucks because OpenAI wasn't open, the ceasefire wasn't a ceasefire, and the climate metrics got worse. You can't hedge away all the existential despair, but you can take the sting out of it.
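
For the curious, the payout math is simple. A minimal sketch, assuming Polymarket-style binary contracts that pay $1 per share on resolution and ignoring fees (numbers are illustrative):

    # Sketch of the "bet against your hopes" payout, assuming binary
    # contracts that pay $1 per share if your side resolves true (fees ignored).
    def hedge_profit(stake: float, no_price: float) -> float:
        shares = stake / no_price       # NO shares bought at the quoted price
        return shares * 1.0 - stake     # each share pays out $1, minus the stake

    # YES at 81% implies NO at roughly $0.19 per share.
    print(round(hedge_profit(stake=2.0, no_price=0.19), 2))  # ~8.53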

heeton · 2h ago
My friend does this and calls it “hedging humanity”. Every time some big political event has happened that bums me out, he’s made a few hundred.
hereme888 · 2h ago
people still use crypto? I thought the hype died around the time when AI boomed.
yorwba · 34m ago
People use crypto on Polymarket because it doesn't comply with gambling regulations, so in theory isn't allowed to have US customers. Using crypto as an intermediary lets Polymarket pretend not to know where the money is coming from. Though I think a more robust regulator would call them out on the large volume of betting on US politics on their platform...
unsupp0rted · 1h ago
Bitcoin is higher than ever. People can't wait until it gets high enough that they can sell it for dollars, and use those dollars to buy things and make investments in things that are valuable.
esperent · 1h ago
> Bitcoin is higher than ever

That's just speculation though. I saw a cynical comment on reddit yesterday that unfortunately made a lot of sense. Many people now are just so certain that the future of work is not going to include many humans, so they're throwing everything into stocks and crypto, which is why they remain so high even in the face of so much political uncertainty. It's not that people are investing because they have hope. People are just betting everything as a last ditch survival attempt before the robots take over.

Of course this is hyperbolic - market forces are never that simple. But I think there might be some truth to it.

dmd · 5m ago
What does 'just' mean here? The monetary value of a thing is what people will pay you for it. Full stop.
ben_w · 1h ago
Unfortunately, crypto hype is still high, and I think still on the up, but that's vibes, not market analysis.
adidoit · 7h ago
Not sure if it's coincidental that OpenAI's open weights release got delayed right after an ostensibly excellent open weights model (Kimi K2) got released today.

https://moonshotai.github.io/Kimi-K2/

OpenAI know they need to raise the bar with their release. It can't be a middle-of-the-pack open weights model.

sigmoid10 · 1h ago
They might also be focusing all their work on beating Grok 4 now, since xAI has a significant edge in accumulating computing power and they have opened a considerable gap on raw intelligence tests like ARC and HLE. OpenAI is in this to win the competitive race, not the open one.
unsupp0rted · 1h ago
> They might also be focusing all their work on beating Grok 4 now,

With half the key team members they had a month prior

sigmoid10 · 40m ago
I'm starting to think talent is way less concentrated in these individuals than execs would have investors believe. While all those people who left OpenAI certainly have the ability to raise ridiculous sums of venture capital in all sorts of companies, Anthropic remains the only offspring that has actually reached a level where they can go head-to-head with OpenAI. Zuck now spending billions on snatching those people seems more like a move out of desperation than a real plan.
lossolo · 7h ago
This could be it, especially since they announced last week that it would be the best open-source model.
reactordev · 4h ago
Technically they were right when they said it, in their minds. Things are moving so fast that in a week, it will be true again.
ryao · 8h ago
Am I the only one who thinks mention of “safety tests” for LLMs is a marketing scheme? Cars, planes and elevators have safety tests. LLMs don’t. Nobody is going to die if a LLM gives an output that its creators do not like, yet when they say “safety tests”, they mean that they are checking to what extent the LLM will say things they do not like.
natrius · 8h ago
An LLM can trivially instruct someone to take medications with adverse interactions, steer a mental health crisis toward suicide, or make a compelling case that a particular ethnic group is the cause of your society's biggest problem so they should be eliminated. Words can't kill people, but words can definitely lead to deaths.

That's not even considering tool use!

thayne · 7h ago
Part of the problem is due to the marketing of LLMs as more capable and trustworthy than they really are.

And the safety testing actually makes this worse, because it leads people to trust that LLMs are less likely to give dangerous advice, when they could still do so.

jdross · 34m ago
Spend 15 minutes talking to a person in their 20s about how they use ChatGPT to work through issues in their personal lives and you'll see how much they already trust the “advice” and other information produced by LLMs.

Manipulation is a genuine concern!

ryao · 8h ago
This is analogous to saying a computer can be used to do bad things if it is loaded with the right software. Coincidentally, people do load computers with the right software to do bad things, yet people are overwhelmingly opposed to measures that would stifle such things.

If you hook up a chat bot to a chat interface, or add tool use, it is probable that it will eventually output something that it should not and that output will cause a problem. Preventing that is an unsolved problem, just as preventing people from abusing computers is an unsolved problem.

0points · 1h ago
> This is analogous to saying a computer can be used to do bad things if it is loaded with the right software.

It's really not. Parent's examples are all out-of-the-box behavior.

ronsor · 8h ago
As the runtime of any program approaches infinity, the probability of the program behaving in an undesired manner approaches 1.
ryao · 8h ago
That is not universally true. The yes program is a counter example:

https://www.man7.org/linux/man-pages/man1/yes.1.html
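
For anyone unfamiliar, yes just prints its argument (default "y") forever. A minimal sketch of its behavior, roughly equivalent in Python:

    # Minimal sketch of yes(1): print the argument (default "y") forever.
    # However long it runs, the output never deviates from this.
    import sys

    def yes(word: str = "y") -> None:
        while True:
            print(word)

    if __name__ == "__main__":
        yes(" ".join(sys.argv[1:]) or "y")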

cgriswald · 7h ago
Devil's advocate:

(1) Execute yes (with or without arguments, whatever you desire).

(2) Let the program run as long as you desire.

(3) When you stop desiring the program to spit out your argument,

(4) Stop the program.

Between (3) and (4) some time must pass. During this time the program is behaving in an undesired way. Ergo, yes is not a counter example of the GP's claim.

ryao · 7h ago
I upvoted your reply for its clever (ab)use of ambiguity to say otherwise to a fairly open and shut case.

That said, I suspect the other person was actually agreeing with me, and tried to state that software incorporating LLMs would eventually malfunction by stating that this is true for all software. The yes program was an obvious counter example. It is almost certain that all LLMs will eventually generate some output that is undesired given that it is determining the next token to output based on probabilities. I say almost only because I do not know how to prove the conjecture. There is also some ambiguity in what is a LLM, as the first L means large and nobody has made a precise definition of what is large. If you look at literature from several years ago, you will find people saying 100 million parameters is large, while some people these days will refuse to use the term LLM to describe a model of that size.

cgriswald · 7h ago
Thanks, it was definitely tongue-in-cheek. I agree with you on both counts.
pesfandiar · 7h ago
Society has accepted that computers bring more benefit than harm, but LLMs could still get pushback due to bad PR.
bilsbie · 8h ago
PDFs can do this too.
xigoi · 4h ago
In such a case, the author of the PDF can be held responsible.
jiggawatts · 7h ago
Twitter does it at scale.
selfhoster11 · 4h ago
Yes, and a table saw can take your hand. As can a whole variety of power tools. That does not render them illegal to sell to adults.
ZiiS · 3h ago
It does render them illegal to sell without studying their safety.
vntok · 3h ago
An interesting comparison.

Table saws sold all over the world are inspected and certified by trusted third parties to ensure they operate safely. They are illegal to sell without the approval seal.

Moreover, table saws sold in the United States & EU (at least) have at least 3 safety features (riving knife, blade guard, antikickback device) designed to prevent personal injury while operating the machine. They are illegal to sell without these features.

Then of course there are additional devices like SawStop, but that is not mandatory yet as far as I'm aware. It should be in a few years though.

LLMs have none of those certification labels or safety features, so I'm not sure what your point was exactly?

xiphias2 · 2h ago
They are somewhat self-regulated, as they can cause permanent damage to the company that releases them, and they are meant for general consumers without any training, unlike table saws, which are meant for trained people.

An example is Microsoft's early Tay bot, which started to go extreme right-wing when people realized how to push it in that direction. Grok had a similar issue recently.

Google had racial issues with its image generation (and earlier with image detection). Again something that people don't forget.

Also, an OpenAI GPT-4o release was encouraging stupid things when people asked stupid questions, and they had to roll it back recently.

Of course I'm not saying that's the real reason (somehow they never admit that performance is the reason for not releasing something), but safety matters with consumer products.

latexr · 1h ago
> They are somewhat self-regulated, as they can cause permanent damage to the company that releases them

And then you proceed to give a number of examples of that not happening. Most people already forgot those.

anonymoushn · 4h ago
The closed weights models from OpenAI already do these things though
pyuser583 · 5h ago
The problem is “safety” prevents users from using LLMs to meet their requirements.

We typically don’t critique users’ requirements, at least not in terms of functionality.

The marketing angle is that this measure is needed because LLMs are “so powerful it would be unethical not to!”

AI marketers are continually emphasizing how powerful their software is. “Safety” reinforces this.

“Safety” also brings up many of the debates “mis/disinformation” brings up. Misinformation concerns consistently overestimate the power of social media.

I’d feel much better if “safety” focused on preventing unexpected behavior, rather than evaluating the motives of users.

bongodongobob · 8h ago
Books can do this too.
derektank · 5h ago
Major book publishers have sensitivity readers that evaluate whether or not a book can be "safely" published nowadays. And even historically there have always been at least a few things publishers would refuse to print.
selfhoster11 · 4h ago
All it means is that the Overton window on "should we censor speech" has shifted in the direction of less freedom.
ben_w · 1h ago
There's a reason the inheritors of the copyright* refused to allow more copies of Mein Kampf to be produced until that copyright expired.

* the federal state of Bavaria

buyucu · 3h ago
At the end of the day an LLM is just a machine that talks. It might say silly things, bad things, nonsensical things, or even crazy insane things. But at the end of the day, it just talks. Words don't kill.

LLM safety is just a marketing gimmick.

hnaccount_rng · 2h ago
We absolutely regulate which words you can use in certain areas. Take instructions on medicine for one example
123yawaworht456 · 8h ago
does your CPU, your OS, your web browser come with ~~built-in censorship~~ safety filters too?

AI 'safety' is one of the most neurotic twitter-era nanny bullshit things in existence, blatantly obviously invented to regulate small competitors out of existence.

no_wizard · 7h ago
It isn’t. This is dismissive without first thinking through the difference of application.

AI safety is about proactive safety. For example: if an AI model is going to be used to screen hiring applications, making sure it doesn’t have any weighted racial biases.

The difference here is that it’s not reactive. Reading a book with a racial bias would be the inverse; where you would be reacting to that information.

That’s the basis of proper AI safety in a nutshell

ryao · 7h ago
As someone who has reviewed people’s résumés that they submitted with job applications in the past, I find it difficult to imagine this. The résumés that I saw had no racial information. I suppose the names might have some correlation to such information, but anyone feeding these things into a LLM for evaluation would likely censor the name to avoid bias. I do not see an opportunity for proactive safety in the LLM design here. It is not even clear that they even are evaluating whether there is bias in such a scenario when someone did not properly sanitize inputs.
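
The sanitization step described above is easy enough to sketch, assuming the applicant's name is available as a structured field (illustrative only; a real pipeline would want proper PII detection rather than a regex):

    # Illustrative sketch of stripping the applicant's name before the
    # resume text reaches a model. Not a robust anonymizer.
    import re

    def redact_name(resume_text: str, applicant_name: str) -> str:
        for part in applicant_name.split():
            resume_text = re.sub(re.escape(part), "[REDACTED]",
                                 resume_text, flags=re.IGNORECASE)
        return resume_text

    print(redact_name("Jane Doe, Software Engineer, 5 years at Acme", "Jane Doe"))
    # -> "[REDACTED] [REDACTED], Software Engineer, 5 years at Acme"
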
kalkin · 5h ago
> I find it difficult to imagine this

Luckily, this is something that can be studied and has been. Sticking a stereotypically Black name on a resume on average substantially decreases the likelihood that the applicant will get past a resume screen, compared to the same resume with a generic or stereotypically White name:

https://www.npr.org/2024/04/11/1243713272/resume-bias-study-...

bigstrat2003 · 5h ago
That is a terrible study. The stereotypically black names are not just stereotypically black, they are stereotypical for the underclass of trashy people. You would also see much higher rejection rates if you slapped stereotypical white underclass names like "Bubba" or "Cleetus" on resumes. As is almost always the case, this claim of racism in America is really classism and has little to do with race.
stonogo · 4h ago
"Names from N.C. speeding tickets were selected from the most common names where at least 90% of individuals are reported to belong to the relevant race and gender group."

Got a better suggestion?

thayne · 7h ago
> but anyone feeding these things into a LLM for evaluation would likely censor the name to avoid bias

That should really be done for humans reviewing the resumes as well, but in practice that isn't done as much as it should be

selfhoster11 · 4h ago
If you're deploying LLM-based decision making that affects lives, you should be the one held responsible for the results. If you don't want to do due diligence on automation, you can screen manually instead.
jowea · 4h ago
Social media does. Even person to person communication has laws that apply to it. And the normal self-censorship a normal person will engage in.
123yawaworht456 · 3h ago
okay. and? there are no AI 'safety' laws in the US.

without OpenAI, Anthropic and Google's fearmongering, AI 'safety' would exist only in the delusional minds of people who take sci-fi way too seriously.

https://en.wikipedia.org/wiki/Regulatory_capture

for fuck's sake, how more obvious could they be? sama himself went on a world tour begging for laws and regulations, only to purge safetyists a year later. if you believe that he and the rest of his ilk are motivated by anything other than profit, smh tbh fam.

it's all deceit and delusion. China will crush them all, inshallah.

derektank · 5h ago
iOS certainly does, by limiting you to the App Store and restricting what apps are available there.
selfhoster11 · 4h ago
They have been forced to open up to alternative stores in the EU. This is unequivocally a good thing, and a victory for consumer rights.
olalonde · 7h ago
Especially since "safety" in this context often just means making sure the model doesn't say things that might offend someone or create PR headaches.
SV_BubbleTime · 5h ago
Don’t draw pictures of celebrities.

Don’t discuss making drugs or bombs.

Don’t call yourself MechaHitler… which, I don’t care, that whole scenario was objectively funny in its sheer ridiculousness.

recursive · 8h ago
I also think it's marketing but kind of for the opposite reason. Basically I don't think any of the current technology can be made safe.
nomel · 7h ago
Yes, perfection is difficult, but it's relative. It can definitely be made much safer. Looking at the analysis of pre vs post alignment makes this obvious, including when the raw unaligned models are compared to "uncensored" models.
simianwords · 2h ago
I hope the same people questioning AI safety (which is reasonable) aren’t also concerned about Grok over the recent incident.

You have to understand that a lot of people do care about these kind of things.

ignoramous · 47m ago
> Nobody is going to die

Callous. Software does have real impact on real people.

Ex: https://news.ycombinator.com/item?id=44531120

eviks · 8h ago
Why is your definition of safety so limited? Death isn't the only type of harm...
ryao · 8h ago
There are other forms of safety, but whether a digital parrot says something that people do not like is not a form of safety. They are abusing the term safety for marketing purposes.
eviks · 8h ago
You're abusing the terms by picking either the overly limited ("death") or overly expansive ("not like") definitions to fit your conclusion. Unless you reject the fact that harm can come from words/images, a parrot can parrot harmful words/images, so be unsafe.
jazzyjackson · 6h ago
it's like complaining about bad words in the dictionary

the bot has no agency, the bot isn't doing anything, people talk to themselves, augmenting their chain of thought with an automated process. If the automated process is acting in an undesirable manner, the human that started the process can close the tab.

Which part of this is dangerous or harmful?

ryao · 8h ago
The maxim “sticks and stones can break my bones, but words can never hurt me” comes to mind here. That said, I think this misses the point that the LLM is not a gatekeeper to any of this.
jiggawatts · 7h ago
I find it particularly irritating that the models are so overly puritan that they refuse to translate subtitles because they mention violence.
eviks · 8h ago
Don't let your mind's potential be limited by such primitive slogans!
jrflowers · 8h ago
> Am I the only one who thinks mention of “safety tests” for LLMs is a marketing scheme?

It is. It is also part of Sam Altman’s whole thing about being the guy capable of harnessing the theurgical magicks of his chat bot without shattering the earth. He periodically goes on Twitter or a podcast or whatever and reminds everybody that he will yet again single-handedly save mankind. Dude acts like he’s Buffy the Vampire Slayer

ks2048 · 8h ago
You could be right about this being an excuse for some other reason, but lots of software has “safety tests” beyond life or death situations.

Most companies, for better or worse (I say for better) don’t want their new chatbot to be a RoboHitler, for example.

ryao · 8h ago
It is possible to turn any open weight model into that with fine tuning. It is likely possible to do that with closed weight models, even when there is no creator provided sandbox for fine tuning them, through clever prompting and trying over and over again. It is unfortunate, but there really is no avoiding that.

That said, I am happy to accept the term safety used in other places, but here it just seems like a marketing term. From my recollection, OpenAI had made a push to get regulation that would stifle competition by talking about these things as dangerous and needing safety. Then they backtracked somewhat when they found the proposed regulations would restrict themselves rather than just their competitors. However, they are still pushing this safety narrative that was never really appropriate. They have a term for this called alignment and what they are doing are tests to verify alignment in areas that they deem sensitive so that they have a rough idea to what extent the outputs might contain things that they do not like in those areas.

halfjoking · 4h ago
It's overblown. Elon shipped Hitler grok straight to prod

Nobody died

pona-a · 1h ago
Playing devil's advocate, what if it was more subtle?

Prolonged use of conversational programs does reliably induce certain mental states in vulnerable populations. When ChatGPT got a bit too agreeable, that was enough for a man to kill himself in a psychotic episode [1]. I don't think this magnitude of delusion was possible with ELIZA, even if the fundamental effect remains the same.

Could this psychosis be politically weaponized by biasing the model to include certain elements in its responses? We know this rhetoric works: cults have been using love-bombing, apocalypticism, us-vs-them dynamics, assigned special missions, and isolation from external support systems to great success. What we haven't seen is what happens when everyone has a cult recruiter in their pocket, waiting for a critical moment to offer support.

ChatGPT has an estimated 800 million weekly active users [2]. How many of them would be vulnerable to indoctrination? About 3% of the general population has been involved in a cult [3], but that might be a reflection of conversion efficiency, not vulnerability. Even assuming 5% are vulnerable, that's still 40 million people ready to sacrifice their time, possessions, or even their lives in their delusion.

[1] https://www.rollingstone.com/culture/culture-features/chatgp...

[2] https://www.forbes.com/sites/martineparis/2025/04/12/chatgpt...

[3] https://www.peopleleavecults.com/post/statistics-on-cults

mystraline · 9h ago
To be completely and utterly fair, I trust Deepseek and Qwen (Alibaba) more than American AI companies.

American AI companies have shown they are money and compute eaters, and massively so at that. Billions later, and well, not much to show.

But DeepSeek cost $5M to develop, and they came up with multiple novel ways to train.

Oh, and their models and code are all FLOSS. The US companies are closed. Basically, the US AI companies are too busy treating each other as vultures.

kamranjon · 8h ago
Actually the majority of Google models are open source and they also were pretty fundamental in pushing a lot of the techniques in training forward - working in the AI space I’ve read quite a few of their research papers and I really appreciate what they’ve done to share their work and also release their models under licenses that allow you to use them for commercial purposes.
simonw · 8h ago
"Actually the majority of Google models are open source"

That's not accurate. The Gemini family of models are all proprietary.

Google's Gemma models (which are some of the best available local models) are open weights but not technically OSI-compatible open source - they come with usage restrictions: https://ai.google.dev/gemma/terms

kamranjon · 7h ago
You’re ignoring the T5 series of models that were incredibly influential, the T5 models and their derivatives (FLAN-T5, Long-T5, ByT5, etc) have been downloaded millions of times on huggingface and are real workhorses. There are even variants still being produced within the last year or so.

And yeah, the Gemma series is incredible, and while it may not meet the OSI standard, I consider them to be pretty open as far as local models go. And it’s not just the standard Gemma variants; Google is releasing other incredible Gemma models that I don’t think people have really even caught wind of yet, like MedGemma, of which the 4b variant has vision capability.

I really enjoy their contributions to the open source AI community and think it’s pretty substantial.

NitpickLawyer · 4h ago
> But Deepseek cost $5M to develop, and made multiple novel ways to train

This is highly contested, and was either a big misunderstanding by everyone reporting it, or maliciously placed there (by a quant company, right before the stock fell a lot for nvda and the rest) depending on who you ask.

If we're being generous and assume no malicious intent (big if), anyone who has trained a big model can tell you that the cost of 1 run is useless in the big scheme of things. There is a lot of cost in getting there, in the failed runs, in the subsequent runs, and so on. The fact that R2 isn't there after ~6 months should say a lot. Sometimes you get a great training run, but no-one is looking at the failed ones and adding up that cost...

jampa · 4h ago
They were pretty explicit that this was only the GPU-hour cost, in USD, of the final run. Journalists and Twitter tech bros just saw an easy headline there. It's the same with Sandfall, the developer of Clair Obscur, where people say the game was made by 30 people when there were 200 people involved.
badsectoracula · 1h ago
These "200 people" were counted from the credits, which list pretty much everyone who even sniffed in the studio's general direction. The studio itself is ~30 people (I just went and checked their website; they have a team list with photos for everyone). The rest are contractors, whose contributions usually vary wildly. Besides, credits are free, so unless the company is petty (see Rockstar not crediting people who leave before a game is released, even if they worked on it for years), people err on the side of crediting everyone. Personally, I've been credited on a game that used a library I wrote once, and I learned about it years after the release.

Most importantly, those who mention that the game was made by 30 people do it to compare it with other, much larger teams of hundreds if not thousands of people, and those teams use contractors too!

NitpickLawyer · 1h ago
> They were pretty explicit that this was only the cost in GPU hours to USD for the final run.

The researchers? Yes.

What followed afterwards, I'm not so sure. There were clearly some "cheap headlines" in the media, but there was also some weird coverage being pushed everywhere, from weird TLDs, all pushing "NVDA is dead, DeepSeek is cheap, you can run it on Raspberry Pis", etc. That might have been a campaign designed to help short the stocks.

Aunche · 8h ago
$5 million was the gpu hour cost of a single training run.
dumbmrblah · 7h ago
Exactly. Not to minimize DeepSeek's tremendous achievement, but that $5 million was just for the training run, not the GPUs they purchased beforehand, or all the OpenAI API calls they likely used to assist in synthetic data generation.
baobabKoodaa · 38m ago
> American AI companies have shown they are money and compute eaters

Don't forget they also quite literally eat books

ryao · 9h ago
Wasn’t that figure just the cost of the GPUs and nothing else?
rpdillon · 8h ago
Yeah, I hate that this figure keeps getting thrown around. IIRC, it's the price of 2048 H800s for 2 months at $2/hour/GPU. If you consider months to be 30 days, that's around $5.9M, which roughly lines up. What doesn't line up is ignoring the costs of facilities, salaries, non-cloud hardware, etc., which I'd expect to dominate. $100M seems like a fairer estimate, TBH. The original paper had more than a dozen authors, and DeepSeek had about 150 researchers working on R1, which supports the notion that personnel costs would likely dominate.
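
Reproducing that back-of-envelope with the figures above (final run only; no salaries, failed runs, or facilities):

    # Back-of-envelope using only the figures quoted above.
    gpus = 2048                     # H800s
    hours = 2 * 30 * 24             # two 30-day months
    rate_usd = 2.0                  # per GPU-hour
    print(gpus * hours * rate_usd)  # 5898240.0 -> roughly $5.9M for the final run
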
moralestapia · 6h ago
>ignoring the costs of facilities, salaries, non-cloud hardware, etc.

If you lease, those costs are amortized. It was definitely more than $5M, but I don't think it was as high as $100M. All things considered, I still believe Deepseek was trained at one (perhaps two) orders of magnitude lower cost than other competing models.

3eb7988a1663 · 8h ago
That is also just the final production run. How many experimental runs were performed before starting the final batch? It could be some ratio like 10 hours of research to every one hour of final training.
IncreasePosts · 8h ago
Deepseek R1 was trained at least partially on the output of other LLMs. So, it might have been much more expensive if they needed to do it themselves from scratch.
nomel · 8h ago
Lawsuit, since it was against OpenAI TOS: https://hls.harvard.edu/today/deepseek-chatgpt-and-the-globa...
buyucu · 2h ago
Deepseek is far more worthy of the name OpenAI than Sam Altman's ClosedAI.
refulgentis · 8h ago
> Billions later, and well, not much to show.

This is obviously false, I'm curious why you included it.

> Oh, and their models and code are all FLOSS.

No?

krackers · 9h ago
Probably the results were worse than the K2 model released today. No serious engineer would say it's for "safety" reasons, given that ablation nullifies any safety post-training.
simonw · 8h ago
I'm expecting (and indeed hoping) that the open weights OpenAI model is a lot smaller than K2. K2 is 1 trillion parameters and almost a terabyte to download! There's no way I'm running that on my laptop.

I think the sweet spot for local models may be around the 20B size - that's Mistral Small 3.x and some of the Gemma 3 models. They're very capable and run in less than 32GB of RAM.

I really hope OpenAI put one out in that weight class, personally.
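
The rough memory math behind that 32GB figure, as a sketch (weights only; KV cache and runtime overhead come on top and vary with context length):

    # Rough weight-memory estimate: parameters x bits per parameter.
    # KV cache and runtime overhead are ignored here.
    def weight_gb(params_billion: float, bits: int) -> float:
        return params_billion * 1e9 * bits / 8 / 1e9

    for bits in (16, 8, 4):
        print(f"20B @ {bits}-bit ~ {weight_gb(20, bits):.0f} GB")
    # 40 GB, 20 GB, 10 GB -- so a quantized ~20B model fits in 32GB of RAM,
    # while a 1T-parameter model like K2 needs hundreds of GB even at 4-bit.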

NitpickLawyer · 4h ago
Early rumours (from a hosting company that apparently got early access) were that you'd need "multiple H100s to run it", so I doubt it's a Gemma or Mistral Small tier model.
etaioinshrdlu · 8h ago
It's worth remembering that the safety constraints can be successfully removed, as demonstrated by uncensored fine-tunes of Llama.
buyucu · 3h ago
Probably ClosedAI's model was not as good as some of the models being released now. They are delaying it to do some last minute benchmark hacking.
dorkdork · 9h ago
Maybe they’re making last minute changes to compete with Grok 4?
stonogo · 9h ago
we'll never hear about this again