Center for AI Safety's new spokesperson suggests "burning down labs"

14 MrBuddyCasino 25 5/20/2025, 7:52:31 AM twitter.com ↗

Comments (25)

brap · 12h ago
If you’re wondering how qualified this guy is to be making such outrageous claims about AI, his background is that he’s a... journalist
palmotea · 6h ago
> If you’re wondering how qualified this guy is to be making such outrageous claims about AI, his background is that he’s a... journalist

Who is qualified? The people making the technology whose short-term interest is in advancing it, despite the costs?

echelon · 12h ago
Doesn't mean lay people have the wrong ideas about where this is going.

It seems obvious that we'll keep building this tech up until the very end.

Every AI coding startup is trying to automate the software engineer and reduce the cost of software, yet engineers fling themselves at the problem.

In the GitHub Copilot AMA today they were practically giddy about it. And in OpenAI Codex's Reddit AMA, they stated the mission was to make the delivery of end-to-end well-defined software a fully solved problem within a decade.

"But what about your own jobs?", people ask.

Crickets.

We're going to be building it until we have no more to contribute. Not a value statement, just a statement of what evidently looks to be the case.

Now I don't personally have a problem with software automation - I'd love to automate myself and do more. I might even be one of the people building the thing. But I have a problem with trillion dollar companies unilaterally owning the means of production and seeing the recursive upside behind a mile-tall moat. I'd never be able to work for a company that rivals nation states.

AI will only work for all of us if there is no single, limited set of winners. If there is no moat.

Not sure we'll be so lucky.

consp · 12h ago
> they stated the mission was to make the delivery of end-to-end well-defined software a fully solved problem within a decade

You should look into the "business language as programming" platforms and see how they are doing. They've been trying to eliminate all IT except the bare minimum for years and have gone nowhere. I'm sceptical of all the "AI" companies branding their own product as the panacea. It is all marketing.

echelon · 11h ago
I thought that about Waymo cars a decade ago.
palmotea · 6h ago
> Every AI coding startup is trying to automate the software engineer and reduce the cost of software, yet engineers fling themselves at the problem.

Despite our massively inflated opinions of our intellectual abilities, software engineers actually tend to be pretty stupid and unwise.

flave · 12h ago
I assume this is a strategy (the man is a marketing/pr strategist) borrowed from certain climate change advocacy groups which is basically to take up the most extreme possible position and hope to “shift the Overtone Window”.

Personally I think it's (a) immoral, (b) dangerous, and (c) self-defeating.

Immoral because deceiving is actually quite bad regardless of your justification.

Dangerous because you will attract true believers who are neurotic, mentally ill, or misguided and who genuinely believe what you're only saying as a strategic move. You will create a movement that you don't control and that is just wrong about the world.

Self-defeating partly for the above reasons but mostly because you may discredit your position with the people who understand “just enough” and those people run many major companies, countries etc.

Y_Y · 12h ago
I'm guessing you're referring to the Overton Window https://en.wikipedia.org/wiki/Overton_window but now you've got me wondering if that's just a clever turn of phrase and there's some poetic analogy to higher harmonics and compactly-supported SDE-estimation tools.

https://en.wikipedia.org/wiki/Overtone https://en.wikipedia.org/wiki/Window_function

flave · 12h ago
! Yes I was.

But please also see:

Ovaltime Window: the period where you can drink a hot drink

Oval-Time Window: a real window where you can watch cricket in SW London

Overdone Window: when food is just perfectly overcooked

Y_Y · 12h ago
I like the cut of your jib. May I propose:

Ovaltine Window: You view all politics with distaste because you are drinking a sorry substitute for a cup of tea.

Overtime Window: Rather than continue working extra hours you decide to self-defenestrate

---

Further reading:

Ambrose Bierce - The Devil's Dictionary, https://www.gutenberg.org/files/972/972-h/972-h.htm

Douglas Adams - The Meaning of Liff, https://ia601202.us.archive.org/18/items/calibre_library_76....

KaiserPro · 13h ago
I mean it's not a novel viewpoint.

But I don't understand what he hopes to achieve. It's just going to mean that the development they fear so much will be done elsewhere.

Also, I wonder what they think AI is going to do to us?

It's unlikely that AI itself is going to kill us; it'll be the civil unrest caused by unemployment.

iLoveOncall · 13h ago
The only way AI will kill us is of boredom from hearing all those ridiculous predictions over and over.
impossiblefork · 12h ago
We've only had LLMs for seven years, and it's basically all refinements of the transformer with minor additions.

I think there are paths to substantial architecture improvements-- things resembling edge transformers, things like adding extra tokens the way vision transformers do, etc.

For this reason I think employment doomerism isn't unreasonable. The architectures aren't here yet-- we simply haven't figured out how to substantially improve the transformer, but it's surely possible.

I've recently run experiments myself where I got improved performance on some demanding toy problems. There must be hundreds of people working on this kind of thing, and even if there aren't, the general research is building knowledge that will eventually guide people to design better architectures.

iLoveOncall · 12h ago
I for one cannot wait to see how ChatGPT-17.3 with Prime Edge Transformers Optimum++ will take over my job and come kill me in my house by... generating slightly less inaccurate text?
impossiblefork · 12h ago
So, what if future models could actually program and do mathematics like the best professionals and experts?

A world where you can ask this ChatGPT-17.3 to generate a version control tool with these or that properties, solving some list of problems with previous version control tools, and then it does so over the weekend at the cost of $1000 in GPU rent?

Presumably at that point it will not be sensible to employ very many programmers. A couple of experts would determine requirements and feed them into the machines; the required software, mathematics, etc. would be produced, analyzed by a small group of experts, and then used to produce requirements for the next version.

iLoveOncall · 12h ago
So, what if this never happens?
palmotea · 6h ago
>> So, what if future models could actually program and do mathematics like the best professionals and experts?

> So, what if this never happens?

We can hope they fail, but the goal these people are working towards is the replacement of humans for economically valuable intellectual work. We should take those goals seriously.

impossiblefork · 11h ago
Well, then it doesn't, but why would you expect that to be the case?

Why would transformers be so ideal that there is no major improvement left?

iLoveOncall · 10h ago
Why would you expect otherwise? You're the one with an extraordinary claim.

I'm not saying that transformers are ideal or that there's no major improvement left. I'm saying that a major improvement over something that is barely a net positive is nowhere near the extinction level event you're making it seem to be.

lostmsu · 9h ago
It's not as if the evidence for the claim isn't extraordinary. 4GB of weights demonstrates reasoning abilities that were unthinkable 4 years ago.
iLoveOncall · 6h ago
Next token generation is not reasoning.
lostmsu · 6h ago
I would argue that in specific instances SotA LLMs do more reasoning than you are doing here, because you're making a rather simple logical mistake: claiming "T is in NTG => T is not in R" when you have no information about the relationship between the NTG and R sets.

But I wasn't even talking about reasoning. I specifically said "demonstration".

fnands · 13h ago
It turns out anyone can create a non-profit called the "Center for AI Safety" and then post wild things on twitter.

Is this news?

actionfromafar · 12h ago
No.

(Caveat: if it gets enough traction, it is news. Reference: see the lineup of the current administration.)

hengheng · 12h ago
Ah, so his recommended course of action is to "destroy a pipeline", and he will advocate for doing this and probably write a book called "how to destroy a pipeline" and he will absolutely not destroy a pipeline.

I've seen this before haven't I.