The AI Doomers Are Getting Doomier

3 points · joegibbs · 8/22/2025, 12:39:46 AM · theatlantic.com

Comments (3)

motorest · 4m ago
From the article:

> The past few years have been terrifying for Soares and Hendrycks, who both lead organizations dedicated to preventing AI from wiping out humanity.

Just to state the obvious: there is a monumental conflict of interest in this sort of organization. Those who present themselves as the solution, even as paladins defending us against a problem, have a vested interest in convincing everyone that the problem is real and enormous, even if it does not exist at all.

joegibbs · 4h ago
I disagree on the AI 2027 thing: I don't think we've seen any evidence of self-improvement, or of the rate of capability increase it suggests. If anything, the rate of improvement has slowed since GPT-3, and without some entirely new architecture I don't see any takeoff scenario. The only way I can imagine AI killing everyone is if somebody decided, for some inscrutable reason, to put an LLM in charge of nuclear missile launches.

The psychosis is worrying, but I think it's an artefact of a new technology that people don't yet have an accurate mental model of (similar to, but worse than, the supernatural powers once attributed to radio, television, etc.). Hopefully AI companies will provide more safeguards against it, but even without them I think people will eventually understand the limitations and realise that the model isn't in love with them, doesn't have a genius new theory of physics, and makes things up.