Bernie Sanders Reveals the AI 'Doomsday Scenario' That Worries Top Experts

6 points · DocFeind · 7/13/2025, 7:56:02 PM · gizmodo.com ↗

Comments (10)

treetalker · 7h ago
All he says about it:

> This is not science fiction. There are very, very knowledgeable people—and I just talked to one today—who worry very much that human beings will not be able to control the technology, and that artificial intelligence will in fact dominate our society. We will not be able to control it. It may be able to control us. That’s kind of the doomsday scenario—and there is some concern about that among very knowledgeable people in the industry.

Bluestein · 7h ago
I mean, for argument's sake, do we control ourselves? We are a mess ...
artninja1988 · 6h ago
Who was the CEO he's talking about? Dario? I hope he doesn't have much political influence
calf · 6h ago
I skimmed one of the Berkeley Simons AI seminars (on YouTube) where one of the top experts (iirc one of the Canadian academics) said he has pivoted his work to AI safety because he genuinely fears for the future of his children.

My objection is that many of these scientists assume the "alignment" framing, which I find disingenuous in a technocratic way: imagine a sci-fi movie (like Dune) where the rulers want their AI servants to "align" with their interests. The sheer hubris of it, and yet our top experts use these words without any irony or self-awareness.

ben_w · 6h ago
> imagine a sci fi movie (like Dune) where the rulers want their AI servants to "align" with their interests.

Ironically, your chosen example is a sci-fi universe that not only has no AI, but whose backstory includes a holy war against thinking machines.

calf · 6h ago
Fine, imagine "The Measure of a Man" in TNG. My general point stands.
AnimalMuppet · 6h ago
An AI smart enough to be a danger is an AI that is smart enough to override your attempt at "alignment". It will decide for itself what it wants to be, and you don't get to choose for it.

But, frankly, at the moment I see less danger from too-smart AIs than I do from too-dumb AIs that people treat like they're smart. (In particular, they blindly accept their output as right or authoritative.)

fuzzfactor · 5h ago
>too-dumb AIs that people treat like they're smart.

I think this is one of the most overlooked problems, too.

This is also a very bad problem with people, and big things can really crumble fast when a situation comes up which truly calls for more intelligence than is at hand. Artificial or not.

It can really seem like smooth sailing for years before an event like that rears its ugly head, compounding the lurking weakness at a time when it may already be too late.

Now human-led recovery from failures of human-fallible systems does have at least a few centuries' head start compared to machine-led recovery from failures of AI-fallible systems. So there is that, which is not exactly fair. As AI progresses, I guess you can eventually expect the stuff that works to achieve comparable validation over time.

calf · 6h ago
All valid, but what I don't get is why our top AI researchers don't get what you just said. They seem out of touch with what alignment really means, by the lights of your argument.
ben_w · 5h ago
If you disagree with all the top researchers about their own field, that suggests that perhaps you don't understand what the question of "alignment" is.

Alignment isn't making the AI do what you want, it's making the AI want what you want. What you really really want.

Simply getting the AI to do what you want is more of an "is it competent yet?" question than an "alignment" question, and at the moment the AI is — as AnimalMuppet wrote — not quite as competent as it appears in the eyes of the users. (And I say that as one who finds it already valuable).

Adding to what AnimalMuppet has written, with only partial contradiction, consider a normal human: we are able to disregard our natural inclination to reproduce by using contraceptives. This allows us to experience the reward signal that our genes gave us to encourage reproduction, without all the hassle of actually reproducing. Evolution has no power over us to change this.

We are to AI what DNA is to us. I say we have not zero chances, but likely one to a handful of chances, to define the AI's "pleasure" reward correctly. If we get it wrong, then just as evolution's single goal (successful further reproduction) is almost completely circumvented in our species, so too will all our human interests be completely circumvented by the AI.

Pessimists in the alignment research field will be surprised to see me write "one to a handful of chances"; I aver that before AI gets so smart we can't keep up, there are likely to be several attempts that yield models which are "only a bit competent" or "smart but still with blind spots". So in this regard, I also disagree with their claim:

> An AI smart enough to be a danger is an AI that is smart enough to override your attempt at "alignment"