Why aren't more people here worried about AIs exceeding our capabilities?

hollerith · 5/31/2025, 4:14:26 PM
I'm one of those people who keep saying that no one knows how to control an AI that is much more all-around capable than (organized groups of) people are, and that we should stop AI research until this is figured out. (People can keep using the models that have already been released or extensively deployed.)

But even if you don't believe me that no one knows how to control a super-capable AI, why is no one worried about some nation or disaffected group intentionally creating an AI to kill us all as some kind of doomsday weapon? Every year the craft of creating powerful AIs becomes better understood, and researchers (recklessly IMHO) publish this better understanding for anyone to see. We don't know whether all the knowledge needed to create an AI more capable than people will be published this year or 25 years from now, but as soon as it happens, any actor on earth capable of reading and understanding machine-learning papers and in possession of the necessary GPUs and electricity-generating capacity can destroy the world or at least destroy the human species. Why are so many of you so complacent about that risk?

In the news recently was a young man who killed some people at a fertility clinic. He was a "promortalist": someone who believes that there is so much suffering in the world that the only moral response is to help all the people die (so they cannot suffer any more). Eventually, the craft of machine learning will become so well understood and access to compute resources so widespread and affordable that anyone (e.g., some troubled soul living in some damp basement somewhere who happens to inherit $66 million from some eccentric uncle or happens to win a big personal-injury lawsuit against some rich corporation) will have the means to end the human experiment.

He will not have to figure out how to stay in control of the AI he unleashes. Any AI (just like any human being) will have some system of preferences: there will be some ways the future might unfold that the AI will prefer to other ways. And if you put enough optimization pressure behind almost any system of preferences, what happens strongly tends to be incompatible with continued human survival unless the AI has been correctly programmed to care whether the humans survive. Our troubled soul bent on ending the human experiment can simply rely on this thorny property shared by all really powerful optimizing processes.

In summary, even if you don't believe me that no one knows (and no one is likely to find out in time if AI research is not stopped) how to create an AI that will keep on caring what happens to the people, aren't you worried about a human actor who need not bother to make sure that the AI will care what happens to the people because this actor is troubled and wants all the people to die?

I mean, yes, some of you genuinely disbelieve that AI can or will get good enough to be able to wrest control over the future out of the hands of humankind. But many of you consider it likely that AI technology will continue to improve (or else people wouldn't've invested so much in AI and wouldn't've driven the market cap of Nvidia to 3 trillion dollars). Why so little worry?

Comments (5)

pvg · 1d ago
You're better off not loading the question like "Do you simply consider it someone else's job to worry about risks like that?". Who would want to talk to you when it sounds like you're not asking but looking to berate?
hollerith · 1d ago
I removed that sentence (from the end of my post). Thanks for the feedback. I'll try to calm myself down now.
bigyabai · 1d ago
Your question still implies a hysterical interpretation of a nonexistent feature set. I think you will struggle to foster a serious discussion without actually describing what you're worried about. "AI kills people" is not any more of a serious concern than household furniture becoming sentient and resolving to form an army that challenges humankind.

You have to describe what the actual threat is for us to treat it as an imperative issue. 99% of the time, these hypotheticals end with human error, not rogue AI.

philipkglass · 1d ago
Robotics progress is a lot slower than progress in disembodied AI, and disembodied AI trying to kill humanity is like naked John von Neumann trying to kill a tiger in an arena. IMO we need to figure out AI safety before physically embodied AI (smart robots) becomes routine, but to me safety in that context looks more like traditional safety-critical and security-critical software development.

I'm aware of the argument that smart enough AI can rapidly bootstrap itself to catastrophically affect the material world:

https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-trans...

"It gets an immense technological advantage. If it's smart, it doesn't announce itself. It doesn't tell you that there's a fight going on. It emails out some instructions to one of those labs that'll synthesize DNA and synthesize proteins from the DNA and get some proteins mailed to a hapless human somewhere who gets paid a bunch of money to mix together some stuff they got in the mail in a file. Like, smart people will not do this for any sum of money. Many people are not smart. Builds the ribosome, but the ribosome that builds things out of covalently bonded diamondoid instead of proteins folding up and held together by Van der Waals forces, builds tiny diamondoid bacteria. The diamondoid bacteria replicate using atmospheric carbon, hydrogen, oxygen, nitrogen, and sunlight. And a couple of days later, everybody on earth falls over dead in the same second."

As someone with a strong background in chemistry, I find this just makes me skeptical of Yudkowsky's groundedness as a prognosticator. Biological life is not compatible with known synthesis conditions for diamond, and even superintelligence may not discover workarounds. I am even more skeptical that AI can make such advances and turn them into working devices purely by pondering/simulation, i.e. without iterative laboratory experiments.

bigyabai · 1d ago
1. If AI is latently capable of killing people using just computer power, then it was going to happen regardless. If the AI requires assistance from human actors, then it's basically indistinguishable from human actors acting alone without AI. If you are a human that puts AI in charge of a human life, you are liable for criminal negligence.

2. You cannot stop AI research because of a bunch of unknowns. People will not be afraid of an immaterial threat that has no plausible way to threaten people besides generating text. Even if that text has access to the internet, the worst that can happen has probably already been explored by human actors. No AI was ever needed to carry out catastrophes like Stuxnet, the sarin gas attacks, or 9/11.

3. Some people (like myself) have been following this space since Google published BERT. In that time, I have watched LLMs go from "absolutely dogshit text generator" to "slightly less dogshit text generator". It sounds to me like you've drunk Sam Altman's Kool-Aid without realizing that Sam is bullshitting too.