The AI Safety Problem Is Wanting

5 points by gregorymichael | 6/26/2025, 4:07:38 PM | dynomight.net

Comments (1)

bigyabai · 7h ago
> I advise against giving AI access to nuclear weapons. Still, if an AI is vastly smarter than us and wants to hurt us, we have to assume it will be able to jailbreak any restrictions we place on it.

Why does every "AI safety" author make this same mistake? The concern isn't that AI will get the nuclear launch codes; if you even think that's possible, you're demonstrating a misunderstanding of both the nuclear launch process and LLMs.

The concern is that "alignment" won't be enough. Making an AI "want" to save a human life won't make autonomous vehicles stop crashing. It won't stop robotic surgeons from killing their patients. And this makes sense: all of these situations have physical constraints that can't be magically overcome with AI. No amount of AGI will ever obviate these systemic limitations, which is why every single one of those "consider the 3D-printed drone swarm" essays in 2025 reads like a joke. There are real, present-day concerns about AI implementation and safe LLM usage, and all of them are being ignored so we can aggrandize the problem like it's a Neal Stephenson novel or a Hideo Kojima script.