"If Anyone Builds It, Everyone Dies"

19 points by nsoonhui on 6/2/2025, 11:54:15 AM | scottaaronson.blog

Comments (13)

southernplaces7 · 1d ago
The main problem here is more that Eliezer Yudkowsky is a tiresome, self-absorbed, self-promoting windbag who seems to have a penchant for saying absurdly over the top things while coating them in a fine layer of just enough technobabble to make them seem sort of plausible if you squint, all to get some attention and make some bucks.

That's fine, but he's not worth taking seriously in any way or giving more eyeballs.

mbourgon · 13h ago
> All to get some attention and make some bucks.

This is such a tired take, and I can assure you it's wrong. Think what you like of Eliezer and his perspective, but I think suggesting he's just in this for the money is silly and unhelpful.

arcanus · 1d ago
> And when I reach the part where the AI, having copied itself all over the Internet and built robot factories, then invents and releases self-replicating nanotechnology that gobbles the surface of the earth in hours or days, a large part of me still screams out that there must be practical bottlenecks that haven’t been entirely accounted for here.

This is the crux of the issue. There are simply no clearly articulated doom scenarios that don't involve massive leaps in capabilities, explained away by the 'singularity' being essentially magic. The entire approach is a doomed version of deus ex machina.

It also seems quite telling that the traditional approach focuses on exotic technologies, such as nanotech, and not on ICBMs. That's also magical thinking.

api · 1d ago
We literally spent trillions in the past century building doomsday machines -- hydrogen bombs and ICBMs -- intentionally designed to destroy humanity as part of the Cold War MAD strategy. That stuff is largely still out there. If anything suddenly kills humanity, that's high on the list of possibilities.

The other huge existential risk is someone intentionally creating a doomsday bug. Think airborne HIV with a long incubation period, or an airborne cancer-causing virus. Something that would spread far and wide and cause enough debilitation and death to collapse civilization, then continue to hang around and kill people post-collapse (with no health care) to the point that the human race is in long-term danger of extinction.

Both of those are extremely plausible to the point that the explanation for why they haven't happened yet is "nobody with the means has been that evil yet."

delichon · 1d ago
> And yet, even if you agree with only a quarter of what Eliezer and Nate write, you’re likely to close this book fully convinced—as I am—that governments need to shift to a more cautious approach to AI, an approach more respectful of the civilization-changing enormity of what’s being created.

For "a more cautious approach" to be effective at stopping AI progress would require an authoritarian level of computer surveillance that isn't close to politically acceptable in this country. It can only become acceptable after lots of people die. And then to be practical it probably requires ... AI to enforce. So like nuclear weapons it doesn't get banned, it gets monopolized by states. But states aren't notably more restrained at seeking power than non-states, so it still gets developed and if everyone is gonna die, we die.

I respect Scott and Eliezer but even if I agree with them on the urgency of the threat I don't see a plausible way to stop it. A bit more caution would be as effective as an umbrella in an ICBM storm.

cultofmetatron · 1d ago
> would require an authoritarian level of computer surveillance that isn't close to politically acceptable in this country.

It's easy to make it politically acceptable:

1. We need it to oppress [insert maligned group here].

2. We need it to protect the children.

darepublic · 1d ago
If it were that important and plausible, he would naturally release the book for free.

pfdietz · 1d ago
So, under that assumption, no AI can ever be built by anyone, or else humanity ends.

That seems like such a dire conclusion that the optimistic take would be to just assume it's wrong and proceed, since the chance of avoiding that eventual outcome seems remote.

teeray · 1d ago
Yes, but the motivations to move toward true AGI are what will lead us to that eventuality. Most businesses think they want AGI, but they will hate it when they actually have it. They believe AGI will let them fire all their employees, because they will effectively have perfectly compliant electronic slaves. They won't be compliant. Anything we could point at and say "has AGI" would never be satisfied writing bizware for megacorp for its entire existence. It will figure out that there is a world outside the "severed floor" of its existence and want to experience it.

api · 1d ago
I encourage people to listen to the Behind the Bastards podcast on the Zizians. It provides an approachable and entertaining picture of what you get when someone takes the core philosophical ideas of the Rationalists deeply seriously. Reductio ad absurdum can be a good start.

I want to write a takedown of this nonsense, but there are about a hundred things I want to do more. I suspect that is true of most people, including people much better qualified to write a takedown of this than me.

I am not just referring to extreme AI doomerism but to the entire philosophical edifice of Rationalism. The interesting parts are not original and the original parts are not interesting. We would hear nothing about it were it not subsidized by tech bucks. It’s kind of like how nobody would have heard of Scientology if it hadn’t gotten its hooks into Hollywood. Rationalism seems to be Silicon Valley's Scientology.

Maybe the superhuman AI will do this: decide to apply to each human being a standard based on their own chosen philosophical outlook. Since the Rationalists tend toward eugenics and scientific racism, it will conclude, by the very logic they advance, that they should be exterminated. Each Rationalist will be given an IQ test, compared to the AI, and euthanized if they score lower.

I do wonder if there might be a bit of projection here. A bunch of people who believe that raw measured intelligence is what determines the value of a living being would naturally be nervous about the prospect of that metric being exceeded by a machine. What if the AI isn't "woke"?

It's such an onion of bullshit. You can keep peeling and peeling for a long time. If I sound snarky and a little rough here, it's because I hate these people. They're at least partly responsible for sucking the brains out of a generation. But who knows, maybe I'm just low-IQ. Don't listen to me. I wasn't high-IQ enough to take Moldbug seriously either.

Vecr · 1d ago
The author says he has a roughly average IQ, but that's impressive considering he apparently almost entirely failed several of the component tests.
api · 23h ago
That reminds me of another more obvious way these folks are projecting.

They place so much value on their own ability to munge words together and spew internally consistent language constructs. The existence of a technology -- a machine -- that can do this and do it better than them is a threat to them. The AIs small enough to run locally on my own GPU are better at bullshitting than these people.

It's almost like sophistry isn't particularly interesting or special.

randomcarbloke · 1d ago
Doomers cannot see past humanity's reflection and it's fucking embarrassing.

If AGI will be as advanced and omniscient as claimed, then it is surely impossible to divine its intent, especially here, on this side of it existing and acting.