Google DeepMind's CEO Thinks AI Will Make Humans Less Selfish

pseudolus · 6/5/2025, 11:40:26 AM · wired.com ↗

Comments (5)

andsoitis · 21h ago
> As AI gets more powerful and agentic, can we make sure the guardrails around it are safe and can't be circumvented?

So the safety strategy is to create a cage to contain a super-human intelligence so that it cannot escape? That seems like a poor plan for two reasons:

- surely it would outsmart us?

- if not, at what point is it immoral to cage an intelligence smarter than us?

dc396 · 21h ago
And the Internet treats censorship like damage and routes around it. Until it doesn't. I have faith in humanity's ability to remain as selfish as it has always been -- technology won't change that.
codingdave · 21h ago
Same thing we hear so often: AI leader has a rose-colored view of AI.

I don't know who is right or wrong. But I do see where the incentives fall, and I know human nature. So I find these overly positive predictions untrustworthy when they come from people with a vested interest in AI being a good thing, no matter how smart or credentialed those people are.

incomingpain · 21h ago
The prediction is: AI will greatly boost productivity per hour, which will reduce the number of hours needed to produce the same value.

That would enable people to choose to work fewer hours or earn more money, which is known to reduce selfishness. After all, we're just talking about what changed during the industrial revolution.

But the big caveat is that over the last couple of decades we increased productivity with computers without any significant change in hours worked. My anecdotal experience is that we have a ton of people pretending to work and very few actually working; or, probably the most common case, doing a little work here and there.

We need a big leap to really shake things up and make this happen. AI could be it, but I expect it's more likely Boston Dynamics or Optimus robots that will do it.