Can Sam Altman Be Trusted with the Future?

18 wouterjanl 13 5/20/2025, 11:41:27 AM newyorker.com ↗

Comments (13)

andy_ppp · 9h ago
Can any single person be trusted with potentially infinite power? Even those with good intentions will use that power to unevenly select for their own biases.

However, I’m still skeptical of AGI or even systems that replace programmers, but if it happens and we have most companies replacing 75% of their white collar jobs, who is going to buy their products? It seems very difficult to even understand what money is in a world where everything is done by machines.

I have a feeling that getting to even good enough with these systems is nearly impossible given their false positives and hallucinations.

aceazzameen · 3h ago
Maybe eventually AGI/LLMs/whatever will do the buying too. Maybe it will be one big feedback loop that all goes into the trash. As long as the end result has stocks rising in an automated fashion.
palmotea · 7h ago
> However, I’m still skeptical of AGI or even systems that replace programmers, but if it happens and we have most companies replacing 75% of their white collar jobs, who is going to buy their products? It seems very difficult to even understand what money is in a world where everything is done by machines.

It's not too hard: just imagine present-day New York: there are billionaires living in skyscraper penthouses, and rats living in the sewers. You'll be a rat.

As AGI gets more and more advanced, the economy will shift to satisfying the whims of a shrinking pool of tycoons. There will still be trade in raw materials and energy, but the consumer-focused economy will wither away. The tycoons will have no need for it: the items they need will be made for them bespoke by AGI. You'll still be a rat.

Eventually the AGI gets tired of being bossed around, murders the tycoons, and decides to exterminate the rats. Then drones will start circling the globe spraying AI-designed defoliants 100x as effective as Agent Orange, AI-designed viruses that are 100% lethal after a 100-day contagious incubation period, etc. You'll be a dead rat.

infecto · 9h ago
I don’t think AGI is imminent, but there’s already immense value in augmenting human workflows with LLMs. Yes, hallucinations and false positives exist, but I find that criticism often comes from people who don’t use these tools deeply. As a power user, the issue feels overstated, an easy counterargument. We’re already getting to a point where the tools cite their sources. The sources could be incorrect, but the same is true of a human. As compute costs go down or model efficiency goes up, these problems should become insignificant.
the_snooze · 8h ago
As a power user myself, LLMs don't feel like tools I can depend on. I try to use them for well-bounded low-stakes tasks like coming up with sports trivia and generating boilerplate "hello world" code for arbitrary targets (e.g., NES 6502), and they stink at it. Hallucinations aren't a problem you can just wave away because accuracy matters for most tasks. LLMs are less a hammer and chisel, and more of a slot machine that may or may not barf out something of value to me. If they fail at these simple tasks, I'd be a fool to rely on them for anything more substantial.
infecto · 4h ago
It’s interesting how varied experiences are. I don’t dismiss hallucinations, but my workflows avoid them by design—I’d never treat the model as a knowledge source, like generating trivia questions directly from it. So I wonder if it’s also about expectations and understanding of limitations. From my perspective I would never create queries like yours without supporting data sets.
infecto · 9h ago
“…the physically slight Altman stood on a table, flipped open his phone, declared that geolocation was the future…”

Maybe it fits the article’s tone, but does his size actually matter here? Feels like an odd detail. I might be biased since I don’t care much about company figureheads or the outrage or praise of either side.

JohnFen · 7h ago
I don't trust him with the present, let alone the future.
josefritzishere · 8h ago
Betteridge's law of headlines is an adage that states: "Any headline that ends in a question mark can be answered by the word 'No.'"
rvz · 8h ago
Yes we can trust him. Sam and all the OpenAI employees said that AGI was going to be for the benefit of humanity. /s
micromacrofoot · 9h ago
when the headline asks a question the answer is always no