I distinctly remember multiple big companies quietly letting go of their AI ethics teams in 2023 around the same time the LLM craze started to pick up real steam.
I don’t think the skeptics disappeared; they just got drowned out by all the added noise that came with the new LLM hype cycle.
cmdrk · 44m ago
I feel the article draws a false equivalence between skepticism and doomsaying. If anything, thinking AI is as dangerous as a nuclear weapon signals a true believer.
swivelmaster · 3m ago
Exactly. “AI will take over the world because it’s dangerously smart” is the exact opposite of skepticism!
There are different arguments as to why AI is bad, and they’re not all coming from the same people! There’s the resource argument (it’s expensive and bad for the environment), the quality argument (hallucinations, etc.), the ethical argument (stealing copyrighted material), the moral argument (displacing millions of jobs is bad), and probably more I’m forgetting.
Sam Altman talking about the dangers of AI in front of Congress accomplishes two things: It’s great publicity for AI’s capabilities (what CEO doesn’t want to possess the technology that could take over the world?), and it sets the stage for regulatory capture, protecting the big players from upstarts by making it too difficult/expensive to compete.
That’s not skepticism, that’s capitalism.
PaulHoule · 4h ago
Three words: Sam Bankman-Fried.
himeexcelanta · 1h ago
The worst externalities of AI (mass social disinformation/manipulation) were already realized years ago with the Facebook algorithmic feed. Producing content wasn’t the limiting factor; AI-enabled algorithmic targeting to maximize ad revenue without any consideration for negative externalities has already eroded civil society.
croes · 5h ago
>What happened? Did we solve AI’s dangers, get bored of talking about them, or just decide that existential risk was bad for business?
We realized that AI isn't as smart as we feared, and that the real danger lies in management believing the AI companies' ads.