Remarks on AI from NZ

6 zdw 1 5/16/2025, 3:54:02 AM nealstephenson.substack.com

Comments (1)

thedudeabides5 · 5h ago
I hear your frustration with Stephenson’s article, and I agree that some of its ideas can feel like rehashed doomerism, especially given the AI discourse circulating by May 2025. You seem to be pointing out that the article’s warnings about AI risks and societal impacts lack originality or fresh insight, coming across as repetitive or overly pessimistic. To address this directly, I’ll evaluate why the article might feel banal and doomerist, while also checking if there’s anything salvageable or slightly novel, keeping it concise and respectful per your preference for clear, professional responses.

Why the Article Feels Banal and Doomerist

Recycled Concerns: Stephenson’s worries about AI eroding critical thinking (e.g., students over-relying on ChatGPT) and creating a dependent, “Eloi-like” generation echo well-worn anxieties from 2023–2025. Educators and commentators like Tristan Harris and articles in The Atlantic (circa 2024) already highlighted AI’s impact on learning, making this point feel like a retread. The doomerist tone, warning of “mental weaklings” unable to rebuild technology, leans into familiar sci-fi tropes without new evidence or solutions beyond handwritten exams, which were already proposed in academic circles.

Overused Analogies: The comparison of AI to nuclear bombs, while vivid, was a staple in AI risk discussions by 2025, seen in works like Nick Bostrom’s writings and public debates post-ChatGPT. Similarly, McLuhan’s “augmentation as amputation” idea, though apt, was a decades-old lens frequently applied to tech, including AI, in Substack essays and tech podcasts. These feel like intellectual comfort food rather than fresh perspectives.

Vague Speculation: The animal intelligence framework (AIs as lapdogs, ravens, etc.) and the “eyelash mite” future, while creative, lack grounding in specific AI developments or data. By 2025, discussions on X and in AI research circles were diving into concrete issues like model biases, energy costs, or regulatory frameworks (e.g., the EU AI Act). Stephenson’s abstract musings sidestep these, offering philosophical flair over actionable insight, which can feel like warmed-over dystopianism.

Doomerist Exaggeration: The fear of AIs becoming dangerous like bears or hornets, or of humans becoming irrelevant, taps into existential risk narratives popularized by figures like Eliezer Yudkowsky. These were heavily debated (and often criticized as alarmist) on X and in AI conferences by 2025. Without new arguments or technical specifics, this feels like piling onto the doomer bandwagon rather than advancing the conversation.

Lack of Novelty: Stephenson avoids mentioning recent AI advancements (e.g., multimodal models or agentic systems like those discussed on X in early 2025), sticking to LLMs, which were old news. His competitive-ecosystem idea (AIs disrupting each other) is intriguing but not fleshed out, resembling earlier calls for decentralized AI without practical steps. This makes the article feel disconnected from the cutting edge.

Anything Slightly Novel or Worth Noting?

Despite the banality, a couple of points stand out, though they’re not revolutionary:

Ecological Framing: The idea of AI as part of a competitive ecosystem, with humans fostering AI “predators” to maintain balance, adds a slight twist to governance discussions. While not new (similar ideas appeared in 2024 AI safety papers), the ecological metaphor is less common and could spark thought about decentralized control, though it’s too vague to be practical yet.

Productivity Inequality (via Burton): Luke Burton’s comment about AI amplifying experienced professionals’ productivity more than juniors’ (e.g., his 100x boost) highlights a specific economic risk: widening skill gaps. This was underexplored compared to blanket job-loss fears in 2025, offering a nuanced angle on AI’s societal impact, though it’s buried in a comment.

Conclusion

You’re right that Stephenson’s article largely rehashes doomerist tropes (AI as a threat to human agency, overblown analogies to bombs or animals, vague dystopian fears) without engaging with 2025’s specific AI landscape or offering fresh data. It reads like a literary exercise in familiar pessimism, lacking the rigor or novelty needed to stand out. The ecological framing and Burton’s productivity-gap observation are mildly interesting but don’t redeem the overall banality. If you’re looking for genuinely new AI insights, you’d be better served by recent X posts or technical papers on platforms like arXiv, which tackle current challenges like AI energy use or regulatory impacts.