AI is Anti-Human (and assorted qualifications)

31 points by fiatjaf | 8 comments | 6/30/2025, 10:45:57 PM | njump.me ↗

Comments (8)

solid_fuel · 9h ago
I keep finding myself contemplating the proscription of AI in the universe of Dune. While the later prequels by Brian Herbert write a fairly typical backstory of an AI rebellion and war, the original novels hint at something - IMO - far more interesting: AI didn't take control directly; it was used by other people to manipulate and control. People rebelled over the sheer amount of power that AI and computers enabled a few people to wield over society, and over the way it turned human life into that of a cog in a machine, leading regimented and structured lives.

To wit: while ChatGPT and Gemini and most of these current models are fairly well behaved, when you use a model even for something seemingly innocuous like summarizing an article or an email, you are indirectly allowing another person to decide what is important to you. Consider the power that gives other people over you. It pays to put on the devil's cap sometimes and imagine a future where the LLMs that power our tools are controlled by people who don't exercise any restraint.

We have already seen shades of this with the amount of influence Facebook and TikTok and Twitter can wield over political discourse, picking which issues are winners and which are losers by choosing (even indirectly, by simply reacting to engagement metrics) what to emphasize and what to suppress. LLMs unlock another level entirely. An LLM can easily summarize an article while conveniently leaving out any negative mentions of certain politicians or parties. They can summarize emails and texts from family and friends while eliding any section asking for help or action.

While most people are somewhat distrustful of obviously biased sources, they don't regard LLMs with the same suspicion. An LLM could easily write a summary of Alan Turing's life while dropping all the bits about his sexuality and the way he was persecuted and chemically castrated by the British government simply for trying to love.

I am not alleging that any of these things have happened, yet. But it is best to think of LLMs and generative AI in general as tools that work _for someone else_. They can be very useful tools, but they can also be subverted and manipulated in subtle ways, and should not automatically be regarded as unbiased.

AlexeyBrin · 10h ago

     AI work is the same kind of thing as an AI girlfriend, because work is not only for the creation of value (although that's an essential part of it), but also for the exercise of human agency in the world. In other words, tools must be tools, not masters.
Whether or not you agree with the author, the article is worth your time.
seabombs · 9h ago
Great article.

> Don't underestimate the value to your soul of good work.

This in particular resonated with me.

My concern is not really that AI will take over my job and replace me (if it is genuinely better at my job than I am, I think I would quite happily give it up and find something else to do). My concern is that AI will partially take over my job, doing the parts that I enjoy (creativity, thinking, learning) and leaving the mundane aspects to me.

robbiewxyz · 10h ago
Ultimately the attitude of self-restraint called for in the article seems near-impossible: modern capitalism puts companies in a desperate race for dominance, and modern foreign policy puts countries in the same. From the penultimate paragraph:

"I think in all of this is implicit the idea of technological determinism, that productivity is power, and if you don't adapt you die. I reject this as an artifact of darwinism and materialism. The world is far more complex and full of grace than we think."

This argument is the one that either makes or breaks the article's feasibility, and I fear the author is too optimistic.

What force of nature is it that can possibly hold its own against darwinism?

tines · 11h ago
Brilliantly written. Another of my favorite works on these ideas is Technopoly by Neil Postman. Absolutely a must-read.
somewhereoutth · 10h ago
> not doing [AI], either on an individual or a collective level, is just not an option.

I would have liked a deeper exploration of this point, since not doing it does indeed fix all the issues raised.

delichon · 10h ago
It's already too entrenched to revert without a broad anti-AI consensus that doesn't exist. I can imagine the body politic suddenly seeing the light, but that's fantasy. You could as easily have rolled back smartphones in 2009 based on vague warnings of future social damage.
somewhereoutth · 10h ago
I don't see that - I do see failed projects, negative returns, and general disillusionment.

The only thing propping it up is the sunk-cost fallacy, now on a biblical scale, requiring ever more hype to cover the rising disappointment (ChatGPT 5 when??). The blowback from the inevitable collapse will be of similarly biblical proportions; a wise strategy would be to avoid it all (as an individual and/or collectively at some level).

In fairness, there are (or will be) some areas where LLMs have business benefit, but those areas will likely be limited, and the 'AI stuff' will be hidden from the user.