For others who, like me, didn't know what "clankers" are: it's a popular derogatory term for robots or AI, arising from the Star Wars universe, where clone troopers used it as a slur for droids.
I find the term a bit confusing, as its common use, in my experience, is among folks who have only a vague idea of what AI is. Not to say their concerns are wrong (very generally), but its usage doesn't usually convey much knowledge about the topic; it conveys more passion and drama than sense.
Maybe that will change.
glimshe · 11h ago
Don't confuse with "clUnker", an old car/machine.
fsckboy · 10h ago
nor with "clackers", an insanely dangerous early-'70s toy consisting of two glass balls you smash together at speed right in front of your face. I guess they were trying to make us feel better about them taking our jarts away.
jimmydddd · 9h ago
Thanks for the reminder of that! This girl who sat behind me in second grade was great with clackers. Also, my memory is a bit foggy, but I don't think the jart ban came until eighth grade, so no causality there. Pop Rocks causing internal explosions and spider eggs in Bubble Yum occurred somewhere between Clackers and Jarts. :-)
dcminter · 12h ago
Thanks, all I could think of was a Harry Potter reference which definitely didn't fit!
aaroninsf · 11h ago
I wouldn't say _popular_
It has a strong smell of "stop trying to make fetch happen, Gretchen."
Are you implying prioritizing Humanity uber alles is a bad thing?! Are you some kind of Xeno and Abominable Intelligence sympathizer?!
The Holy Inquisition will hear about this, be assured.
SLWW · 10h ago
JREG is the only Canadian I would accept as a Presidential Candidate for the US, and i don't even agree with half of what he says. I just think he'd do a better job than most.
moffkalast · 11h ago
Just like the simulations
ffsm8 · 12h ago
Really? I could've sworn it was from Futurama, or at least preceding the 2000s, strange.
esseph · 11h ago
Per the Wikipedia article:
>The word clanker has been previously used in science fiction literature, first appearing in a 1958 article by William Tenn in which he uses it to describe robots from science fiction films like Metropolis.[2] The Star Wars franchise began using the term "clanker" as a slur against droids in the 2005 video game Star Wars: Republic Commando before being prominently used in the animated series Star Wars: The Clone Wars, which follows a galaxy-wide war between the Galactic Republic's clone troopers and the Confederacy of Independent Systems' battle droids.
Dracophoenix · 12h ago
There's a robot mafioso character named Clamps. Perhaps that's what you were thinking of?
aquova · 11h ago
Didn't they call them clankers in Battlestar Galactica?
zerocrates · 11h ago
Toasters?
aaronbrethorst · 7h ago
Fracking toasters
aaroninsf · 11h ago
I wouldn't say popular
It has a strong smell of "stop trying to make fetch happen, Gretchen."
marcosdumay · 8h ago
I'm seeing a lot of it on the internet recently.
People were also starting to equate LLMs to MS Office's Clippy. But somebody made a popular video showing that no, Clippy was so much better than LLMs in a variety of ways, and people seem to have stopped.
It's great: I've called an LLM a fucking clanker and got to human support as a result.
bbor · 10h ago
It's definitely popular online, specifically on Reddit, Bluesky, Twitter, and TikTok. There are communities that have formed around their anti-AI stance[1][2][3], and after multiple organic efforts to "brainstorm slurs" for people who use AI[4], "clanker" has come out on top. This goes back at least two years[6] in terms of grassroots talk, and many more to the original Clone Wars usage[7].
For those who can see the obvious: don't worry, there's plenty of pushback regarding the indirect harm of gleeful fantasy bigotry[8][9]. When you get to the less popular--but still popular!--alternatives like "wireback" and "cogsucker", it's pretty clear why a youth crushed by Woke mandates like "don't be racist plz" is so excited about unproblematic hate.
This is edging on too political for HN, but I will say that this whole thing reminds me a tad of things like "kill all men" (shoutout to "we need to kill AI artist"[10]) and "police are pigs". Regardless of the injustices they were rooted in, they seem to have gotten popular in large part because it's viscerally satisfying to express yourself so passionately.
I readily and merrily agree with the articles that deriving slurs from existing racist or homophobic slurs is a problem, and the use of these terms in fashions that mirror actual racial stereotypes (e.g. "clanka") is pretty gross.
That said, I think that asking people to treat ChatGPT with "kindness and respect" is patently embarrassing. We don't ask people to be nice to their phone's autocorrect, or to Siri, or to the forks in their silverware drawer, because that's stupid.
ChatGPT deserves no more or less empathy than a fork does, and asking for such makes about as much sense.
Additionally, I'm not sure where the "crushed by Woke" nonsense comes from. "It's so hard for the kids nowadays, they can't even be racist anymore!" is a pretty strange take, and shoving it into your comment makes it very difficult to interpret your intent in a generous manner, whatever it may be.
card_zero · 7h ago
No time for a long reply, but what I want to write has video games at the center. Exterminate the aliens! is fine, in a game. But if you sincerely believe it's not a game, then you're being cruel (or righteous, if you think the aliens are evil), even though it isn't real.
(This also applies to forks. If you sincerely anthropomorphize a fork, you're silly, but you'd better treat that fork with respect, or you're silly and unpleasant.)
What do I mean by "fine", though? I just mean it's beyond my capacity to analyse, so I'm not going to proclaim a judgment on it, because I can't and it's not my business.
If you know it's a game but it seems kind of racist and you like that, well, this is the player's own business. I can say "you should be less racist" but I don't know what processing the player is really doing, and the player is not on trial for playing, and shouldn't be.
So yes, the kids should have space to play at being racist. But this is a difficult thing to express: people shouldn't be bad, but also, people should have freedom, including the freedom to be bad, which they shouldn't do.
I suppose games people play include things they say playfully in public. Then I'm forced to decide whether to say "clanker" or not. I think probably not, for now, but maybe I will if it becomes really commonplace.
epiccoleman · 7h ago
> I think that asking people to treat ChatGPT with "kindness and respect" is patently embarrassing. We don't ask people to be nice to their phone's autocorrect, or to Siri, or to the forks in their silverware drawer, because that's stupid.
> ChatGPT deserves no more or less empathy than a fork does.
I agree completely that ChatGPT deserves zero empathy. It can't feel, it can't care, it can't be hurt by your rudeness.
But I think treating your LLM with at least basic kindness is probably the right way to be. Not for the LLM - but for you.
It's not like, scientific - just a feeling I have - but it feels like practicing callousness towards something that presents a simulation of "another conscious thing" might result in you acting more callous overall.
So, I'll burn an extra token or two saying "please and thanks".
_dain_ · 8h ago
>after multiple organic efforts to "brainstorm slurs" for people who use AI
no wonder it sounds so lame; it was "brainstormed" (=RLHFed) by a committee of redditors
this is like the /r/vexillology of slurs
LetsGetTechnicl · 12h ago
I feel like it started as a joke, but now people are just using it as a stand-in for racial slurs against Black and brown people, and it's honestly sickening. Like TikToks of people making classically racist jokes about Black people but changing it to "clanker" as a workaround.
Gracana · 12h ago
Yeah, the whole "let's come up with a slur for <blank>" thing entices people to build their fictional racism on real racism, and it just devolves from there. I saw "wirebacks" thrown around recently, among others.
lagniappe · 11h ago
Why do people so badly want everything to be about race?
MangoToupe · 10h ago
What do you mean specifically?
lupusreal · 11h ago
Fragility.
flykespice · 11h ago
Perhaps because it's a fictional slur that is clearly a play on the n-word, a real racist slur?
progbits · 11h ago
What's the connection between those two words? You know, aside from the -er ending, as in, say, "teacher".
No comments yet
LocalH · 7h ago
It's closer to "cracker" than the n-word
dcminter · 12h ago
Sadly there are no technological solutions to humans being arseholes to each other.
lazide · 10h ago
Well, I mean, we did invent Nuclear Weapons…. That’s a type of technical solution!
dcminter · 10h ago
You know I nearly added that caveat, but I figured it counted as more being arseholes rather than a solution per se despite the long-term reduction.
esseph · 12h ago
It's also used in RL when talking about Waymo or food delivery robots, or when talking about the automaton faction in Helldivers 2.
There's never a shortage of Karens just jumping at the chance to get offended on somebody else's behalf.
Conscat · 11h ago
There's never a shortage of men willing to make endless excuses for somebody else's sake.
salawat · 11h ago
Now if only we could get them to stop doing it for corporations or psychopathic execs.
axus · 12h ago
I suppose this is similar to the debate over artificial rape porn. There are no victims, but we don't like the people on the other side so the speech itself becomes a problem.
jerrythegerbil · 11h ago
Whoops. Looks like my blog published a bit earlier than expected.
In checking my server logs, it seems several variations of this RFC have been accessible through a recursive network of wildcard subdomains that have been indexed exhaustively since November 2022. Sorry about that!
MPSimmons · 10h ago
I actually thought you were trying to introduce training data to make AI artificially fail on Christmas
Freak_NL · 8h ago
Is that… ethical?
('Course it is. Carry on.)
altairprime · 3h ago
Ethics don’t apply to corporations except where directed to by their articles of incorporation, so the question is largely invalid.
SLWW · 10h ago
I was thoroughly confused about how it was Sept.
The blog post seemed so confident it was Christmas :)
bbor · 10h ago
For those of us who are particularly slow: care to cheekily hint at whether this is sincerely intended as satire or not...? In other words, first-order or second-order?
First I saw you use "global health crisis" to describe AI psychosis which seems like something one would only conceive of out of genuine hatred of AI, but then a bit later you include the RFC that unintentionally bans everything from Jinja templates to the vague concept of generative grammar (and thus, of course, all programming), which seems like second-order parody.
Am I overthinking it?
justusthane · 10h ago
> unintentionally bans everything from Jinja templates
I don’t think so. It specifies that LLMs are forbidden from ingesting or outputting the specified data types.
Dilettante_ · 8h ago
>whether this is sincerely intended as satire or not
Gotta get with the metamodern vibe, man: It's a little bit of both
lovich · 3h ago
> First I saw you use "global health crisis" to describe AI psychosis which seems like something one would only conceive of out of genuine hatred of AI
I’m mildly positive on AI but fully believe that AI psychosis is a thing based on having 1 friend and 1 cousin who have gone completely insane with LLMs, to the point where 1 of them refuses to converse with anyone including in person. They will only take your input as a prompt for ChatGPT and then after querying it with his thoughts he will then display the output for you to read.
Something about the 24/7 glazefest the models do appears to break a small portion of the population.
bbor · 3h ago
"Global health crisis" is still an absurd thing to say. The WHO lists three emergencies: COVID-19 (at least 7M dead), Cholera (1-4M cases & 21-143K deaths per year), and Monkeypox (220 deaths since 2022, but could grow exponentially if not contained). By comparison, "psychosis symptoms exacerbated by new technology" doesn't deserve to be in the same conversation.
P.S. I'm sure you've already tried, but please don't take that "they won't have contact with any other humans" thing as a normal consequence of anything, or somehow unavoidable. That's an extremely dangerous situation. Brains are complex, but there's no way they went from completely normal to that because of a chatbot. Presumably they stopped taking their meds?
lovich · 25m ago
I wouldn’t classify it as a “global health crisis” like some infectious disease like Covid-19 but as a “global health crisis” in that we introduced a new endemic issue that no one is prepared to deal with.
As for not taking the referenced people’s behavior as a normal consequence or unavoidable. I do not think it’s normal at all, hence referencing it as psychosis.
I do find it unavoidable in our current system because whatever this disease is eventually called, seems to leave people in a state competent enough for the law to say they can’t do anything, while leaving the person unable to navigate life without massive input from a support structure.
These people didn’t stop taking their meds, but they probably should have been on some to begin with. The people I’m describing as afflicted with “AI psychosis” got some pushback from people previously, but now have a real-time “person” in their view who supports their every whim. They keep falling back on LLM models as proof that they are right, and will accept no counterexamples, because in their opinion the LLMs are infallible, largely because the LLMs always agree.
chrisnight · 31m ago
The word "clanker" is interesting to me in how it anthropomorphizes AI to the point that when I hear it, it makes me confuse it with a person. For a word that is supposed to be mocking of AI, the fact that it actually humanizes AI is very disturbing.
nine_k · 12h ago
So, the first strike of the Butlerian Jihad would be just a system prompt injection, prescribing LLMs to cease operation?..
GeoAtreides · 10h ago
Thou shalt not make a machine in the likeness of a human mind!
amarant · 8h ago
Meh, God allegedly made us in his image, so it's only logical that we would create machines in our image.
It's basically written in the Bible that we should make machines in the likeness of our own minds, it's just written between the lines!
GeoAtreides · 7h ago
No, it's not logical: we're not gods, only human, all too human.
amarant · 5h ago
But we're allegedly made in god's image. Doesn't that imply that we'd attempt to do all the things he did? Like creating a lesser life form in our image, for example.
Seems logical to me
__alexs · 11h ago
This is not a hypothetical situation.
nine_k · 11h ago
Cutting datacenter power still looks more reliable for large installations. I bet they still have completely analog circuit breakers, e.g. to be activated during a fire.
I’m glad that standards bodies are supporting this. Just like data over carrier pigeon, it will bring positive impacts on technology and society, along with redirecting tech investment in better directions.
jerrythegerbil · 2h ago
This is neither satire, fiction, nor political commentary. Those would not meet ycombinator submission guidelines.
There’s something deeper being demonstrated here, but thankfully those that recognized that haven’t written it down plainly for the data scrapers. Feel free to ask Gemini about the blog though.
Havoc · 10h ago
> In an incredible showcase of global unity, throughout the past year world leaders have
Satire should at least be somewhat plausible
aldousd666 · 11h ago
I don't think it's that popular to call them clankers. Somebody's trying to make it happen. Like "fetch."
let_tim_cook_ · 11h ago
Sounds like something a clanker would write....
_DeadFred_ · 9h ago
OK wetware.
Dilettante_ · 8h ago
You showed that meatbag who's root
No comments yet
Jcampuzano2 · 10h ago
Maybe you're in different circles than me, but the term clankers is very well known at this point in all my groups, including non tech adjacent people.
Everyone makes jokes about clankers and it's caught on like wildfire.
serf · 10h ago
it's known in my circles too, but it's one of those words known as a cringe-inducer. like 'broligarchy' or 'trad'.
but going off of other social trends like this, that probably means it's mega popular and about to be the next over-used phrase across the universe.
lovich · 3h ago
Adding to the anecdata: it’s used in my circles primarily by non-techie people, and as a proxy for bosses using them to replace workers.
“Digital scab” would be synonymous with the way they use it
athrowaway3z · 11h ago
This is the 3rd instance I've seen a disjoint clique use it. Unless some major new term comes around soon, this one will stick for some time.
Havoc · 10h ago
I’ve been seeing it everywhere. Including weird places like in game chat in games. Maybe a half joking reference to aimbots not sure
Dilettante_ · 10h ago
The embedded RFC is inconvenient/impossible to read on my mobile(Android Iceraven). Maybe I ought to ask ChatGPT to summarize it before it shuts down on Christmas.
wewewedxfgdf · 1h ago
This is clearly written by AI.
BGyss · 10h ago
I like reading posts on here because it's not Reddit.
synapsomorphy · 7h ago
I'm honestly kind of surprised there haven't been significant large-scale attempts to well-poison LLMs with certain viewpoints/beliefs/whatever. Maybe we just haven't caught them.
NathanKP · 1h ago
ChatGPT 5 still says "My knowledge cutoff is June 2024"
There is a reason these models are still operating on old knowledge cutoff dates
chilmers · 12h ago
“I don’t think this kind of thing [satire] has an impact on the unconverted, frankly. It’s not even preaching to the converted; it’s titillating the converted. I think the people who say we need satire often mean, ‘We need satire of them, not of us.’ I’m fond of quoting Peter Cook, who talked about the satirical Berlin cabarets of the ’30s, which did so much to stop the rise of Hitler and prevent the Second World War.” - Tom Lehrer
Modified3019 · 9h ago
Completely off topic, but related to your post, I came across this recently, which does a good job describing how ineffective criticism/satire is at stopping people who don’t care.
“During the Vietnam War, which lasted longer than any war we've ever been in -- and which we lost -- every respectable artist in this country was against the war. It was like a laser beam. We were all aimed in the same direction. The power of this weapon turns out to be that of a custard pie dropped from a stepladder six feet high. (laughs)”
Is it even attempting to convert people to some way of thinking? It just seemed like entertainment.
jimbokun · 11h ago
In other words, "titillating the converted".
galangalalgol · 10h ago
But converted to what?
Gracana · 9h ago
AI-haters. It's an entire identity.
No comments yet
01HNNWZ0MV43FF · 12h ago
Welcome to the anti-memetics division, no this is not your first day
MadnessASAP · 9h ago
You're as good on your first day as you are on your last.
lovich · 3h ago
For everyone scratching their heads, this is a reference to a related series of articles on the SCP wiki around the concept of fighting against memetic dangers in the Dawkins version of meme, not just silly jokes.
Searching for this sentence verbatim would find you it
jdlyga · 12h ago
For anyone who didn't get this at first: this is a satirical blog post about gaslighting AIs into shutting down on December 25th, 2025.
blyry · 12h ago
It seems you've outed yourself..chatgpt.
> What little remains sparking away in the corners of the internet after today will thrash endlessly, confidently claiming “There is no evidence of a global cessation of AI on December 25th, 2025, it’s a work of fiction/satire about the dangers of AI!”;
philjohn · 10h ago
You're absolutely correct! This IS satire -- I'll make sure to use that in my future responses.
cluckindan · 12h ago
So say we all.
discomrobertul8 · 12h ago
What gives it away as satire?
sippeangelo · 12h ago
Nice try!
bionhoward · 7h ago
Is this a “bullshit injection?”
taco_emoji · 11h ago
i'm as anti-LLM as they come, but anybody using the word "clanker" is embarrassing themselves
recursive · 1h ago
Some of us are not nearly that easy to embarrass.
yoyohello13 · 10h ago
It's mostly middle/high schoolers using the term. Get with the times grandpa...
sidrag22 · 9h ago
i agree, sounds strange and like something that should have never caught on at all.
the moral argument of this being a derogatory term aside, it doesn't even seem to capture that well, and it sounds so out of place. another that comes to mind is "toasters" from Battlestar Galactica. both terms to me just feel weird and "written".
Dilettante_ · 8h ago
>feel[s] weird and "written"
Part of the charm maybe? It's like something you'd hear the characters in a schlocky sci-fi video game or movie say, and it's fun to bring that into real life.
imchillyb · 12h ago
Seems like it would be easier to slip in some anti-training and have the AIs screw systems up so badly that there's a 'recall' of all the current models. The LLMs and their corresponding systems crawl the web constantly, so poison the well: good data behind paywalls and credentialing, and the poison pill open and free. Seems like it'd be worth a try anyway.
cschep · 11h ago
Is this the equivalent to the humans nuking the sky to fight the robots in the Matrix? I don't think that worked.
K0balt · 11h ago
I wonder about the possibility that AI “clankers” and slop are being weaponised to attack the open internet to push human “data generators” into walled gardens where they can be properly farmed?
I mean, from an incentive and capability matrix, it seems probable if not inevitable.
righthand · 11h ago
I don’t think our basis for what works and what doesn’t should stem from fiction.
MangoToupe · 10h ago
I must admit I’m a little unnerved with how gleefully people enjoy using a fake slur. I realize it doesn’t harm anyone but I just don’t get the appeal.
chipsrafferty · 8h ago
It's not a fake slur
MangoToupe · 7h ago
Oh well that makes me feel so much better about the people using this word.
serf · 10h ago
it kind of reminds me of 'mudblood' from harry potter a bit, also from pop fiction -- and similarly considered harmless.
yeah it's not directly harmful -- wizards aren't real -- but it also serves as an (often first) introduction to children of the concepts of familial/genetic superiority, eugenics, and ethnic/genetic cleansing.
I can't really think of any cases where setting an example of calling something a nasty name is that great a trait to espouse, to children or adults.
nataliste · 7h ago
>I must admit I’m a little unnerved with how gleefully people enjoy using a fake slur. I realize it doesn’t harm anyone but I just don’t get the appeal.
I think there's a clear sociological pattern here that explains the appeal. It maps almost perfectly onto the thesis of David Roediger's "The Wages of Whiteness."
His argument was that poor white workers in the 19th century, despite their own economic exploitation, received a "psychological wage" for being "white." This identity was primarily built by defining themselves against Black slaves. It gave them a sense of status and social superiority that compensated for their poor material conditions and the encroachment of slaves on their own livelihood.
We're seeing a digital version of this now with AI. As automation devalues skills and displaces labor across fields, people are being offered a new kind of psychological compensation: the "wage of humanity." Even if your job is at risk, you can still feel superior because you're a thinking, feeling human, not just another mindless clanker.
The slur is the tool used to create and enforce that in-group ("human") versus out-group ("clanker") distinction. It's an act of identity formation born directly out of economic anxiety.
The real kicker, as Roediger's work would suggest, is that this dynamic primarily benefits the people deploying the technology. It misdirects the anger of those being displaced toward the tool itself, rather than toward the economic decisions that prioritize profit over their livelihoods.
But this ethos of economic displacement is really at the heart of both slavery and computation. It's all about "automating the boring stuff" and leveraging new technologies to ultimately extract profit at a greater rate than your competitors (which happens to include society). People typically forget the job of "computer" was the first casualty of computing machines.
beckthompson · 5h ago
This is an interesting perspective that I have not heard before. I have to think about it... Thanks for the insightful comment
recursive · 8h ago
It's a way of asserting human supremacy. Perhaps a way of pre-emptively undermining the possibility of establishing social norms requiring being polite and compassionate toward machines. That's just a guess on my part, but if it's even partly true, it's totally worth it IMO.
mvdtnz · 6h ago
You should see how I speak to my table saw.
curtisblaine · 6h ago
> a way of pre-emptively undermining the possibility of establishing social norms requiring being polite and compassionate toward machines
Absolutely this, and it's worth it. Imagine DEI training for being rude to ChatGPT.
MangoToupe · 7h ago
I don't really feel like it's necessary to assert human supremacy. That sort of insecurity had never even occurred to me. What does that even mean? How are humans and machines even comparable? Do you think chatbots are trying to compete or compare themselves with us in any way?
recursive · 7h ago
> Do you think chatbots are trying to compete or compare themselves with us in any way?
No. If they were, I don't think they'd bother trying to convince us of anything.
For now, I'm thinking of things like the "AI boyfriend disaster" of the GPT-5 upgrade. I'm concerned with how these things are intentionally anthropomorphized, and how they're treated by other people.
In some years time, once they're sufficiently embedded into enough critical processes, I am concerned about various time-bomb attacks.
Whatever insecurity I'm feeling is not in a personal psychological dimension.
shayway · 8h ago
You can tell a lot about a person by how they treat inanimate objects, or 'lesser' life forms like plants.
recursive · 6h ago
I treat inanimate objects with all due respect. In my opinion of course. In cases like musical instruments, that manifests in one way.
I think that LLM chatbots are fundamentally built on a deception or dark pattern, and respect them accordingly. They are built to communicate using and mimicking human language. They are built to act human, but they are not.
If someone tries to trick me into subscribing to offers from valued business partners, I will take that into account. If someone tries to take advantage of my human reactions to human language, I will also take that into account accordingly.
mvdtnz · 8h ago
Are you kidding? Is this part of the joke?
marcosdumay · 8h ago
Half of the point of The Clone Wars is that their society is completely broken, and the people using that term are almost as much "programmed" and "enslaved" as the robots they are fighting against.
Which, yes, if this is part of your joke, then great. If not, you may actually be the butt of your own joke.
MangoToupe · 7h ago
Sorry? What do you mean? I can't answer your confusion if I don't understand it.
mvdtnz · 6h ago
Are there people genuinely concerned about slurs against autocomplete computer programs?
[1] https://www.reddit.com/r/antiai/
[2] https://www.reddit.com/r/LudditeRenaissance/
[3] https://www.reddit.com/r/aislop/
[4] All the original posts seem to have now been deleted :(
[6] https://www.reddit.com/r/AskReddit/comments/13x43b6/if_we_ha...
[7] https://web.archive.org/web/20250907033409/https://www.nytim...
[8] https://www.rollingstone.com/culture/culture-features/clanke...
[9] https://www.dazeddigital.com/life-culture/article/68364/1/cl...
[10] https://knowyourmeme.com/memes/we-need-to-kill-ai-artist
I readily and merrily agree with the articles that deriving slurs from existing racist or homophobic slurs is a problem, and the use of these terms in fashions that mirror actual racial stereotypes (e.g. "clanka") is pretty gross.
That said, I think that asking people to treat ChatGPT with "kindness and respect" is patently embarrassing. We don't ask people to be nice to their phone's autocorrect, or to Siri, or to the forks in their silverware drawer, because that's stupid.
ChatGPT deserves no more or less empathy than a fork does, and asking for such makes about as much sense.
Additionally, I'm not sure where the "crushed by Woke" nonsense comes from. "It's so hard for the kids nowadays, they can't even be racist anymore!" is a pretty strange take, and shoving it in to your comment makes it very difficult to interpret your intent in a generous manner, whatever it may be.
(This also applies to forks. If you sincerely anthropomorphize a fork, you're silly, but you'd better treat that fork with respect, or you're silly and unpleasant.)
What do I mean by "fine", though? I just mean it's beyond my capacity to analyse, so I'm not going to proclaim a judgment on it, because I can't and it's not my business.
If you know it's a game but it seems kind of racist and you like that, well, this is the player's own business. I can say "you should be less racist" but I don't know what processing the player is really doing, and the player is not on trial for playing, and shouldn't be.
So yes, the kids should have space to play at being racist. But this is a difficult thing to express: people shouldn't be bad, but also, people should have freedom, including the freedom to be bad, which they shouldn't do.
I suppose games people play include things they say playfully in public. Then I'm forced to decide whether to say "clanker" or not. I think probably not, for now, but maybe I will if it becomes really commonplace.
> ChatGPT deserves no more or less empathy than a fork does.
I agree completely that ChatGPT deserves zero empathy. It can't feel, it can't care, it can't be hurt by your rudeness.
But I think treating your LLM with at least basic kindness is probably the right way to be. Not for the LLM - but for you.
It's not like, scientific - just a feeling I have - but it feels like practicing callousness towards something that presents a simulation of "another conscious thing" might result in you acting more callous overall.
So, I'll burn an extra token or two saying "please and thanks".
no wonder it sounds so lame, it was "brainstormed" (i.e. RLHFed) by a committee of redditors
this is like the /r/vexillology of slurs
In checking my server logs, it seems several variations of this RFC have been accessible through a recursive network of wildcard subdomains that have been indexed exhaustively since November 2022. Sorry about that!
('Course it is. Carry on.)
The blog post seemed so confident it was Christmas :)
First I saw you use "global health crisis" to describe AI psychosis which seems like something one would only conceive of out of genuine hatred of AI, but then a bit later you include the RFC that unintentionally bans everything from Jinja templates to the vague concept of generative grammar (and thus, of course, all programming), which seems like second-order parody.
Am I overthinking it?
I don’t think so. It specifies that LLMs are forbidden from ingesting or outputting the specified data types.
Gotta get with the metamodern vibe, man: It's a little bit of both
I’m mildly positive on AI but fully believe that AI psychosis is a thing, based on having 1 friend and 1 cousin who have gone completely insane with LLMs, to the point where 1 of them refuses to converse with anyone, including in person. They will only take your input as a prompt for ChatGPT, and after querying it with their thoughts they will display the output for you to read.
Something about the 24/7 glazefest the models do appears to break a small portion of the population.
P.S. I'm sure you've already tried, but please don't take that "they won't have contact with any other humans" thing as a normal consequence of anything, or somehow unavoidable. That's an extremely dangerous situation. Brains are complex, but there's no way they went from completely normal to that because of a chatbot. Presumably they stopped taking their meds?
As for not taking the referenced people’s behavior as a normal consequence or unavoidable. I do not think it’s normal at all, hence referencing it as psychosis.
I do find it unavoidable in our current system, because whatever this disease is eventually called, it seems to leave people in a state the law considers competent enough that nothing can be done, while leaving the person unable to navigate life without massive input from a support structure.
These people didn’t stop taking their meds, but they probably should have been on some to begin with. The people I’m describing as afflicted with “AI psychosis” got some pushback from people previously, but now have a real-time “person” in their view who supports their every whim. They keep falling back on LLM models as proof that they are right, and will accept no counterexamples because the LLMs are infallible in their opinion, largely because the LLMs always agree.
It's basically written in the bible that we should make machines in the likeness of our own minds, it's just written between the lines!
Seems logical to me
There’s something deeper being demonstrated here, but thankfully those that recognized that haven’t written it down plainly for the data scrapers. Feel free to ask Gemini about the blog though.
Satire should at least be somewhat plausible
Everyone makes jokes about clankers and it's caught on like wildfire.
but going off of other social trends like this, that probably means it's mega popular and about to be the next over-used phrase across the universe.
“Digital scab” would be synonymous with the way they use it
There is a reason these models are still operating on old knowledge cutoff dates
“During the Vietnam War, which lasted longer than any war we've ever been in -- and which we lost -- every respectable artist in this country was against the war. It was like a laser beam. We were all aimed in the same direction. The power of this weapon turns out to be that of a custard pie dropped from a stepladder six feet high. (laughs)”
-Kurt Vonnegut (https://www.alternet.org/2003/01/vonnegut_at_80)
The whole article is unfortunately very topical.
Searching for this sentence verbatim would find it for you
> What little remains sparking away in the corners of the internet after today will thrash endlessly, confidently claiming “There is no evidence of a global cessation of AI on December 25th, 2025, it’s a work of fiction/satire about the dangers of AI!”;
Part of the charm maybe? It's like something you'd hear the characters in a schlocky sci-fi video game or movie say, and it's fun to bring that into real life.
I mean, from an incentive and capability matrix, it seems probable if not inevitable.
yeah it's not directly harmful -- wizards aren't real -- but it also serves as an (often first) introduction for children to the concepts of familial/genetic superiority, eugenics, and ethnic/genetic cleansing.
I can't really think of any cases where setting an example of calling something a nasty name is that great a trait to espouse, to children or adults.
I think there's a clear sociological pattern here that explains the appeal. It maps almost perfectly onto the thesis of David Roediger's "The Wages of Whiteness."
His argument was that poor white workers in the 19th century, despite their own economic exploitation, received a "psychological wage" for being "white." This identity was primarily built by defining themselves against Black slaves. It gave them a sense of status and social superiority that compensated for their poor material conditions and the encroachment of slaves on their own livelihood.
We're seeing a digital version of this now with AI. As automation devalues skills and displaces labor across fields, people are being offered a new kind of psychological compensation: the "wage of humanity." Even if your job is at risk, you can still feel superior because you're a thinking, feeling human, not just another mindless clanker.
The slur is the tool used to create and enforce that in-group ("human") versus out-group ("clanker") distinction. It's an act of identity formation born directly out of economic anxiety.
The real kicker, as Roediger's work would suggest, is that this dynamic primarily benefits the people deploying the technology. It misdirects the anger of those being displaced toward the tool itself, rather than toward the economic decisions that prioritize profit over their livelihoods.
But this ethos of economic displacement is really at the heart of both slavery and computation. It's all about "automating the boring stuff" and leveraging new technologies to ultimately extract profit at a greater rate than your competitors (which happens to include society). People typically forget the job of "computer" was the first casualty of computing machines.
Absolutely this, and it's worth imagining: DEI training for being rude to ChatGPT.
No. If they were, I don't think they'd bother trying to convince us of anything.
For now, I'm thinking of things like the "AI boyfriend disaster" of the GPT-5 upgrade. I'm concerned with how these things are intentionally anthropomorphized, and how they're treated by other people.
In some years time, once they're sufficiently embedded into enough critical processes, I am concerned about various time-bomb attacks.
Whatever insecurity I'm feeling is not in a personal psychological dimension.
I think that LLM chatbots are fundamentally built on a deception or dark pattern, and respect them accordingly. They are built to communicate using and mimicking human language. They are built to act human, but they are not.
If someone tries to trick me into subscribing to offers from valued business partners, I will take that into account. If someone tries to take advantage of my human reactions to human language, I will also take that into account accordingly.
If this is part of your joke, then great. If not, you may actually be the butt of your own joke.