AI could have written this: Birth of a classist slur in knowledge work [pdf]

40 points by deverton · 52 comments · 7/22/2025, 2:39:03 AM · advait.org ↗

Comments (52)

kristjank · 7h ago
I don't think we, as a wider scientific/technical society, should care about the opinion of a person who uses "epistocratic privilege" as a serious term. This stinks to high heaven of working backwards from a conclusion to prove it.

The cognitive dissonance required to imply that expecting knowledge from a knowledge worker, or from knowledge-centered discourse, is a form of boundary work or discrimination is extremely destructive to any productive work. Most of the sciences and technological systems depend on a very fragile notion of knowledge preservation and on incremental improvements to a system that is intentionally pedantic, precisely to provide stable ground for progress. In a lot of fields, replacing this structure with AI is still very much impossible, but explaining why for each example an LLM blurts out is tedious work. I need to sit down and solve a problem the right way, and in the meantime ChatGPT can generate about 20 false solutions.

If you read the paper, the author even uses terms related to discrimination by immutable characteristics, invokes xenophobia, and quotes a Black student who calls the discouragement of using AI to cheat racist.

This seems to me utter insanity, and it should not only be ignored but actively pushed back against on the grounds of anti-intellectualism.

randomcarbloke · 3h ago
Being a pilot is an epistocratic privilege, and pilots should welcome the input of the less advantaged.
yhoiseth · 8h ago
Sarkar argues that “AI shaming arises from a class anxiety induced in middle class knowledge workers, and is a form of boundary work to maintain class solidarity and limit mobility into knowledge work.”

I think there is at least some truth to this.

Another possible cause of AI shaming is that reading AI writing feels like a waste of time. If the author didn’t bother to write it, why should I bother to read it and provide feedback?

alisonatwork · 7h ago
This latter piece is something I am struggling with.

I have spent 10+ years working on teams primarily composed of people whose first language is not English, in workplaces where English is the mandated business language. Ever since the earliest LLMs started appearing, the written communication of non-native speakers has become a lot clearer grammatically, but also a lot more florid and pretentious than they actually intend. This is really annoying to read, because you need to mentally decode the LLM-ness of their comments/messages/etc. back into normal English, which ends up costing more cognitive overhead than reading their blunter and/or broken English used to. But, from their perspective, I also understand that it saves them cognitive effort to just type a vague notion into an LLM and ask for a block of well-formed English.

So, in some way, this fantastic tool for translation is resulting in worse communication than we had before. It's strange and I'm not sure how to solve it. I suppose we could use an LLM on the receiving end to decode the rambling mess the LLM on the sending end produced? This future sucks.

xwolfi · 5h ago
It's normal: you add a layer between the two brains that are communicating, and that layer only adds statistical experience to the message.

I write letters to my gf, in English, while English is not our first language. I would never ever put an LLM between us: this would fall flat, remove who we are, be a mess of cultural references, it would just not be interesting to read, even if maybe we could make it sound more native, in the style of Barack Obama or Prince Charles...

LLMs are going to make people as dumb as GPS made them. Except that where reading a map was never a very useful skill, writing what you feel... should be.


dist-epoch · 5h ago
I thought about this too. I think the solution is to send both the prompt and the output, since the output was itself selected by the human from potentially multiple variants.

Prompt: I want to tell you X

AI: Dear sir, as per our previous discussion let's delve into the item at hand...

throwaway78665 · 7h ago
If knowledge work doesn't require knowledge then is it knowledge work?

The main issue with current AI is that without knowledge (at least at some level) you can't validate its output.

visarga · 6h ago
> why should I bother to read it and provide feedback?

I like to discuss a topic with an LLM and generate an article at the end. It is more structured and better worded, but it still reflects my own ideas. I only post these articles on a private blog; I don't pass them off as my own writing. But I find the exercise useful because I use LLMs as a brainstorming and idea-debugging space.

dist-epoch · 5h ago
> If the author didn’t bother to write it, why should I bother to read it

There is an argument that luxury goods are valuable because they are typically handmade: in a sense, what you are buying is not the item itself but the untold hours "wasted" creating it for your exclusive use. In a sense it's "renting a slave" - you have control over another human's time, and that is a power trip.

You have expressed it perfectly: "I don't care about the writing itself, I care about how much effort a human put into it"

satisfice · 5h ago
If effort wasn’t put into it, then the writing cannot be good, except by accident or theft or else it is not your writing.

If you want to court me, don’t ask Cyrano de Bergerac to write poetry and pass it off as your own.

dist-epoch · 4h ago
> If effort wasn’t put into it, then the writing cannot be good

This is what people used to say about photography versus painting.

> pass it off as your own.

That is misleading/fraud, and a separate subject from the quality of the writing.

vanschelven · 7h ago
This reads like yet another attempt to pathologize perfectly reasonable criticism as some form of oppression. Calling “AI could have written this” a classist slur is a stretch so extreme it borders on parody. People say that when writing lacks originality or depth — not to reinforce some imagined academic caste system. The idea that pointing out bland prose is equivalent to sumptuary laws or racial gatekeeping is intellectual overreach at its finest. Ironically, this entire paper feels like something an AI could have written: full of jargon, light on substance. And no, there’s no original research, just theory stacked on theory.
raincole · 6h ago
> Calling “AI could have written this” a classist slur is a stretch so extreme it borders on parody.

Poe's law is rampant in AI discussions. You can never tell what is parody and what is not.

There was a (former) xAI employee who was fired for advocating the extinction of humanity.

terminalshort · 6h ago
Reading this makes me understand why there is a political movement to defund universities.
throwaway2562 · 1h ago
The real shame of it is that the OP claims affiliation with two respectable universities (UCL and Cambridge) and one formerly credible venue (CHI).

Mock scholarship is rampant. I agree: this stuff makes me understand the yahoos with a defunding urge too - not something I ever expected to feel any sympathy for, but here we are.

laurent_du · 5h ago
It makes me sick to my heart to think that money is stolen from my pocket to be given to lunatics of this kind.
miningape · 6h ago
Overall, this comes across as extremely patronising: to authors, by running defence for obviously sub-par work on the grounds that their background makes it "impossible" for them to do good work; and to commenters, by assuming an ill intent towards the less privileged that needs to be controlled.

And it's all wrapped in a lovely package of AI apologetics - wonderful.

So, honestly, no. The identity of the author doesn't matter; if it reads like AI slop, the author should be grateful I even left an "AI could have written this" comment.

mgraczyk · 8h ago
I'd like to brag that I got in trouble for saying this to somebody in 2021, before ChatGPT
andrelaszlo · 7h ago
I put a chapter of a paper I wrote in 2016 into GPTZero and got a probability breakdown of 90% AI, 10% human. I am 100% human and wrote it myself, so I guess I'm lucky I didn't hand it in this year, or I could have been accused of cheating?
rcxdude · 6h ago
That's more an indictment of the accuracy of such tools. Writing in a very "standard" style, like that found in papers, is going to match LLM predictions well regardless of origin.
tough · 7h ago
Maybe GPTZero had your paper in its training data (it being from 2016)?
mgraczyk · 7h ago
I wasn't being serious when I said it; I was using it as an insult for bad work.
UncleMeat · 2h ago
"We have to use AI to achieve class solidarity" is insane to me.

People realize that the bosses all love AI because they envision a future where they don't need to pay the rabble like us, right? People remember leaders in the Trump administration going on TV and saying that we should have fewer laptop jobs, right?

That professor telling you not to use ChatGPT to cheat on your essay is likely not a member of the PMC but is probably an adjunct getting paid near-poverty wages.

sshine · 2h ago
Synthetic beings will look back at this with great curiosity.
satisfice · 8h ago
This paper presents an elaborate straw-man argument. It does not faithfully represent the legitimate concerns of reasonable people about the persistent and irresponsible application of AI in knowledge work.

Generative AI produces work that is voluminous and difficult to check. It presents such challenges to people who apply it that they, in practice, do not adequately validate the output.

The users of this technology then present the work as if it were their own, which misrepresents their skills and judgement, making it more difficult for other people to evaluate the risk and benefits of working with them.

It is not the mere otherness of AI that results in anger about it being foisted upon us, but the unavoidable disruption to our systems of accountability and ability to assess risk.

forgetfreeman · 7h ago
Additionally, their use of the term "slur" for what is frequently a valid criticism seems questionable.
satisfice · 5h ago
It is itself a form of bullying.
kelseyfrog · 8h ago
While it would have been a better paper if the author had collaborated with a sociologist, it would also have been less likely to be taken seriously by HN, for the same class anxieties its title is founded on.
miningape · 6h ago
Excuse us for expecting evidence and intellectual rigour. :D

I've taken a number of university sociology courses, and from those experiences I came to the opinion that sociology's current iteration is really just grievance airing in an academic tone. It doesn't really justify its own existence beyond being a Buzzfeed for academics.

I'm not even talking about slightly more rigorous subjects such as psychology or political science, which modern sociology uses as a shield for its lack of a feedback mechanism.

Don't get me wrong, I realise this is an opinion formed from admittedly limited exposure to sociology (~3 semesters). It could also be that the university I went to particularly leaned on "grievance airing".

stuaxo · 8h ago
The state of this headline.
_vertigo · 9h ago
Honestly, AI could have written this.
readthenotes1 · 9h ago
That TL;DR table at the top looks a lot like what Perplexity provides at the bottom...
s0teri0s · 7h ago
The obvious response is, "Oh, it will."
mvdtnz · 5h ago
Gosh I wonder why there's a cultural backlash against the "intellectual" elite.
andsoitis · 11h ago
Would love to read, but it seems heavily paywalled, so can't.
deverton · 11h ago
The author seems to be hosting the full PDF on their website https://advait.org/files/sarkar_2025_ai_shaming.pdf
tomhow · 9h ago
Thanks, we updated the URL!
renewiltord · 8h ago
This is just like the way some people decided that "Blue Check" should be an insult on Twitter. Occasionally people still say it, but almost everyone ignores it. Fads like this are common on the Internet. It's just like any other clique: a few people accidentally become tastemakers while a bunch of replicators simply repeat mindless things over and over again - "slop", "mask-off moment", "enshittification", "ghoulish". Just words that people repeat because other people say them and get likes/upvotes/retweets or whatever.

The "Blue Check" insult regime didn't get anywhere, and I doubt any anti-LLM/anti-diffusion-model stuff will last. "Stop trying to make fetch happen." The tools are just too useful.

People on the Internet are just weird. Some time in the early 2010s the big deal was "fedoras". Oh you're weird if you have a fedora. Man, losers keep thinking fedoras are cool. I recall hanging out with a bunch of friends once and we walked by a hat shop and the girls were all like "Man, you guys should all wear these hats". The girls didn't have a clue: these were fedoras. Didn't they know that it would mark us out as weird losers? They didn't, and it turned out it doesn't. In real life.

It only does on the Internet. Because the Internet is a collection of subcultures with some unique cultural overtones.

throwawaybob420 · 7h ago
Sounds like something a blue checker would say. And yes, if you pay for Twitter you're going to get clowned on.

And what the hell is that segue into fedoras? The entire meme exists because stereotypically clueless individuals took fedoras to be the pinnacle of fashion, while disregarding nearly everything else about not only their outfit but their bodies.

This entire comment reeks of not actually understanding anything.

TeMPOraL · 7h ago
Found that user who memorized KnowYourMeme and thinks they're a scholar of culture now.

Or a cheap LLM acting as them, and wired up to KnowYourMeme via MCP? Can't tell these days. Hell, we're one weekend side project away from seeing "on the Internet, no one knows you are a dog" become true in a literal sense.

/s

pluto_modadic · 6h ago
Blue checks are orthogonal - they're more a rough approximation of "I bought a Cybertruck when Musk went full crazy" (and yes, it's a bad look). Judging some blog post for seeming like AI is different.