Writing with LLM is not a shame

30 points | by flornt | 54 comments | 8/24/2025, 10:10:29 AM | reflexions.florianernotte.be

Comments (54)

nicbou · 7h ago
I think it's fair to use AI as an editor, to get feedback about how your ideas are packaged.

It's also fair to use it as a clever dictionary, to find the right expressions, or to use correct grammar and spelling. (This post could really use a round of corrections.)

But in the end, the message and the reasoning should be yours, and any facts that come from the LLM should be verified. Expecting people to read unverified machine output is rude.

amiga386 · 6h ago
> Expecting people to read unverified machine output is rude.

Quite. It's the attention economy: you've demanded people's attention, and then you shove crap that even you didn't spend time reading in their face.

Even if you're using it as an editor... you know that editors vary in quality, right? You wouldn't accept a random editor just because they're cheap or free. Prose has a lot in it, not just syntax, spelling and semantics, but style, tone, depth... and you'd want competent feedback on all of that. Ideally insightful feedback. Unless you yourself don't care about your craft.

But perhaps you don't care about your craft. And if that's the case... why should anyone else care or waste their time on it?

dr_dshiv · 6h ago
> It's the attention economy: you've demanded people's attention, and then you shove crap that even you didn't spend time reading in their face.

That’s the rudeness. But this takes care of itself: we just adjust trust accordingly.

ekianjo · 7h ago
> message and the reasoning should be yours,

I think we haven't realized yet that most of us don't really have original thoughts. Even in creative industries, the amount of plagiarism (or so-called inspiration) is at an all-time high (and that was the case before LLMs were available).

aeonik · 6h ago
Even novel thoughts are rarely original.

Every time I come up with an algorithm idea, or a system idea, I'm always checking who has done it before, and I always find significant prior art.

Even for really niche things.

I think my name, Aeonik Chaos, might be one of the only original, never-before-done things. And even that was just an extension of established linguistic rules.

treetalker · 2h ago
My great-great-grandfather was named Aeonik Chaos!
saalweachter · 6h ago
Sure, but also, curation is a service.

An author that does nothing but "plagiarize" and regurgitate the ideas of others is incredibly valuable... if they exercise their human judgement and only regurgitate the most interesting and useful ideas, saving the rest of us the trouble of sifting through their sources.

lewdwig · 6h ago
With code, I’m much more interested in it being correct and good rather than creative or novel. I see it as my job to be the arbiter of taste, because the models are equally happy to create code I’d consider excellent and terrible on command.
dep_b · 7h ago
Just got a few recommendations from my colleagues on LinkedIn that were clearly written by an LLM; the long em dash was even present. But then again, the message was tuned to specific things I did. Also, they were from Eastern Europe, so I imagine they just used it to polish their own input.

If you call yourself a writer, having telltale LLM signs is bad. But for people whose work doesn't involve having a personal voice in written language, it might help them express things in a better way than before.

SweetSoftPillow · 6h ago
I've been using em dashes since long before LLMs existed, and I won't stop. Some people might think it's a sign of an LLM, but I know it's just a sign of their own short-sightedness.
AlecSchueler · 5h ago
It's really frustrating to have to adjust my writing style to seem more human despite being entirely human. Many of us have been using em dashes for a long time; who else do people think the LLMs learnt them from?
d4rkp4ttern · 4h ago
Exactly. I think the whole em dash thing is a nonsense meme propagated by Xfluencers or LinkFluencers.
amiga386 · 6h ago
> it might help them express things in a better way than before.

You know what people did before the AI fad? They read other people's books. They found and talked to interesting people. They found themselves in, or put themselves in, interesting situations. They spent a lot of time cogitating and ruminating before they decided they ought to write their ideas down. They put in a lot of effort.

Now the AI salesmen come and insist you don't need a wealth of experience and talent, you just need their thingy, price £29.99 from all good websites. Now you can be like a Replicant, with your factory-implanted memories instead of true experience.

bilvar · 6h ago
Did people really use to do all that work when someone asked them to write a recommendation on LinkedIn?
amiga386 · 1h ago
No, but people who call themselves writers did, or should.
Gigachad · 6h ago
Craziest thing I saw at work was someone using AI-generated text in a farewell card. It's so obvious, and it's so much more offensive to send someone an AI-generated message than to just not send anything at all.
singpolyma3 · 6h ago
What made it obvious?
latexr · 7h ago
> clearly written by an LLM; the long em dash was even present.

Can we please stop propagating this accusation? Alright, sure, maybe LLMs overuse the em-dash, but it is a valid typographical mark which was in use way before LLMs and is even auto-inserted by default by popular software on popular operating systems—it is never sufficient on its own to identify LLM use (and yes, I just used it—multiple times—on purpose in 100% human-written text).

Sincerely,

Someone who enjoys and would like to be able to continue to use correct punctuation, but doesn’t judge those who don’t.

ginko · 6h ago
So do you always put in the ALT+<code> incantation to get an em dash, or copy & paste?

I feel the em dash is a tell because you have to go out of your way to use it on a computer keyboard. Something anyone other than the most dedicated punctuation geeks won't do for a random message on the internet.

Things are different for typeset books.

latexr · 5h ago
> So do you always put in the ALT+<code> incantation to get an em dash, or copy & paste?

There’s no incantation. On macOS it’s either ⌥- (option+hyphen) or ⇧⌥- (shift+option+hyphen) depending on keyboard layout. It’s no more effort than using ⇧ for an uppercase letter. On iOS I long-press the hyphen key. I do the same for the correct apostrophe (’). These are so ingrained in my muscle memory I can’t even tell you the exact keys I press without looking at the keyboard. For quotes I have an Alfred snippet which replaces "" with “” and places the cursor between them.

But here’s the thing: you don’t even have to do that, because Apple operating systems do it for you by default. Type -- and it converts to —; type ' in the middle of a word and it replaces it with ’; for quotes, it also adds the correct opening and closing ones depending on where you type them.

The reason I type these myself instead of using the native system methods is that those work a bit too well. Sometimes I need to type code in non-code apps (such as in a textarea in a browser) and don’t want the replacements to happen.

> I feel the em dash is a tell because you have to go out of your way to use it on a computer keyboard.

You do not. Again, on Apple operating systems these are trivial and on by default.

> Something anyone other than the most dedicated punctuation geeks won't do for a random message on the internet.

Even if that were true—which, as per above, it's not; you don't have to be that dedicated to type two hyphens in a row—it makes no sense to conflate those who care enough about their writing to use correct punctuation and those who don't even care enough to type the words themselves. They stand at opposite ends of the spectrum.

Again, using em-dashes as one signal is fine; using it as the principal or sole signal is not.

acheron · 3h ago
You type -- and it gets auto-converted.
jascha_eng · 6h ago
Fact is that I maybe saw it in 10% of blogs and news articles before ChatGPT. And now it pops up in emails, Slack messages, HN/Reddit comments, and probably more than half of blog posts?

Yes, it's not a guarantee, but it is a very good signal that something was at least partially LLM-written. It is also a very practical signal; there are a few other signs, but none of them are this obvious.

latexr · 6h ago
> Fact is that I maybe saw it in 10% of blogs and news articles before ChatGPT.

I believe you. But also be aware of the Frequency Illusion. The fact that someone mentions that as an LLM signal also makes you see it more.

https://en.wikipedia.org/wiki/Frequency_illusion

> Yes, it's not a guarantee, but it is a very good signal that something was at least partially LLM-written.

Which is perfectly congruent with what I said with emphasis:

> it is never sufficient on its own to identify LLM use

I have no quarrel with using it as one signal. My beef is when it’s used as the principal or sole signal.

yoz-y · 1h ago
Dubious. The only signal this gives is that, in aggregate, people use AI. On an individual basis, the presence of em dashes means nothing.
CRConrad · 6h ago
> And now it pops up in emails, Slack messages, HN/Reddit comments, and probably more than half of blog posts?

Yeah, maybe that's the one thing people who didn't know how to do it before have learnt from "AI" output.

singpolyma3 · 6h ago
... you know all serious writers use em dashes, right? This is not some magic LLM watermark.
CRConrad · 6h ago
> the long em dash [...] telltale LLM signs

I so wish people would stop spouting this bogus "sign" — but I know I'm going to be disappointed.

mentalgear · 7h ago
It's good for what all other LLMs are good for: semantic search, where the output can be generated text that helps you along. But never get wrapped up in the illusion that there is actual causal thinking. The thinking is still your responsibility; LLMs are just newer discovery/browsing engines.
lewdwig · 7h ago
There are nascent signs of emergent world models in current LLMs; the problem is that they decohere very quickly because they lack any kind of hierarchical long-term memory.

A lot of what the model knows about the structurally important parts of your code gets lost whenever the context gets compressed.

Solving this problem will mark the next big leap in agentic coding, I think.

dsq · 6h ago
I would rewrite the title as "There's no shame in writing with LLMs", or, "Writing with LLMs is nothing to be ashamed of".
dang · 17m ago
You're right, of course, but the original title manages to still be grammatical and the altered meaning has its charm.
jillesvangurp · 6h ago
Of course, there’s no shame in using tools that are available to us. We’re a tool-using species. We’re just a bunch of stupid monkeys without tools. A lot of what we do is about using tools to free up time to do more interesting things than doing things the tools already do better than us.

Like it or not, people are using LLMs a lot. The output isn’t universally good. It depends on what you ask for and how you criticize what comes back. But the simple reality is that the tools are pretty good these days. And not using them is a bit of a mistake.

You can use LLMs to fix simple grammar and style issues, to fact-check argumentation, and to criticize and identify weaknesses. You can also task LLMs with doing background research, double-checking sources, and more.

I’m not a fan of letting LLMs rewrite my text into something completely different. But when I'm in a hurry or in a business context, I sometimes let LLMs do the heavy lifting for my writing anyway.

Ironically, a good example is this article, which makes a few nice points. But it's also full of grammar and style issues that are easily remedied with LLMs without really affecting the tone or line of argumentation (though IMHO that needs work as well). Clearly, the author is not a native speaker. But that's no excuse these days for publishing poorly written text. It's sloppy and doesn't look good. And we have tools that can fix it now.

And yes, LLMs were used to refine this comment. But I wrote the comment.

jaredcwhite · 1h ago
Consumers have a right to know the source of the content they are ingesting into their minds, and specifically whether that content originated in another actual human mind or is slop generated by a synthetic text extruder.

It's really a pretty straightforward proposition to understand, and disclosure is absolutely the key so that consumers, if they choose, as I do, to boycott such output, can make informed decisions.

latexr · 7h ago
> One argument to not disclaim it: people do not disclaim if they Photoshop a picture after publishing it and we are surrounded by a lot of edited pictures.

That is both a false equivalence and a form of whataboutism.

https://en.wikipedia.org/wiki/False_equivalence

https://en.wikipedia.org/wiki/Whataboutism

It is a poor argument in general, and a sure-fire way to increase shittiness in the world: “Well, everyone else is doing this wrong thing, so I can too”. No. Whenever you mention the status quo as an excuse to justify your own behaviour, you should look inward and reflect on your actions. Do you really believe what you’re doing is the right thing? If it is, fine; but if it is not, either don’t mention it or (ideally) do something about it.

> why don’t we see people mentioning they used specific tools to proofread before AI apparition?

Whenever I see this argument, I have a hard time believing it is made in good faith. Can you truly not see the difference between using a tool to fix mistakes in your work and using it to do the work for you?

> It feels like an obligation we have to respect in a way.

This was obvious from the beginning of the post. Throughout, I never got the feeling you were struggling with the question intrinsically, for yourself, but always in the sense of how others would judge your actions. You quote opinion after opinion, and it felt like you were in search of absolution—not truth—for something you had already decided you did not want to do.

flornt · 6h ago
Thanks. I really appreciate your comments. They open up some perspectives I hadn't considered and give me more to think about. I'll digest them and update the content based on your observations!
klabb3 · 6h ago
It's very similar to the Stack Overflow debate of the previous decade. Bad developers would copy-paste without understanding. It's the same here. Without understanding, you just can't build very sophisticated things or debug hard issues. And even if AI gets better at this, anyone else can do it too, so you'll be a dime-a-dozen engineer.

Those who don't compromise on understanding will benefit from an extra tool under their belt. Those who actively leverage the tool to improve their understanding will do even better.

Those who want shortcuts and don't bother understanding are like people cheating in school – not in a morally wrong way, but in a "they missed the entire point" way.

tjpnz · 7h ago
If you don't have time to write it I'm not going to make time to read it.
echelon_musk · 7h ago
Writing with LLMs is not a shame

Or

Writing with an LLM is not a shame

latexr · 6h ago
I’d suggest “not shameful” instead of “not a shame”.
riz_ · 7h ago
Should have written with an LLM.
squid_ca · 7h ago
You’re absolutely right!
ares623 · 6h ago
Not just right - genius!
CRConrad · 6h ago
What says they didn't?
ekianjo · 6h ago
> Writing with an LLM is not a shame

Should be "Writing with a LLM is not a shame", no reason to put a "an" here.

catapart · 6h ago
"el" begins with a vowel sound, so "an" is the appropriate article.

It's not about the letter, it's about practical pronunciation: "an R" but "a U", and "an M" or "an F".

CRConrad · 6h ago
Nope, that's not how a / an works in English.
lewdwig · 7h ago
I use Claude Code almost daily now, and I think I’d rather cut off my own arm than go without it, but I don’t delude myself into thinking that current-gen tools don’t have significant limitations, and I know it is my job to manage those limitations.

So just like any other tool really.

I have discovered this week that Claude is really good at red-teaming code (and specs, and ADRs, and test plans), much better than most human devs, who don’t like doing it because it’s thankless work and don’t want to be “mean” to colleagues by being overly critical.

torium · 7h ago
Would you share with us what kind of job you do?

I keep seeing people saying how amazing it is to code with these things, and I keep failing at it. I suspect that they're better at some kinds of codebases than others.

girvo · 6h ago
> I suspect that they're better at some kinds of codebases than others.

Probably. My work's custom dev agent poops the bed on our front-end monorepo unless you're very careful about context, but then being careful about context is sort of the name of the game anyway...

I'm using them, mainly for scaffolding out test boilerplate (but not actual tests; most of its output there is useless) and so on, or component code structure based on how our codebase works. Basically a way more powerful templating tool, I guess.

lewdwig · 6h ago
Devops/SRE/Platform Engineering

Downside: lots of Python, and Python indentation causes havoc with a lot of agentic coding tools. RooCode in particular seems to mangle diffs all the time, irrespective of model.

ath3nd · 6h ago
Riding a bike with training wheels is also not a shame. If you need the training wheels, by all means feel free to use them.

But LLMs are training wheels being forced on everyone, including experienced developers, and we are being gaslit into thinking that if we don't use them, we are falling behind. In reality, however, the only study to date shows a 19% decline in productivity for experienced devs using LLMs.

I don't mind folks using crutches if they help them. The effect of LLMs on cognitive decline and reasoning skills is not yet well studied, but preliminary results show it's a thing. I gotta ask: why are you guys doing that to yourselves?

wfhrto · 7h ago
At this point, it would be shameful to not write with LLMs. I don't want to spend time reading plain human text when improved AI text is an option.
latexr · 6h ago
> improved AI text

It is certainly your prerogative to believe that, but know that your opinion is far from universal. It is a widespread view that AI-written text is worse.

lomase · 6h ago
> improved AI text

Why are you on hackernews and not talking to an LLM?