In the long run, LLMs make us dumber

50 points by speckx · 8/21/2025, 7:10:39 PM · desunit.com

Comments (30)

codespin · 11m ago
Just as the engine replaced physical strength, artificial intelligence, in the form of large language models, is now replacing cognitive labor and thought.

From the article: "Muscles grow by lifting weights." Yet we now do that as a hobby, not as a critical job. I'm not sure I want to live in a world where thinking is a gym-like activity. Then again, if you went back 200 years, it would probably be difficult to explain today's situation to someone living in a world where most people did physical labor themselves or used animals to do it.

deepsun · 31m ago
Plato was against writing, arguing that it makes us dumber.

https://fs.blog/an-old-argument-against-writing/

Refreeze5224 · 8m ago
I imagine his memory, and those of people who memorized instead of writing, were better. So by that metric, writing is making people dumber. It's just not all that relevant today, and we don't prioritize memorization to the extent Plato and the ancient Greeks probably did.
neom · 24m ago
Bit tangential, but I find oral traditions really interesting, the sheer scale of what can be done is quite impressive: https://blog.education.nationalgeographic.org/2016/04/08/abo... -- https://en.wikipedia.org/wiki/Songline
FollowingTheDao · 26m ago
Sounds like you’re saying this in favor of AI, but I’m taking it as being in favor of both AI and writing.
CuriouslyC · 26m ago
LLMs haven't made me dumber, but they have made me lazier. I think about writing code by hand now and groan.
sitzkrieg · 25m ago
that's kinda embarrassing
CuriouslyC · 16m ago
Would you groan if you had to take public transit while your car was broken down?

If you love to knit, that's cool, but don't get on me because I'd rather buy a factory sweater and get on with my day.

I love creating things, I love solving problems, I love designing elegant systems. I don't love mashing keys.

WD-42 · 4s ago
I love how you immediately go for public transit as an analogy for something regrettable. Fits.
blibble · 5m ago
my public transport is faster and cheaper than driving...
scarface_74 · 3m ago
I don’t get paid to “write code”. I use my 30 years of professional industry experience to either make the company money or save it money, and in exchange for my labor, they put money in my account and, formerly, RSUs in my brokerage account.

It’s not about “passion”. It’s purely transactional and I will use any tool that is available to me to do it.

If an LLM can make me more efficient at that, so be it. I’m also not spending months getting a server room built out to hold a SAN that stores a whopping 3TB, like in 2002. I write four lines of YAML to provision an S3 bucket.
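
Something like this, as a minimal CloudFormation-style sketch (the DataBucket logical ID is made up for illustration):

    Resources:
      DataBucket:                # hypothetical logical ID
        Type: AWS::S3::Bucket
        DeletionPolicy: Retain   # keep the data if the stack is deleted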

tptacek · 50m ago
I buy this for writing. There's a very limited set of things GPT is good at for improving my writing (basic sentence voice and structure, overused words), but mostly I find it makes my writing worse, and I don't trust any argument it makes because, as the post observes, I haven't thought it through and had the opportunity to second-guess it myself.

Also it has a high opinion of Bryan Ferry. Deeply untrustworthy.

But I don't buy this at all for software development. I find myself thinking more carefully and more expansively, at the same time, about solving programming problems when I'm assisted by an LLM agent, because there's minimal exertion to trying multiple paths out and seeing how they work out. Without an agent, every new function I write is a kind of bet on how the software is going to come out in the end, and like every human I'm loss-averse, so I'm not good at cutting my losses on the bad bets. Agents free me from that.

chankstein38 · 45m ago
That's wild. My experience has been vastly different. ChatGPT, Claude, Claude Code, Gemini, whatever it may be: even the simplest scripts I've had them write usually come out with issues. As far as writing functions is concerned, it's way less risky for me to write them based on my prior knowledge than to ask ChatGPT to write the entire thing, paste it in, and call it good.

I do use it for learning and to help me access new concepts I've never thought about, but if you're not verifying what it's writing yourself and understanding what it's written yourself, then I hope I never have to work on code you've written. If you are, then you are not doing what the article is talking about.

tptacek · 37m ago
I don't know what you're having it write; I mostly have it write Go. When I ask it to write shell scripts, its shell scripts are better than what I would have written (my daily drivers are whatever Sketch.dev is using under the hood --- Claude, I assume --- and Gemini).

I've been writing Go since ~2012 and coding since ~1995. I read everything I merge to `main`. The code it produces is solid. I don't know that it one-shots stuff; I work iteratively and don't care enough to try to make it do that, I just care about the endpoint. The outcomes are very good.

I know I'm not alone in having this experience.

chankstein38 · 13m ago
That makes sense! I frequently have it write Python. I'll say though, working in Go for more than a decade and coding for longer than a lot of people have been alive is likely proof you're not one of the people this article is talking about. I don't think I've been made stupider by LLMs either, but, like someone else said, maybe a bit lazier about things. I'm not the author, so I should stop talking as if I know their thoughts, but, at least in my opinion, this message is more important for the swathes of people who don't have 10-20 years of experience solving complex problems.
tptacek · 6m ago
I'm not that old.
reptation · 22m ago
Remake/Remodel is an all-time great! https://youtu.be/m-zSnO7sbXg?list=RDm-zSnO7sbXg
cabacon · 59m ago
Plato's _Phaedrus_ features Socrates arguing against writing: "They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks."

I have heard people argue that the use of calculators (and later, specifically graphing calculators) would make people worse at math; quick searching found papers like https://files.eric.ed.gov/fulltext/ED525547.pdf discussing the topic.

I can't see how the "LLMs make us dumber" argument is different than those. I think calculators are a great tool, and people trained in a calculator-having environment certainly seem to be able to do math. I can't see that writing has done anything but improve our ability to reason over time. What makes LLMs different?

chankstein38 · 51m ago
Because they do it all for us, and they frequently do it wrong. We're not offloading just the calculation or the typing to the tool; we're using it to solve the whole problem for us.

Calculators don't solve problems, they solve equations. Writing didn't kill our memories because there's still so much to remember that we almost have to write things down to be able to retain it.

If, instead of doing your own research, presenting the LLM with your solution, and letting it point out errors, you just type "How do I make ____?", it's doing the entire thought process for you right there. And it may be leading you wrong.

That's my view on how it's different at least. They're not calculators or writing. They're text robots that present solutions confidently and offer to do more work immediately afterwards, usually ending a response in "Want me to write you a quick python script to handle that?"

A thought experiment: if you're someone who has used a calculator to calculate 20% tips your whole life, try to calculate one without it. Maybe you specifically don't struggle because you're good at math or have a lot of math experience elsewhere, but if you have approached it the way this article is calling bad, you'd simply have no clue where to start.

cabacon · 22m ago
I guess my point is that the argument being made is "if you lift dumbbells with a forklift, you aren't getting strong by exercising". And that's correct. But that doesn't mean that the existence of forklifts makes us weaker.

So, I guess I'm just saying that LLMs are a tool like any other. Their existence doesn't make you worse at what they do unless you forgo thinking when you use them. You can use a calculator to efficiently solve a wrong equation; you have to think about what it is going to solve for you. You can use an LLM to make a bad argument for you; you have to think about what you're going to have it produce for you.

I was just feeling anti-alarmist-headline - there's no intrinsic reason we'd get dumber because LLMs exist. We could, but I think history has shown that this kind of alarmism doesn't come to fruition.

chankstein38 · 10m ago
Fair! I'd definitely agree with that! I don't really know the author's intentions here, but my read of this article is that it's for the people who ARE skipping thinking entirely when using them. I agree completely: to me, LLMs are effectively a slightly more useful (sometimes vastly more useful) search engine. They help me find out about features or mechanisms I didn't know existed and demonstrate their value to me. I am still the one doing the thinking.

I'd argue we're using them "right" though.

tines · 26m ago
The analogy falls apart because calculating isn't math. Calculating is more like spelling, while math is more akin to writing. Writing and math are creative, spelling and calculating are not.
toss1 · 23m ago
>>What makes LLMs different?

Good question!

Writing or calculators likely do reduce our ability to memorize vast amounts of text or do arithmetic in our heads; but to write, or to do math with writing and calculation, we still must fully load those intermediate facts into our brains and fully understand what was previously written down or calculated in order to wield and wrangle it into a new piece of work.

In contrast, LLMs (unless used with great care, as only one research input) can produce a fully written answer without ever really requiring the 'author' to load the details of the work into their brain. LLMs basically reduce the task to editing, not writing. Since editing is not the same as writing, it is no surprise the study shows a serious inability to remember quotes from the "written" piece.

Perhaps it is similar to learning a new language, where we are able to read at a higher level of complexity much sooner than we can write or speak it?

cabacon · 17m ago
I have a kid in high school who uses LLMs to get feedback on essays he has written. It will come back with responses like "you failed to give good evidence to support your point that [X]", or "most readers prefer you to include more elaboration on how you changed subject from [Y] to [Z]".

You (and another respondent) both cite the case where someone unthinkingly generates a large swath of text using the LLM, but that's not the only modality for incorporating LLMs into writing. I'm with you both on your examples, fwiw; I just think that considering only that way of using LLMs for writing is putting on blinders to the productive ways they can be used.

It feels to me like people are reacting to the fact that we haven't figured out how to work LLMs into our pedagogy, and that their existence undermines certain ways we've become accustomed to measuring whether people have learned what we intended them to learn. There's certainly a lot of societal adaptation that should put guardrails around their utility to us, but when I see "They will make us dumb!" it just sets off a contrarian reaction in me.

dothereading · 45m ago
I agree with this, but at the same time I think LLMs will make anyone who wants to learn much smarter.
BizarroLand · 14m ago
Dumb is more the inability to make expedient, salient, and useful decisions, whether from a lack of knowledge or from a fundamental inability to process the available knowledge.

Dumb is accidental or genetic.

AI won't affect how dumb we are.

I think AI will decrease the utility of crystallized-knowledge skills and increase our fluid-knowledge skills. Smart people will still find ways to thrive in the environment.

Human intelligence will continue moving forward.

blamestross · 57m ago
It's all about who "us" is.

Individuals? Most information technology makes us dumber in isolation, but with the tools we end up net faster.

The scary thing is that it is less about making things "better" than about making them cheaper. AI isn't winning on skill; it's winning on being "80% the quality at 20% the price."

So if you see "us" as the economic super-organism managed by very powerful people, then it makes us a lot smarter!

j45 · 43m ago
If it's doing the thinking for you, it's just like social media, but much more intense.
whydoineedthis · 30m ago
Similar fear-mongering happened when calculators came about. No one got dumber; we just got faster at doing simple math. Working out complex math will always be interesting to those who really want to do it, and the rest likely won't contribute much anyway; they're just consumers. Let the kids have their wordy calculators; it may actually unblock critical paths of success needed for someone to really go deep.
BizarroLand · 11m ago
Yep. I force memorized so many calculations because our teachers constantly told us that in the future we wouldn't always have a calculator with us.

It was helpful, I got pretty far along in collegiate math without tutors or assistance thanks to the hard calculation skills I drilled into my head.

But, counterpoint, if I leave my calculator/computer/all in one everything device at home on any given day it can ruin my entire day. I haven't gone 72 hours without a calculator in nearly a decade.