When working well, they enable us to offload the need to memorize a Wikipedia's worth of information and think about higher-level problems instead; we become more intelligent at higher-level solutions. Of course people don't remember what was written if they were required to "submit an essay" where the main grade is whether or not they submitted one, and the topic may have been one that didn't interest them. Ask people to write essays about things they're truly, honestly interested in, and those with access to an LLM are likely able to enrich their knowledge faster than those without.
codespin · 32m ago
Just as the engine replaced physical strength, artificial intelligence, in the form of large language models, is now replacing cognitive labor and thought.
From the article: "Muscles grow by lifting weights." Yet we do that now as a hobby, not as a critical job. I'm not sure I want to live in a world where thinking is a gym-like activity. However, if you go back 200 years, it would probably be difficult to explain today's situation to someone living in a world where most people are doing physical labor or using animals to do it.
pixl97 · 12s ago
>back 200 years it would probably be difficult to explain the
"Almost everyone lives a life closer to that of nobility or the merchent class"
I'm sure the vast majority of the people from that time would rather live in ours if explained that way.
amdivia · 6m ago
I doubt that would happen.
The engine provides artificial strength, granted, but AI does not provide artificial intelligence. It's a misnomer.
evanjrowley · 11m ago
Impressive research, but I can't help feeling it's fundamentally flawed. The analysis considered "essay ownership" a property of the LLM, Search, and Brain-Only participants, but what would have been more valuable is flipping all of the graphs based on perceived ownership levels. On average, fewer LLM users felt a sense of ownership, and this should not surprise anyone. The researchers lumped together people who let LLMs do all of the writing with people using LLMs in constructive ways. What would have been more interesting is studying the LLM users who maintained a sense of ownership, because then we could learn more ways to use LLMs that potentially make us smarter.
I also feel like there's more to be said about LLMs fostering the ability to ask questions better than you might if you primarily used search. If the objective was to write, for example, about an esoteric organic chemistry topic, and a "No Brain" group of non-experts was only allowed to formulate a response by asking real-life experts as much as they can about the esoteric topic, then would users more experienced with LLMs come out ahead on the essay score? Understanding how to leverage a tight communication loop most effectively is a skill that the non-LLM groups in this study should be evaluated on.
I imagine his memory, and the memories of people who memorized instead of writing, were better. So by that metric, writing is making people dumber. It's just not all that relevant today, and we don't prioritize memorization to the extent Plato and the ancient Greeks probably did.
jazzyjackson · 17m ago
It probably made us worse orators
FollowingTheDao · 47m ago
Sounds like you're saying this in favor of AI, but I'm taking it as being in favor of both AI and writing.
CuriouslyC · 47m ago
LLMs haven't made me dumber, but they have made me lazier. I think about writing code by hand now and groan.
gerdesj · 11m ago
So how do you interact with your LLM? (by hand perhaps?)
I find prompt fettling a great way of getting to grips with a problem. If I can explain my problem effectively enough to get a reasonable start on an answer, then I likely thoroughly understand my problem.
An LLM is like a really fancy slide rule or calculator (I own both). It does have a habit of getting pissed and talking bollocks periodically. Mind you, so do I.
CuriouslyC · 4m ago
I voice-chat with ChatGPT to generate a very, very detailed architecture document / spec / PRD / whatever, ordered as an implementation checklist. Then I save it as a "VISION.md" file in the repo and queue up a command for the agent to start working on the problem. I have detailed subagent and task-forking logic in my Claude setup, so I can get plans that involve 7-8 subagents (some invoked via `claude -p` for parallel execution) and take a project from that spec to a very real project in ~6-8 hours, during which time my agents will typically phone home to see if they should continue maybe 2 or 3 times (I have anti-stopping hooks, but they're lazy SoBs).
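Roughly, the kickoff is something like this (a sketch, not my exact setup; it assumes the Claude Code CLI, where `claude -p` runs a single non-interactive "print mode" pass, and the hooks/subagent config is omitted):

```sh
# Sketch of the spec-to-agents kickoff described above.
# Assumes the Claude Code CLI; hook and subagent config omitted.

# 1. Drop the voice-chat-generated spec into the repo.
cat > VISION.md <<'EOF'
# Architecture / spec, ordered as an implementation checklist
- [ ] item 1 ...
- [ ] item 2 ...
EOF

# 2. Fan independent checklist items out to parallel,
#    non-interactive runs (-p = print mode).
claude -p "Read VISION.md and implement checklist item 1" &
claude -p "Read VISION.md and implement checklist item 2" &
wait   # block until both runs finish
```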
sitzkrieg · 46m ago
that's kinda embarrassing
CuriouslyC · 38m ago
Would you groan if you had to take public transit while your car was broken down?
If you love to knit that's cool but don't get on me because I'd rather buy a factory sweater and get on with my day.
I love creating things, I love solving problems, I love designing elegant systems. I don't love mashing keys.
WD-42 · 21m ago
I love how you immediately go for public transit as an analogy for something regrettable. Fits.
CuriouslyC · 18m ago
I put my time in on the city bus, brother. In the time that I had the displeasure, I had bodily fluids thrown on me and someone almost stabbed me. Maybe you have first-class magic fairy buses with plush reclining seats and civil neighbors, but that's not the norm.
blibble · 26m ago
my public transport is faster and cheaper than driving...
sys_64738 · 8m ago
How much of your coding time do you spend writing mundane, repetitive code that should just instantly appear? I think there's a dual benefit: it removes that drudgery and also gives you more time to write new, interesting code, so you achieve your goals and priorities sooner.
nomel · 2m ago
Wait until you hear how code used to be written!
scarface_74 · 24m ago
I don’t get paid to “write code”. I use my 30 years of professional industry experience to either make the company money or to save the company money and in exchange for my labor, they put money in my account and formerly RSUs in my brokerage account.
It’s not about “passion”. It’s purely transactional and I will use any tool that is available to me to do it.
If an LLM can make me more efficient at that, so be it. I'm also not spending months getting a server room built out to hold a SAN with a whopping 3TB of storage like in 2002. I write four lines of YAML to provision an S3 bucket.
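For concreteness, it really is on that order (a minimal CloudFormation sketch, assuming that's the tool in question; the bucket's logical ID is a placeholder):

```yaml
# Minimal CloudFormation template: one S3 bucket,
# all settings left at their defaults.
Resources:
  ExampleBucket:        # placeholder logical ID
    Type: AWS::S3::Bucket
```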
tptacek · 1h ago
I buy this for writing. There's a very limited set of things GPT is good at for improving my writing (basic sentence voice and structure stuff, overusing words), but mostly I find it makes my writing worse, and I don't trust any argument it makes because, as the post observes, I haven't thought them through and had the opportunity to second-guess them myself.
Also it has a high opinion of Bryan Ferry. Deeply untrustworthy.
But I don't buy this at all for software development. I find myself thinking more carefully and more expansively, at the same time, about solving programming problems when I'm assisted by an LLM agent, because there's minimal exertion to trying multiple paths out and seeing how they work out. Without an agent, every new function I write is a kind of bet on how the software is going to come out in the end, and like every human I'm loss-averse, so I'm not good at cutting my losses on the bad bets. Agents free me from that.
spondylosaurus · 17m ago
> Also it has a high opinion of Bryan Ferry. Deeply untrustworthy.
Whoa, whoa, are we talking Bryan Ferry as an artist, or Bryan Ferry as a guy? Because I love me some Roxy Music but have heard that Bryan is kind of a dick.
chankstein38 · 1h ago
That's wild. My experience has been vastly different. ChatGPT, Claude, Claude Code, Gemini, whatever it may be: even the simplest scripts I've had them write usually come out with issues. As far as writing functions is concerned, it's way less risky for me to write them based on my prior knowledge than to ask ChatGPT to write the entire thing, paste it in, and call it good.
I do use it for learning and to help me access new concepts I've never thought about, but if you're not verifying what it's writing and understanding what it's written yourself, then I hope I never have to work on code you've written. If you are, then you are not doing what the article is talking about.
tptacek · 59m ago
I don't know what you're having it write; I mostly have it write Go. When I ask it to write shell scripts, its shell scripts are better than what I would have written (my daily drivers are whatever Sketch.dev is using under the hood --- Claude, I assume --- and Gemini).
I've been writing Go since ~2012 and coding since ~1995. I read everything I merge to `main`. The code it produces is solid. I don't know that it one-shots stuff; I work iteratively and don't care enough to try to make it do that, I just care about the endpoint. The outcomes are very good.
I know I'm not alone in having this experience.
chankstein38 · 34m ago
That makes sense! I frequently have it write Python. I'll say, though, that writing Go for more than a decade and coding for longer than a lot of people have been alive is likely proof you're not one of the people this article is talking about. I don't think I've been made stupider by LLMs either, but, like someone else said, maybe a bit lazier about things. I'm not the author, so I should stop talking as if I know their thoughts, but, at least in my opinion, this message is more important for the swathes of people who don't have 10-20 years of experience solving complex problems.
Plato's _Phaedrus_ features Socrates arguing against writing: "They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks." (https://fs.blog/an-old-argument-against-writing/)
I have heard people argue that the use of calculators (and later, specifically graphing calculators) would make people worse at math; quick searching found papers like https://files.eric.ed.gov/fulltext/ED525547.pdf discussing the topic.
I can't see how the "LLMs make us dumber" argument is different than those. I think calculators are a great tool, and people trained in a calculator-having environment certainly seem to be able to do math. I can't see that writing has done anything but improve our ability to reason over time. What makes LLMs different?
chankstein38 · 1h ago
Because they do it all for us, and they frequently do it wrong. We're not offloading the calculation or the typing to the thing; we're using it to solve the whole problem for us.
Calculators don't solve problems, they solve equations. Writing didn't kill our memories because there's still so much to remember that we almost have to write things down to be able to retain it.
If you don't do your own research and present the LLM with your solution and let it point out errors and instead just type "How do I make ____?" it's solving the entire thought process for you right there. And it may be leading you wrong.
That's my view on how it's different at least. They're not calculators or writing. They're text robots that present solutions confidently and offer to do more work immediately afterwards, usually ending a response in "Want me to write you a quick python script to handle that?"
A thought experiment: if you're someone who has used a calculator to calculate 20% tips your whole life, try to calculate one without it. Maybe you specifically don't struggle, because you're good at math or have a lot of math experience elsewhere, but if you have approached it the way this article is calling bad, you'd simply have no clue where to start.
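(For reference, the shortcut: move the decimal one place left to get 10%, then double it. On a $45 check, 10% is $4.50, so a 20% tip is $9.)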
cabacon · 43m ago
I guess my point is that the argument being made is "if you lift dumbbells with a forklift, you aren't getting strong by exercising". And that's correct. But that doesn't mean that the existence of forklifts makes us weaker.
So, I guess I'm just saying that LLMs are a tool like any other. Their existence doesn't make you worse at what they do unless you forgo thinking when you use them. You can use a calculator to efficiently solve the wrong equation - you have to think about what it is going to solve for you. You can use an LLM to make a bad argument for you - you have to think about what you're going to have it output for you.
I was just feeling anti-alarmist-headline - there's no intrinsic reason we'd get dumber because LLMs exist. We could, but I think history has shown that this kind of alarmism doesn't come to fruition.
chankstein38 · 31m ago
Fair! I'd definitely agree with that! I don't really know the author's intentions here, but my read of this article is that it's for the people who ARE skipping thinking entirely by using them. I agree completely; to me LLMs are effectively a slightly more useful (sometimes vastly more useful) search engine. They help me find out about features or mechanisms I didn't know existed and help demonstrate their value for me. I am still the one doing the thinking.
I'd argue we're using them "right" though.
tines · 48m ago
The analogy falls apart because calculating isn't math. Calculating is more like spelling, while math is more akin to writing. Writing and math are creative, spelling and calculating are not.
toss1 · 45m ago
>>What makes LLMs different?
Good question!
Writing or calculators likely do reduce our ability to memorize vast amounts of text or to do arithmetic in our heads. But to write, or to do math with writing and calculation, we still must fully load those intermediate facts into our brains and fully understand what was previously written down or calculated, in order to wield and wrangle it into a new piece of work.
In contrast, LLMs (unless used with great care, as only one research input) can produce a fully written answer without ever really requiring the 'author' to fully load the details of the work into their brain. LLMs basically reduce the task to editing, not writing. Editing is not the same as writing, so it is no surprise this study shows a serious inability to remember quotes from the "written" piece.
Perhaps it is similar to learning a new language, where we can read the new language at a higher level of complexity much sooner than we can write or speak it?
cabacon · 38m ago
I have a kid in high school who uses LLMs to get feedback on essays he has written. It will come back with responses like "you failed to give good evidence to support your point that [X]", or "most readers prefer you to include more elaboration on how you changed subject from [Y] to [Z]".
You (and another respondent) both cite the case where someone unthinkingly generates a large swath of text using the LLM, but that's not the only modality for incorporating LLMs into writing. I'm with you both on your examples, fwiw, I just think that only thinking about that way of using LLMs for writing is putting on blinders to the productive ways that they can be used.
It feels to me like people are reacting to the fact that we haven't figured out how to work LLMs into our pedagogy, and that their existence undermines certain ways we've become accustomed to measuring whether people have learned what we intended them to learn. There's certainly a lot of societal adaptation needed to put guardrails around their utility to us, but when I see "They will make us dumb!" it just sets off a contrarian reaction in me.
dothereading · 1h ago
I agree with this, but at the same time I think LLMs will make anyone who wants to learn much smarter.
BizarroLand · 36m ago
Dumb is more the inability to make expedient, salient, and useful decisions, whether from a lack of knowledge or from a fundamental incapability to process the available knowledge.
Dumb is accidental or genetic.
AI won't affect how dumb we are.
I think they will decrease the utility of crystallized knowledge skills and increase our fluid knowledge skills. Smart people will still find ways to thrive in the environment.
Human intelligence will continue moving forward.
blamestross · 1h ago
It's all about who "us" is.
Individuals? Most information technology makes us dumber in isolation, but with the tools we end up net faster.
The scary thing is that it's less about making things "better" than about making them cheaper. AI isn't winning on skill; it's winning on being "80% the quality at 20% the price."
So if you see "us" as the economic super-organism managed by very powerful people, then it makes us a lot smarter!
j45 · 1h ago
If it's doing the thinking for you, it's just like social media, but much more intense.
whydoineedthis · 51m ago
Similar fear mongering happened when calculators came about. No one got dumber; we just got faster at doing simple math. Working out complex math will always be interesting to those who really want to do it, and the rest likely won't contribute much anyway - they're just consumers. Let the kids have their wordy calculators; it may actually unblock critical paths of success needed for someone to really go deep.
BizarroLand · 32m ago
Yep. I force-memorized so many calculations because our teachers constantly told us that in the future we wouldn't always have a calculator with us.
It was helpful, I got pretty far along in collegiate math without tutors or assistance thanks to the hard calculation skills I drilled into my head.
But, counterpoint, if I leave my calculator/computer/all in one everything device at home on any given day it can ruin my entire day. I haven't gone 72 hours without a calculator in nearly a decade.