Accumulation of cognitive debt when using an AI assistant for essay writing task

238 stephen_g 138 6/16/2025, 2:49:58 AM arxiv.org ↗

Comments (138)

jsrozner · 11h ago
I wouldn't call it "accumulation of cognitive debt"; just call it cognitive decline, or loss of cognitive skills.

And also DUH. If you stop speaking a language you forget it. The brain does not retain information that it does not need. Anybody remember the couple studies on the use of google maps for navigation? One was "Habitual use of GPS negatively impacts spatial memory during self-guided navigation"; another reported a reduction in gray matter among maps users.

Moreover, anyone who has developed expertise in a science field knows that coming to understand something requires pondering it, exploring how each idea relates to other things, etc. You can't just skim a math textbook and know all the math. You have to stop and think. IMO it is the act of thinking which establishes the objects in our mind such that they can be useful to our thinking later on.

vishnugupta · 10h ago
> You can't just skim a math textbook and know all the math. You have to stop and think.

And most importantly you have to write. A lot. Writing allows our brain to structure our thinking. It enables us to have a structured dialogue with ourselves and to explore different paths. Thinking and pondering can only do so much and soon reach their limits. Writing, on the other hand, enables one to explore thoughts nearly endlessly.

Given that thinking is so intimately associated with writing (could be prose, drawing, equations, graphs/charts, whatever) and that LLMs are doing more and more of the writing, it'll be interesting to see the effect of LLMs on our cognitive skills.

larodi · 9h ago
The impact of writing is immensely undervalued. Even writing with a keyboard or screen is a lot more than not writing. Exercising writing on any topic is still beneficial, and you can find many psychologists recommending keeping a daily blog of some sort to help people observe themselves from the outside. The same goes for speaking, public speaking if you want, and therapeutic daily role-playing, which is also overlooked.

I’d love to see some sort of study on people who actively participate in writing their stuff on social media and those who don’t.

If you want to spare your mind from GPT numbness, write or copy what it tells you to do by hand; do not abandon this process.

Or just write code, programs, essays, poems for fun. Trust me, it is fun, and you'll get smarter and more confident. GPT is a very dangerous convenience gadget; like sugar or Netflix, or obesity or long commutes, it is not going away, but similarly, dosage and countermeasures are essential to cope with the side effects.

ToucanLoucan · 3h ago
The only writing I've ever used ChatGPT for is writing I openly don't give a shit about, and even then I constantly find myself prompting it to write less because holy shit do LLMs love to go on and on and on.

Like not only do I cosign all said above, but I will also add to this: brevity is the soul of wit and none of these fucking things are brief. No matter what you ask for you end up getting just paragraphs of shit to communicate even basic ideas. It's hard to not think this tool was designed from go to automate high school book reports.

I would only use these programs either to create these overly long, meandering, stupid emails, or to digest ones similarly sent to me (and make a mental note to reduce my interactions with that person).

It's no wonder the MBA class is fucking thrilled with it though, since the vast majority of their jobs seem to revolve around producing and consuming huge reports containing vacuously little.

supriyo-biswas · 10h ago
> And most importantly you have to write. A lot. Writing allows our brain to structure our thinking.

There's a lot of talk about AI assisted coding these days, but I've found similar issues where I'm unable to form a mental model of the program when I rely too much on them (amongst other issues where the model will make unnecessary changes, etc.). This is one of the reasons why I limit their use to "boring" tasks like refactoring or clarifying concepts that I'm unsure about.

> it'll be interesting to see the effect of LLMs on our cognitive skills.

These discussions remind me a lot of this comic [1].

[1] https://www.monkeyuser.com/2023/deprecated/

fatnoah · 4h ago
> And most importantly you have to write. A lot. Writing allows our brain to structure our thinking. It enables us to have a structured dialogue with ourselves.

I feel like it goes beyond writing to really any form of expressing this knowledge to others. As a grad student, I was a teaching assistant for an Electrical Engineering class I had failed as an undergrad. The depth of understanding I developed for the material over the course of supporting students in the class was amazing. I transitioned from "knowing" the material and equations to being able to generate them all from first principles.

Regardless, I fully agree that using LLMs as our form of expression will weaken both the ability to express ourselves AND the ability to develop deep understanding of topics as LLMs "think" for us too.

p_v_doom · 7h ago
Writing is pure magic. It allows so much reflection and so many insights that you wouldn't otherwise get. And writing as part of the reading process allows you to directly integrate what you are reading as you are doing it. I can't recommend it enough. The only downside is that it's slow compared to what people are used to and want to do, especially in the work environment.
Davidzheng · 10h ago
I disagree with this take. When exploring new math problems, it's often possible to explore the candidate solution paths at a lower technical level in your mind before writing anything down, before actually going into the details of an approach. I don't think not writing is that limiting if all of your approaches already fail before going into details, which is often the case in the early stages of math research.
hamdouni · 9h ago
I can also explore by writing. Writing drafts can help structure my thinking.
hyper57 · 9h ago
"The pen is an instrument of discovery rather than just a recording implement." ~ Billy Collins
Aeolun · 10h ago
> And most importantly you have to write. A lot.

I find this to still be true with AI assisted coding. Especially when I still have to build a map of the domain.

dr_dshiv · 9h ago
Prompting involves more than an insignificant amount of writing.
delusional · 9h ago
But it is not at all the same _type_ of writing. Most of the prompts I've seen and written are shorter, less organized, and most importantly not actually considered a piece of writing. When you are writing a prompt you are considering how the machine will "interpret" it and what it will spit back; you're not constructing an argument. Vagueness or dialectics in a prompt will often just confuse the machine.

Hitting the keys is not always writing.

dr_dshiv · 9h ago
Prompting is prewriting — which is very important and often neglected. With it, you are:

* Describing the purpose of the writing

* Defining the format of the writing

* Articulating the context

You are writing to figure out what you want.

teekert · 9h ago
I would call it cognitive debt. Have you ever tried writing a large report with an LLM?

It's very tempting to let it write a lot, let it structure things, let it make arguments and visuals. It's easy to let it do more and more... And then you end up with something that is very much... Not yours.

But your name is on it, you are asked to explain it, to understand it even better than it is written down. Surely the report is just a "2D projection" of some "high-dimensional reality" that you have in your head... right? Normally it is, but when you spit out a report in 1/10th of the time, it isn't. You struggle to explain concepts, even though they look nice on paper.

I found that I just really have to do the work, to develop the mental models, to articulate and to re-articulate and re-articulate again. For different audiences in different ways.

I like the term cognitive debt as a description of the gap between what mental models one would have to develop pre-LLMs to get a report out, and how little you may need with an LLM.

In the end it is your name on that report/paper; what can we expect of you, the author? Maybe that will start slipping and we will start expecting less over time? Maybe we can start skipping authors altogether and rely on the LLM's "mental" model when we have in-depth questions about a report/paper... Who knows. But different models (like LLMs) may have different "models" (predictive algorithms) of underlying truth/reality. What allows for the most accurate predictions? One needs a certain "depth of understanding". Writing while relying too much on LLMs will not give it to you.

Over time this may indeed lead to population-level "cognitive decline, or loss of cognitive skills." I don't dare to say that. Book printing didn't do that, although it was expected at the time by the religious elite; they worried that ordinary humans would not be able to interpret texts correctly.

As remarked here in this thread before, I really do think that "Writing is thinking" (but perhaps there is something better than writing which we haven't invented yet). And thinking is: Developing a detailed mental model that allows you to predict the future with a probability better than chance. Our survival depends on it, in fact it is what evolution is in terms of information theory [0]. "Nothing in biology makes sense except in the light of ... information."

[0] https://www.youtube.com/watch?v=4PCHelnFKGc

pilif · 10h ago
> The brain does not retain information that it does not need.

Why do I still know how to optimize free conventional memory in DOS by configuring config.sys and autoexec.bat?

I haven’t done this in two decades and I’m reasonably sure I never will again.

dotancohen · 10h ago
Probably because you learned it during that brief period in your development in which humans are most impressionable.

Now think about the effect on those humans currently using LLMs at that stage of their development.

reciprocity · 1h ago
I also think the claim that "the brain does not retain information it does not need" is an insufficient explanation, and short-sighted. As an example, reading books informs and shapes our thinking, and while people may not immediately recall a book that they read some time ago, I've had conversations where I remembered that I had read a particular passage (sentence, phrase, idea) and referred to it in the conversation.

People do stuff like that all the time, bringing up past memories spontaneously. The brain absolutely does remember things it "doesn't need".

fennecfoxy · 5h ago
The last fast food place you went to, what does the ceiling look like? The exact colour/pattern?

The last phone conversation you had with a utility company, how did they greet you exactly?

There's lots that we do remember, sometimes odd things like your example, though I'm sure you must have repeated it a few times as well. But there's so much detail that we don't remember at all, and even our childhood memories just become memories of memories - we remember some event, but we slowly forget the exact details, they become fuzzy.

nottorp · 8h ago
To nitpick, your subconscious is aware computers have memory constraints even now and you write better code because of it even if you do javascript...
rusk · 10h ago
Because these are core memories that provide stepping stones to later knowledge. It is a part of the story of you. It is very hard to integrate all knowledge in this way.
15123123 · 10h ago
I think it's because some experiences are so profound to your brain (first impressions, moments that you are proud of) that you just replay them over and over again.
Delphiza · 3h ago
MemMaker - a cheat, but it is still in my quick-access memory.
flomo · 8h ago
Probably because there was some reward that you felt at the time was important (most likely playing a DOS game).

I did this for a living at a large corp where I was the 'thinkpad guy', and I barely remember any of the tricks (and only some of the IBM stuff). Then Windows NT and 95 came out and it was like, who cares... This was always dogshit. Because I was always an Apple/Unix guy and that was just a job.

lelele · 9h ago
Agreed. We remember many things that don't serve us anymore.
this_steve_j · 6h ago
The terms “Cognitive decline” or “brain rot” may have sounded too sensational, and to be fair the authors note the limitations of the small sample size.

Indeed, the paper doesn’t provide a reference or citation for the term “cognitive debt”, so it is a strange title. Maybe a last-minute swap.

Fascinating research out of MIT. Like all psychology studies it deserves healthy scrutiny and independent verification. Bit of a kitchen sink with the imaging and psychometric assessments, but who doesn’t love a picture of “this is your brain on LLMs” amirite?

eru · 10h ago
> The brain does not retain information that it does not need.

Sounds very plausible, though how does that square with the common experience that certain skills, famously 'riding a bike', never go away once learned?

wahern · 10h ago
Closer to the truth is that the brain never completely forgets something, in the sense that there are always vestiges left over, even after the ability to recall or instantly draw upon it is long gone. Studies show, for example, that after one has "forgotten" a language, they're quicker to pick it up again later on compared to someone without that prior experience; how quickly is time-dependent, but more quickly nonetheless.

OTOH, IME the quickest way to truly forget something is to overwrite it. Photographs are a notorious example: looking at photographs can overwrite your own personal episodic memory of an event. I don't know how much research exists exploring this phenomenon, but AFAIU there are studies at least showing that the mere act of recalling can reshape memories. So, ironically, perhaps the best way not to forget is to not remember.

Left unstated in the above is that we can categorize different types of memory--episodic, semantic, implicit, etc--based on how they seem to operate. Generalizations (like the above ;) can be misleading.

KineticLensman · 4h ago
> Sounds very plausible, though how does that square with the common experience that certain skills, famously 'riding a bike', never go away once learned?

I worked with some researchers who specifically examined this when developing training content for soldiers. They found that 'muscle memory' skills such as riding a bike could persist for a very long time. At the other end of the spectrum were tasks that involved performing lots of technical steps in a particular order, but where the tasks themselves were only performed infrequently. The classic example was fault finding and diagnosis on military equipment. The researchers were in effect quantifying the 'forgetting curve' for specific tasks. For some key tasks, you could overtrain to improve the competence retention, but it was often easier to accept that training would wear off very quickly and give people a checklist instead.

eru · 1h ago
Very interesting! Thanks for bringing this up.
gwd · 7h ago
I think a better way to say it is that the brain doesn't commit to long term memory things that it doesn't need.

I remember hearing about some research they'd done on "binge watching" -- basically, if you have two groups:

1. One group watches the entire series over the course of a week

2. A second group watches a series one episode per week

Then some time later (maybe 6 months), ask them questions about the show, and the people in group 2 will remember significantly more.

Anecdotally, I've found the same thing with Scottish Country Dancing. In SCD, you typically walk through a dance that has 16 or so "figures", then for the next 10 minutes you need to remember the figures over and over again from different perspectives (as 1st couple, 2nd couple, 3rd couple etc). Fairly quickly, my brain realized that it only needed to remember the figures for 10 minutes; and even the next morning if you'd asked me what the figures were for a dance the night before I couldn't have told you.

I can totally believe it's the same thing with writing with an LLM (or having an assistant write a speech / report for you) -- if you're just skimming over things to make sure it looks right, your brain quickly figures out that it doesn't need to retain this information.

Contrast this to riding a bike, where you almost certainly used the skill repeatedly over the course of at least a year.

pempem · 10h ago
Such a good question - I hope someone answers with more than an anecdote (which is all I can provide). I've found the skills that don't leave you, like riding a bike, swimming, cooking, are all physical skills. Tangible.

The skills that leave (arguments, analysis, language, creativity) often seem abstract and primarily, if not exclusively, sourced in our minds.

hn_throwaway_99 · 10h ago
Google "procedural memory". Procedural memory is more resistant to forgetting than other types of memory.
eru · 9h ago
I guess speaking a language employs some mixture of procedural and other types of memory?
rusk · 10h ago
Riding a bike is a skill rather than what we would call a “memory” per se. It’s a skill that develops a new neural pathway throughout your extended nervous system bringing together the lesser senses of proprioception and balance. Once you bring these things together you then go on to use them for other things. You “know” (grok), rather than “understand” how a bike stays upright on a very deep physical level.
eru · 9h ago
Sure. But speaking a language is also (at least partially) a skill, ain't it?
rusk · 9h ago
It is. It’s also something you don’t forget except in extreme cases like dementia. Skills are different from facts but we use the word memory interchangeably for each. It’s this nuance of language that causes a category error in your reasoning ain’t it.
devmor · 2h ago
I am not an expert in the subject but I believe that motor neurons retain memory, even those not located inside the brain. They may be subject to different constraints than other neurons.
amelius · 3h ago
> You can't just skim a math textbook and know all the math.

Curious, did anyone try to learn a subject by predicting the next token, and how did it go?

jancsika · 10h ago
> And also DUH. If you stop speaking a language you forget it. The brain does not retain information that it does not need.

Except when it does-- for example in the abstract where it is written that Brain-to-LLM users "exhibited higher memory recall" than LLM and LLM-to-Brain users.

NetRunnerSu · 2h ago
The discussion here about "cognitive debt" is spot on, but I fear it might be too conservative. We're not just talking about forgetting a skill like a language or losing spatial memory from using GPS. We're talking about the systematic, irreversible atrophy of the neural pathways responsible for integrated reasoning.

The core danger isn't the "debt" itself, which implies it can be repaid through practice. The real danger is crossing a "cognitive tipping point". This is the threshold where so much executive function, synthesis, and argumentation has been offloaded to an external system (like an LLM) that the biological brain, in its ruthless efficiency, not only prunes the unused connections but loses the meta-ability to rebuild them.

Our biological wetware is a use-it-or-lose-it system without version control. When a complex cognitive function atrophies, the "source code" is corrupted. There's no git revert for a collapsed neural network that once supported deep, structured thought.

This HN thread is focused on essay writing. But scale this up. We are running a massive, uncontrolled experiment in outsourcing our collective cognition. The long-term outcome isn't just a society of people who are less skilled, but a society of people who are structurally incapable of the kind of thinking that built our world.

So the question isn't just "how do we avoid cognitive debt?". The real, terrifying question is: "What kind of container do we need for our minds when the biological one proves to be so ruthlessly, and perhaps irreversibly, self-optimizing for laziness?"

https://github.com/dmf-archive/dmf-archive.github.io

alex77456 · 37m ago
It's up to everyone to decide what to use LLMs for. For high-friction / low-throughput tasks (e.g., online research using inferior search tools), I find text models to be great: to ask about what you don't know, to skip the 'tedious part'. I don't feel that looking for answers, especially troubleshooting arcane technical issues among pages of forums or social media, makes me smarter in any way whatsoever, especially since the information usually needs to be verified and taken with a grain of salt.

StackExchange, the way it was meant to be initially, would be way more valuable than text models. But in reality people are imperfect and carry all sorts of cognitive biases and baggage, while an LLM won't close your question as 'too broad' right after it gets upvotes and user interaction.

On the other hand, I still find LLM writing on subjects familiar to me vastly inferior. Whenever I try to write, say, an email with its help, I end up spending just as much time either editing the prompt to keep it on track, or rewriting the result significantly afterwards. I'd rather write it on my own, with my own flow, than proofread/peer review a text model.

niemandhier · 10h ago
AI is the anti-Zettelkasten.

Rather than getting ever deeper insight into a subject matter by actively working on it, you iterate fast but shallow over a corpus of AI generated content.

Example: I wanted to understand the situation in the Middle East better, so I wrote a 10-page essay on the genesis of Hamas and Hezbollah using OpenAI as a co-writer.

I remember nothing. Worse, of the things I remember, I don't know which were hallucinations I fixed and which were actual facts.

energy123 · 10h ago
I'm on the optimistic side with how useful LLMs are, but I have to agree. You cultivate the instinct for how to steer the models and reduce hallucinations, but you're not building articulable knowledge or engaging in challenging thinking. It's more learning muscle-memory reactions to certain forms of LLM output that lean you towards trusting the output more, trying another prompting strategy, clearing context or not, and so on.

To the extent we can call it skill, it's probably going to be made redundant in a few years as the models get better. It gives me a kind of listlessness that assembly line workers would feel.

namaria · 9h ago
Maybe, much like we invented gyms to exercise after civilization made most physical labor redundant (at least in developed countries), we will see a rise of 'creative writing gyms' of some sort in the future.
nottorp · 8h ago
You tend to remember trouble more than things going smoothly, so I'd say you remember the parts you had to fix manually.
atoav · 10h ago
Most intelligent people are aware of the fact that writing is about thinking as much as it is about producing the written text.

LLMs can be great sparring partners for this, if you don't use them as a tool that writes for you, but as a tool that finds mistakes, points out gaps and errors (which you may or may not ignore), and helps in researching general questions about the world around you (always with caution and sources).

tkgally · 10h ago
The results are not surprising to me personally. When I have used AI to help with my own writing and translation tasks, I do not feel as mentally engaged with the writing or translation process as I would be if I were doing it all on my own.

But I have found using AI in other ways to be incredibly mentally engaging in its own right. For the past two weeks, I’ve been experimenting with Claude Code to see how well it can fully automate the brainstorming, researching, and writing of essays and research papers. I have been as deeply engaged with the process as I have ever been with writing or translating by myself. But the engagement is of a different form.

The results of my experiments, by the way, are pretty good so far. That is, the output essays and papers are often interesting for me to read even though I know an AI agent wrote them. And, no, I do not plan to publish them or share them.

SchemaLoad · 10h ago
I use AI tools for amusement and asking random questions, but for actual work, I basically don't use them at all. I wonder if I'll be part of the increasingly rare group who is actually able to do anything while the rest become progressively more incompetent.
barrenko · 10h ago
My nickel - we are in the primary stages of being given something like the famed "bicycle for the mind", an exoskeleton for the brain. At first when someone gives you a mech, you're like "woah, cool", let's see what it can do. And then you zip around, smash rocks, buildings, go try to lift the Eiffel.

After a while you get bored of it (duh), and go back to doing what you usually do, utilizing the "bicycle" for the kind of stuff you actually like doing, if it's needed, because while exploration is fun, work is deeply personal and meaningful and does not sustain too much exploration for too long.

(highly personal perspective)

audunw · 9h ago
The “bicycle for the mind” analogy is actually really good here, since bicycles and other transportation technology have made us increasingly weak, which has a negative impact on physical health. It has reached such a critical point that people are taking seriously the fact that we need physical exercise to be in good health. My company recently introduced 60 minutes a week of activity during work hours. It’s probably a good investment, since physical health affects performance and mental health.

Coming back to AI, maybe in the future we will need to take mental exercise as explicitly seriously as we take physical exercise now. Perhaps people will go to mental gyms. (That’s just a school, you may say, but I think the focus could be different: not having a goal to complete a class and then finish, but continuous mental exercises...)

rohansingh · 8h ago
> bicycles ... have made us increasingly weak

This is pretty difficult for me to buy. Cycling has been shown time & again to be a great way to increase fitness.

nottorp · 8h ago
> Cycling has been shown time & again to be a great way to increase fitness.

Compared to sitting on your butt in a car or public transport.

Perhaps not compared to walking everywhere and chasing the antelope you want to cook for lunch.

I think what he meant is that both bicycles and LLMs are a force multiplier and you still provide the core of the work, but not all of the work any more.

alex77456 · 31m ago
Cycling, in my experience, is usually way more intense than walking or even running/jogging. It just lets you cover a larger distance and gives you more control over how your energy is used.

With the example of LLMs, sure, you could cycle to the initial destination you were meant to walk to - write an article with its help, save a few hours and call it a day. Or you could cycle further and use the saved time to work on something a text model can't help you well with.

noobermin · 8h ago
I once had blood clots in my legs. I couldn't walk in the worst parts of it but cycling down the street was easier than walking for more than ten metres. It's better than sitting on your butt for hours on end, sure.
Todd · 11h ago
This is called cognitive offloading. Anyone who’s spent enough time working with coding assistants will recognize it.
esafak · 11h ago
Or working as an engineering manager.

It's the inevitable consequence of working at a different level of abstraction. It's not the end of the world. My assembly is rusty too...

15123123 · 10h ago
I don't think not using assembly is going to affect my brain / my life quality in any significant way, but not speaking / chatting with someone is.
tankenmate · 7h ago
But this is a strawman argument, it's not what the research is talking about.
nothrabannosir · 8h ago
If LLMs were as reliable as compilers we wouldn’t be checking in their output, and I’d be happy to forget all programming lore.

The “skill domain” with compilers is the “input”: that’s what I need to grok, maintain, and understand. With LLMs it’s the “output”.

Until that changes, you’re playing a dangerous game letting those skills atrophy.

jameson · 11h ago
> The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or ”opinions” (probabilistic answers based on the training datasets). This highlights a concerning evolution of the 'echo chamber' effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as “top” is ultimately influenced by the priorities of the LLM's shareholders [123, 125].
eru · 10h ago
> What is ranked as “top” is ultimately influenced by the priorities of the LLM's shareholders [123, 125].

As if that's anything new. There's the adage that's older than electronics, that freedom of the press is freedom for those who can afford to own a printing press.

> However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or ”opinions” (probabilistic answers based on the training datasets).

Reminds me of Plato's concern about reading and writing dulling your mind. (I think he had his sock puppet Socrates express the concern. But I could be wrong.)

namaria · 9h ago
> Reminds me of Plato's concern about reading and writing dulling your mind. (I think he had his sock puppet Socrates express the concern. But I could be wrong.)

Nope.

Read the dialogue (Phaedrus). It's about rhetoric and writing down political discourses. Writing had existed for millennia. And the bit about writing being detrimental is from a mythical Egyptian king talking to a god, just a throwaway story used in the dialogue to make a tiny point.

In fact the conclusion of that bit of the dialogue is that merely having access to text may give an illusion of understanding. Quite relevant and on point I'd say.

eru · 1h ago
> In fact the conclusion of that bit of the dialogue is that merely having access to text may give an illusion of understanding. Quite relevant and on point I'd say.

Well, so that's exactly my point: Plato was an old man who yelled at clouds before it was cool.

namaria · 48m ago
Wow.
dotancohen · 10h ago
Plato's sock puppet Socrates? I think that you and I have read different history books, or at least different books regarding the history of philosophy. That said, I would love to hear your perspective on this.
eru · 9h ago
> Plato's sock puppet Socrates?

See https://en.wikipedia.org/wiki/Socratic_problem

> Socrates was the main character in most of Plato's dialogues and was a genuine historical figure. It is widely understood that in later dialogues, Plato used the character Socrates to give voice to views that were his own.

However, have a look at the Wikipedia article itself for a more nuanced view. We also have some other writers with accounts of Socrates.

Sharlin · 9h ago
I presume they refer to the fact that Socrates is basically used as a rhetorical device in Plato’s writings, and it’s not entirely clear how much of the dialogues were Socrates’s thoughts and how much was Plato’s own.
eru · 1h ago
Yes, exactly.
falcor84 · 4h ago
I don't quite see their point. Obviously if you're delegating the task to someone/something then you're not getting as good at it as if you were to do it yourself. If I were to write machine code by hand, rather than having the compiler do it for me, I would definitely be better at it and have more neural circuitry devoted to it.

As I see it, it's much more interesting to ask not whether we are still good at doing the work that computers can do for us, but whether we are now able to do better at the higher-level tasks that computers can't yet do on their own.

devmor · 2h ago
Your question is answered by the study abstract.

> Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

falcor84 · 1h ago
But it's not that they "underperformed" at life in general - they underperformed when assessed on various aspects of the task that they weren't practicing. To me it's as if they ran a trial where one group played basketball, while another were acting as referees - of course that when tested on ball control, those who were dribbling and throwing would do better, but it tells us nothing about how those acting as referees performed at their thing.
devmor · 28m ago
I see what you’re getting at now. I agree I’d like to see a more general trial that measures general changes in problem solving ability after a test group is set at using LLMs for a specific problem solving task vs a control group not using them.
Kiyo-Lynn · 8h ago
When I write with AI, it feels smooth in the moment, but I’m not really thinking through the ideas. The writing sounds fine, but when I look back later, I often can’t remember why I phrased things that way.

Now I try to write my own draft first, then use AI to help polish it. It takes a bit more effort upfront, but I feel like I learn more and remember things better.

energy123 · 7h ago
The rule of thumb "LLMs are good at reducing text, not expanding it" is a good one here.
falcor84 · 58m ago
> "LLMs are good at reducing text, not expanding it"

You put it in quote marks, but the only search results are from you writing it here on HN. Obviously LLMs are extremely good at expanding text, which is essentially what they do whenever they continue a prompt. Or did you mean that in a prescriptive way - that it would be better for us to use it more for summarizing rather than expanding?
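
To illustrate the mechanism, here's a toy sketch of "continuing a prompt" by sampling next tokens (a bigram count table stands in for a real model; the corpus and names are made up):

    import random
    from collections import defaultdict

    # Count which token follows which in a tiny corpus. Real LLMs do the same
    # thing in spirit (repeatedly sample a plausible next token), just with a
    # neural network instead of a lookup table.
    corpus = "the cat sat on the mat and the cat slept on the sofa".split()
    followers = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev].append(nxt)

    def continue_prompt(prompt, n_tokens=5):
        tokens = prompt.split()
        for _ in range(n_tokens):
            options = followers.get(tokens[-1])
            if not options:
                break  # dead end: no known continuation
            tokens.append(random.choice(options))
        return " ".join(tokens)

    print(continue_prompt("the cat"))  # e.g. "the cat sat on the mat and"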

devmor · 2h ago
Probably interesting to note that this is almost always true of weighted randomness.

If you have something that you consider to be over 50% of the way to your desired result, reducing the space of the result has a higher chance of removing the negative factor than the positive.

In contrast, whenever the algorithm is less than 100% capable of producing the positive factor, adding to the result could always increase the negative factor more than the positive, given a finite time constraint (i.e., any reasonable non-theoretical application).

a_bonobo · 10h ago
I guess: not only does AI reduce the number of entry-level workers, this now shows that the entry-level workers who remain won't learn anything from their use of AI and will remain entry-level forever if they're not careful.
user453 · 3h ago
Interesting study but I don't really get the point of the search group. Looking at the essay prompts, they all seem like fluffy, opinion based stuff. How would you even use a search engine to help you in that case? Quote some guy who had an opinion? Personally I think my approach would be identical whether put in the web-search or the only-brain group.
Noelia- · 6h ago
After using ChatGPT a lot, I’ve definitely noticed myself skipping the thinking part and just waiting for it to give me something. This article on cognitive debt really hit home. Now I try to write an outline first before bringing in the AI. I do not want to give up all the control.
sachin_rcz · 2h ago
Would the cognitive decline from AI-assisted coding be higher than from an essay writing task? We can all see the effect on junior developers, but what about senior devs?
seanmcdirmid · 9h ago
My handwriting has suffered since I’ve relied heavily on keyboards for the last few decades. I can’t even produce a consistent signature anymore. My stick-shift skills also suffered when I used an automatic for so long (and now that I have an EV, I’m forgetting what gears are at all).

Rather than lament that the machine has gotten better than us at producing what were always mostly vacuous essays anyway, we have to instead look at more pointed writing tasks and practice those. Actually, I never really learned how to write until I hit grad school and had messages I actually wanted to communicate. Whatever I was doing before really wasn’t that helpful; it was missing focus. Having ChatGPT write an essay I don’t really care about only seems slightly worse than writing it myself.

rgoulter · 10h ago
From the summary:

"""Going forward, a balanced approach is advisable, one that might leverage AI for routine assistance but still challenges individuals to perform core cognitive operations themselves. In doing so, we can harness potential benefits of AI support without impairing the natural development of the brain's writing-related networks.

"""It would be important to explore hybrid strategies in which AI handles routine aspects of writing composition, while core cognitive processes, idea generation, organization, and critical revision, remain user‑driven. During the early learning phases, full neural engagement seems to be essential for developing robust writing networks; by contrast, in later practice phases, selective AI support could reduce extraneous cognitive load and thereby enhance efficiency without undermining those established networks."""

mmaunder · 9h ago
No one only uses an LLM for writing. We switch tools as needed to pull threads as they emerge. It’s like being told to explore a building without leaving a specific room.
disintegrator · 4h ago
It's somewhat disappointing to see a bunch of "well, duh" comments here. We're often asking for research and citations and this seems like a useful entry in the corpus of "effects of AI usage on cognition".

On the topic itself, I am very cautious about my use of LLMs. It breaks down into three categories for me: 1. replacing Google, 2. getting a first review of my work, and 3. taking away mundane tasks around code editing.

Point 3 is where I can become most complacent and increasingly miscategorize tasks as mundane. I often reflect after a day working with an LLM on coding tasks because I want to understand how my behavior is changing in its presence. However, I do not have a proper framework to work out "did I get better because of it or not".

I still believe we need to get better as professionals and it worries me that even this virtue is called into question nowadays. Research like this will be helpful to me personally.

xorokongo · 7h ago
Will we end up with a world where the only experts are LLM companies, holding a monopoly on thinking? Will future humans ever be as smart as us, or are we the peak of human intelligence? And can AI make progress without smart humans to provide training data, getting new insights and increasing its intelligence?
Frummy · 11h ago
I can’t believe riding a horse and carriage wouldn’t make you better at riding a horse. Sure a horserider wouldn’t want to practice the wrong way, but anyone else just wants to get somewhere
OhNotAPaper · 11h ago
> I can’t believe riding a horse and carriage wouldn’t make you better at riding a horse.

Surely you mean "would"? Because riding a horse and carriage doesn't imply any ability at riding a horse, but the reverse relation would actually make sense, as you already have historical, experiential, intimate knowledge of a horse despite no contemporaneous, immediate physical contact.

Similarly, already knowing what you want to write would make you more proficient at operating a chatbot to produce what you want to write faster—but telling a chatbot a vague sense of the meaning you want to communicate wouldn't make you better at communicating. How would you communicate with the chatbot what you want if you never developed the ability to articulate what you want by learning to write?

EDIT: I sort of understand what you might be getting at—you can learn to write by using a chatbot if you mimic the chatbot like the chatbot mimics humans—but I'd still prefer humans learn directly from humans rather than rephrased by some corporate middle-man with unknown quality and zero liability.

wcoenen · 10h ago
The first sentence in the comment you are responding to is sarcasm. Just replace "I can't believe" with "Of course".
OhNotAPaper · 7h ago
> The first sentence in the comment you are responding to is sarcasm. Just replace "I can't believe" with "Of course".

Do you have any evidence of this?

wcoenen · 6h ago
No, because of Poe's law only the author of the comment can confirm. But the analogy makes sense then:

"[Of course] writing an essay with chatgpt wouldn’t make you better at writing essays unassisted. Sure, a student wouldn’t want to practice the wrong way, but anyone else just wants to produce a good essay."

christophilus · 3h ago
It’s fairly obvious from the context.
apsurd · 11h ago
I didn't read the article, but come on: riding a horse to get to a destination is not remotely similar to writing an essay.

If you say it's a means to an end - to what, a good grade? - we've lost the plot long ago.

Writing is for thinking.

adeon · 11h ago
The task of riding a horse can be almost entirely outsourced to professional horse riders. If they take your carriage from point A to point B, sure, you care about just getting somewhere.

Taking the article's task of essay writing: someone presumably is supposed to read them. It's not a carriage task from point A to point B anymore. If the LLM-assisted writers begin to not even understand their own work (quoting from abstract "LLM users also struggled to accurately quote their own work.") how do they know they are not putting out nonsense?

eru · 10h ago
> If the LLM-assisted writers begin to not even understand their own work (quoting from abstract "LLM users also struggled to accurately quote their own work.") how do they know they are not putting out nonsense?

They are trained (amongst other things) on human essays. They just need to mimic them well enough to pass the class.

> Taking the article's task of essay writing: someone presumably is supposed to read them.

Soon enough, that someone is gonna be another LLM more often than not.

bakugo · 10h ago
You know the AI-induced cognitive decline is already well under way when people start comparing writing an essay to riding a horse.
namaria · 9h ago
Horse riding was invented much later than carriages, and it revolutionized warfare.
gnabgib · 9h ago
Can you point at some references? Horse riding started around 3500 BC [0], while horse carriages started around 100 BC [1], and oxen/buffalo-drawn devices around 3000 BC [1].

[0]: https://en.wikipedia.org/wiki/Equestrianism

[1]: https://en.wikipedia.org/wiki/Carriage

namaria · 8h ago
From the article [0] you linked:

"However, the most unequivocal early archaeological evidence of equines put to working use was of horses being driven. Chariot burials about 2500 BC present the most direct hard evidence of horses used as working animals. In ancient times chariot warfare was followed by the use of war horses as light and heavy cavalry."

Long discussion in History Exchange about dating the cave paintings mentioned in the wikipedia article above:

https://history.stackexchange.com/questions/68935/when-did-h...

gnabgib · 8h ago
Well, exactly... a millennium after being ridden (3500 BC) they were used as beasts of burden (2500 BC)... rather the opposite of your claim.
namaria · 8h ago
The 3500 BCE date for horse riding is speculative and poorly supported by evidence. I thought the language in the bit I pasted made that clear. "Horses being driven" means attached to chariots, not ridden.

Unless you want to date the industrial revolution to 30 BCE, when Vitruvius described the aeolipile, we should talk about the evidence of these technologies' impact on society. For chariots that would be 1700 BCE, and for horseback riding well into the Iron Age, ~1000 BCE.

eesmith · 8h ago
I think you are reading "carriage" too specifically, when I suspect it's meant as a wider term for any horse-drawn wheeled vehicle.

Your [0] says "Chariot burials about 2500 BC present the most direct hard evidence of horses used as working animals. In ancient times chariot warfare was followed by the use of war horses as light and heavy cavalry.", just after "the most unequivocal early archaeological evidence of equines put to working use was of horses being driven."

That suggests the evidence is stronger for cart use before riding.

If you follow your [1] link to "bullock cart" at https://en.wikipedia.org/wiki/Bullock_cart you'll see: "The first indications of the use of a wagon (cart tracks, incisions, model wheels) are dated to around 4400 BC[citation needed]. The oldest wooden wheels usable for transport were found in southern Russia and dated to 3325 ± 125 BC.[1]"

That is older than 3000 BC.

I tried but failed to find something more definite. I did learn from "Wheeled Vehicles and Their Development in Ancient Egypt – Technical Innovations and Their (Non-) Acceptance in Pharaonic Times" (2021) that:

> The earliest depiction of a rider on horseback in Egypt belongs to the reign of Thutmose III.80 Therefore, in ancient Egypt the horse is attested for pulling chariots81 before it was used as a riding animal, which is only rarely shown throughout Pharaonic times.

I also found "The prehistoric origins of the domestic horse and horseback riding" (2023) referring to this as the "cart before the horse" vs. "horse before the cart" debate, with the position that there's "strong support for the “horse before the cart” view by finding diagnostic traits associated with habitual horseback riding in human skeletons that considerably pre-date the earliest wheeled vehicles pulled by horses." https://journals.openedition.org/bmsap/11881

On the other hand, "Tracing horseback riding and transport in the human skeleton" (2024) points out "the methodological hurdles and analytical risks of using this approach in the absence of valid comparative datasets", and also mentions how "the expansion of biomolecular tools over the past two decades has undercut many of the core assumptions of the kurgan hypothesis and has destabilized consensus belief in the Botai model." https://www.science.org/doi/pdf/10.1126/sciadv.ado9774

Quite a fascinating topic. It's no wonder that Wikipedia can't give a definite answer!

kanodiaashu · 10h ago
Well, on the flipside of writing with AI, I've been making an app to read papers with AI! https://www.proread.ai/community/ab7bd00c-e017-4de2-b6fb-502... ; Please give me feedback if you try it!
smcleod · 10h ago
Quite a nice interface, it reminds me of the static sites that Perplexity builds in Labs mode. Is it open source?
kanodiaashu · 9h ago
Thank you! It's not open source, no. I need to check those out; I have not.
paradite · 10h ago
The results are not surprising, but it's good to have these findings formalized as publications, so that we (or LLMs) can refer to them as ground truth in the future.
bsenftner · 4h ago
Well, duh. Writing is thinking ordered, and thinking in your mind is not ordered unless one has specific training that organizes and orders their thinking - and even then it requires effort to maintain an organized perception. That is why we write: writing is our thoughts organized and frozen in an order that will remain in order when related; without writing as the communications foundation, the ideas/concepts would drift. Using an LLM to write is using an LLM to think for you, and unless you then double your work by validating what was written, you are just adding work that relegates your mind to a janitor cleaning up after the LLM.

It is absolutely possible to use LLMs when writing essays, but do not use them to write! Use them to critique what you yourself with your own mind wrote!

satisfice · 1h ago
I am just finishing a book that took about two years to write. I thought I would be done a year ago. It’s been a slog.

So now I am in the final editing stage, and I am going back over old writing that I don’t remember doing. The material has come together over many many drafts, and parts of it are still not quite consistent with other parts.

But when I am done, it will be mine. And any mistakes will be honest ones that represent the real me. That’s a feeling no one who uses AI assistance will ever have.

I have never and will never use AI to write anything for me.

cleandreams · 9h ago
A paper to make the teachers I know weep.
ninetyninenine · 9h ago
The next generation of programmers will be stupider than the current generation thanks to LLMs. That means ageism will become less and less prevalent.

"Look at that old timer! He can code without AI! That's insane!"

tguvot · 11h ago
Now, let's do the same exercise but with programming and over a longer period of time.

I would really like to present it to management that pushes AI assistance for coding.

throwawaygmbno · 10h ago
This opinion is the exact thinking that has led to the massive layoffs in the design industry. Their jobs are being destroyed because they think lawsuits and the current state of the art will show they are right. These models actually can't produce unique output, and if you use them for ideation they only help you get to already-solved problems.

But engineers aren't being fired in droves, because we have adapted. The human can still break down the problem, tell the LLM to come up with multiple different ways of solving it, throw all of them away and ask for more. My most effective use is usually looking at what I would do normally, breaking it down, and then asking for it in chunks that make sense and touch multiple places, then the coding details. It's just a shift in thinking, like knowing when to copy and paste versus being DRY.

Designers are screwing themselves right now, waiting for case law and shaming the tools that would let them boost their productivity, instead of using their talents to make the one unique thing not in the training set.

It will be a competitive advantage in the future over short-sighted companies that took humans out of the loop completely, but any company not using the tech at all will be like horseshoe makers, unworried because of all the mechanical issues with horseless carriages.

AnimalMuppet · 55m ago
If by "cognitive debt", you mean "you don't really understand the code of the application that we're trying to extend/maintain", then yes, it's almost certainly going to apply to programming.

If I write the application, I have an internal map that corresponds (more or less) to what's going on in the code. I built that map as I was writing it, and I use that map as I debug, maintain, and extend the application.

But if I use AI, I have much less clear of a map. I become dependent on AI to help me understand the code well enough to debug it. Given AI's current limitations of actually understanding, that should give you pause...

OhNotAPaper · 10h ago
> AI assistance for coding

I honestly think it's going to take a decade to define this domain, and it's going to come with significant productivity costs. We need a git-like mechanism to prevent LLMs from stabbing themselves in the face. At that point you can establish an actual workflow for unfucking agents when they inevitably fuck themselves. After some time and some battery of testing you can also automate this process. This will take time, but eventually, one day, you can have a tedious process of describing an application you want to use over and over again until it actually works... on some level, not guaranteed to be anything close to the quality of hand-crafted apps (which is in line with the transition from assembly to high-level languages, and now to whatever the fuck you want to call the katamari-damacy zombie that is the browser).

eru · 10h ago
> I would really like to present it to management that pushes AI assistance for coding

Your management presumably cares more about results than about your long-term cognitive decline?

ezst · 10h ago
Good of you to suppose that engineers' cognitive decline doesn't translate into long-term, impactful business challenges as well. I mean, once you truly don't know your product and its capabilities any longer, what's left for you to "sell"?
eru · 9h ago
To quote myself:

> Companies don't own employees: workers can leave at any time.

> Thus protecting employees productivity in the long run doesn't necessarily help the company. (Unless you explicitly set up contracts that help there, or there are strong social norms in your place around this.)

ezst · 7h ago
You are talking about productivity; I'm talking about knowledge. You may come up with a product, then fire all the engineers who built it. Then what? It's not sustainable for a business to start from scratch every other year. Your LLM won't be a substitute for owning your product.
eru · 1h ago
Your workers can still quit, and take their knowledge with them.
tguvot · 10h ago
I guess one of the questions is how quickly cognitive decline sets in and how it influences system stability (we have a big system with a very high SLA due to its nature, and it takes some serious cognitive ability to reason about its operation).

If today's productivity comes at the cost of longer-term stability, I am not sure that's a risk they would like to take.

eru · 10h ago
Companies don't own employees: workers can leave at any time.

Thus protecting employees productivity in the long run doesn't necessarily help the company. (Unless you explicitly set up contracts that help there, or there are strong social norms in your place around this.)

tguvot · 7h ago
I am not talking about productivity. I am talking about quality and knowledge.
eru · 1h ago
Your workers can still quit, and take their knowledge with them.
yifanl · 52m ago
You can put effort in making workers not want to quit.
raincole · 10h ago
Your management probably believe there will be no "longer period" of programming, as a career option.
devjab · 9h ago
I don't think that research will show what you're hoping it would. I'm not a big proponent of AI (you needn't go through my history, but it's there to back up my statement if you're bored). Anyway, even I find it hard to argue against AI agents for productivity, but I think it depends a lot on how you use them. As an anecdotal example, I mainly work with Python, C and Go, but once in a while I also work with Typescript and C#. I've got 15 years of experience with js/ts, but when I've been away from it for a month it's not easy for me to remember the syntax, and before AI agents I'd need to go to https://developer.mozilla.org/en-US/docs/Web/JavaScript or similar quite a lot when I jumped back into it. AI agents let me do the same thing much quicker.

These AI agent tools can turn your intent into code rather quickly, and at least for me, quicker than I often can. They do it rather unintrusively, with little effort on your part, and they present it with nice little pull-request-lite functionality.

The key "issue" here, and probably what this article is more about is that they can't reason as you likely know. The AI needs me to know what "we" are doing, because while they are good programmers they are horrible software engineers. Or in other words, the reason AI agents enhance my performance is because I know exactly what and how I want them to program something and I can quickly assess when they suck.

Python is a good language for examples of how they can quickly fail you if you don't actually know Python. When you want to iterate over something, you have to decide whether you want to do it in memory or not. In C#'s LINQ this is relatively cleanly presented to you with IEnumerable and IQueryable, which work and look the same. In Python, however, you're often going to want to use a generator, which looks nothing like simply looping over a list (see the sketch below). It's also something many Python programmers have never even heard about, similar to how many haven't heard about __slots__ or even dataclasses. If you don't know what you're doing, you'll quickly end up with Python that works but doesn't scale, and when I say scale I'm not talking Netflix, I'm talking looping over a couple of hundred thousand items without paying a ridiculous amount of money for cloud memory. This is very anecdotal, but I've found that LLMs are actually quite good at recognizing how to iterate in C# and quite terrible in both Python and Typescript, despite LLMs generally (again, in my experience) being much worse at writing C#. If that isn't just anecdotal, then I guess they truly are what they eat.
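
To make the list-versus-generator point concrete, here's a minimal sketch (the numbers are made up and purely illustrative):

    # Builds the entire list in memory before summing: fine for small inputs,
    # but the backing list grows with the input size.
    total_eager = sum([n * n for n in range(500_000)])

    # A generator expression yields one value at a time, so memory use stays
    # flat no matter how many items you iterate over.
    total_lazy = sum(n * n for n in range(500_000))

    assert total_eager == total_lazy  # same result, different memory profiles

Both compute the same sum; only the second avoids materializing all the intermediate values at once.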

Anyway, I think similar research would show that AI is great for experienced software engineers and terrible for learning. What is worse is that I think it might show that a domain expert like an accountant might be better at building software for their domain with AI than an inexperienced software engineer.

darkstar_16 · 6h ago
You're proving the point made in the actual research. Programmers who only use AI for learning/coding will lose the knowledge (of Python, for example) that you gained by actually "doing" it.
devjab · 3h ago
I thought I pretty clearly stated that I was already losing that knowledge long before AI. I guess time will tell if I will lose even more with agents, but I frankly doubt that is possible.
tguvot · 6h ago
I'll add this quote from the article:

Perhaps one of the more concerning findings is that participants in the LLM-to-Brain group repeatedly focused on a narrower set of ideas, as evidenced by n-gram analysis (see topics COURAGE, FORETHOUGHT, and PERFECT in Figures 82, 83, and 85, respectively) and supported by interview responses. This repetition suggests that many participants may not have engaged deeply with the topics or critically examined the material provided by the LLM.

When individuals fail to critically engage with a subject, their writing might become biased and superficial. This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.

Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity. When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives.

tguvot · 9h ago
The point of the article is that people who use AI to accomplish work experience measurable cognitive decline compared to those who don't.
ivape · 11h ago
Why not try it for social media? There’s got to be the world’s largest class action lawsuit if we can get some science behind what that industry has done.
OhNotAPaper · 10h ago
> There’s got to be the world’s largest class action lawsuit

You'd have to articulate harm, so this is basically dead in the water (in the US). Good luck.

wcfrobert · 10h ago
The results are obviously predictable, but it's nice that the authors took the time to prove a thing everyone already knows to be true with the rigors of science.

I wonder how the participants felt writing an essay while being hooked up to an EEG.

namaria · 9h ago
[flagged]
tomhow · 8h ago
Please don't do this here. If a comment seems unfit for HN, please flag it and email us at hn@ycombinator.com so we can have a look.

We detached this comment from https://news.ycombinator.com/item?id=44287157 and marked it off topic.

namaria · 46m ago
I did not say it was unfit and I don't see how discussing writing styles and the influence of LLMs on it is off topic on a thread about the effects of LLMs on cognition.

I don't believe I was impolite or making a personal attack. I had a relevant point and I made it clearly and in a civil manner. I strongly disagree with your assessment.

stephen_g · 8h ago
Really? You claim that praising an analogy would never happen in normal conversation before 2022? Seems fairly normal to potentially start with "that's a good way of putting it, but [...]" since forever...
namaria · 8h ago
I claim specifically that "I love this analogy" and "I love your analogy" have become noticeably more common in HN since 2022.