Your Brain on ChatGPT

118 points | msyvr | 6/18/2025, 6:47:44 AM | media.mit.edu

Comments (104)

supriyo-biswas · 8h ago
Davidzheng · 7h ago
This was discussed only two days ago: https://news.ycombinator.com/item?id=44286277
relaxing · 4h ago
Why did the posting two days ago omit the first part of the title?
tomhow · 2h ago
The submitter chose the title but they were right to do so.

The full title of the paper is "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task". It exceeds the 80 character limit for HN titles, so something had to be cut. They cut the first part, which is the baitier and less informative part.

The phrase "This is your brain on ..." is from an old anti-drugs campaign, and is deliberately chosen here to draw parallels between the effects of drugs and chatbots on the brain. It's fine for the authors to do that in their own title but when something has to be cut from the title for HN, that's the right part to cut.

jwblackwell · 8h ago
One slightly unexpected side effect of using AI to do most of my coding now is that I find myself a lot less tired and can focus for longer periods. It's enabled me to get work done while faced with other distractions. Essentially, offloading some mental capacity to AI frees up capacity elsewhere.
delegate · 7h ago
I find the opposite to be true. I am a lot more productive, so I work on more things in parallel, which makes me extremely tired by the end of the day, as if my brain worked at 100% capacity.
jwblackwell · 7h ago
Yeah I do feel the pressure to run multiple instances of Claude Code now. Haven't really managed to find a good workflow, I find I just get too distracted swapping between tasks and then probably end up working slower than if I had just stayed in one IDE instance
xiphias2 · 7h ago
Codex is the perfect workflow for me: instead of swapping, just accept / reject cls / refine tasks
michelsedgh · 7h ago
Yeah, and after a few days of this, I find I can't do anything and stop all the side projects for a few days until I'm recharged again and can get back to it.
benterix · 7h ago
I bet we will all need a new type of therapy for that at some point in the future.
BoorishBears · 7h ago
On one hand, I've found that it reduces acute fatigue, but on the other I've found there's also an inflection point where it can encourage more fatigue over longer time horizons if you're not careful.

In the past, something like an unexpected error or having to look at some docs would often act as a "speed bump" and let me breathe, and typically from there I'd acknowledge how tired I was and stop for the moment.

With AI those speed bumps still exist, but there's sometimes just a bit of extra momentum that keeps me from slowing down enough to have that moment of reflection on how exhausted I am.

And the AI doesn't even have to be right for that to happen: sometimes just reading a suggestion that's specific to the current situation can trigger your own train of thought that's hard to rein back in.

DocTomoe · 7h ago
I like to think of AI as cars:

You can go to the Walmart outside town on foot. And carry your stuff back. But it is much faster - and less exhausting - to use the car. Which means you can spend more quality time on things you enjoy.

chneu · 7h ago
There are detriments to this as well.

Exercise is good.

Being outside is good.

New experiences happen when you're on foot.

You see more things on foot.

Etc etc. We make our lives way too efficient and we atrophy basic skills. There are benefits to doing things manually. Hustle culture is quite bad for us.

Going by foot or bicycle is so healthy for us for a myriad of reasons.

ricardobeat · 7h ago
While this is absolutely true, “walking to Walmart” is a terrible example due to the lack of pedestrian infrastructure and distances involved :)
ivan_gammel · 7h ago
It is actually a great example. If the only way to get to Walmart is by car, it’s a sign of bigger problems with urban planning. The car is a hotfix that becomes an environmental and public health problem. We shouldn’t try to get rid of cars or individual homes, but there are plenty of other, healthier ways of living (and the most livable cities in the world focus on them, not on car-powered suburbia). Same with AI: if it becomes the only way for people to achieve results, it may point not to complexity necessitating an advanced tool, but to problems with education, developer experience, etc. AI becomes a hotfix for things that could be fixed with more traditional approaches, in a more sustainable way.
Etheryte · 7h ago
I think in a way this is a good analogy, because it also includes the downside. If you always drive everywhere and do everything by car, your health will suffer due to lack of physical activity.
xnorswap · 7h ago
And you'll randomly kill people on your way there.

( Of course dear reader, YOU won't randomly kill people because you're a "good driver". )

AnthonyMouse · 7h ago
Do you really think it's random? 30% of car accidents are from drunk drivers. Some large fraction of the remainder are a result of some other impairment or distraction. Don't drive on two hours of sleep and don't text behind the wheel and your chances of hitting someone go way down.

And it will be the same thing with AI. You want to ask it a question that you can verify the answer to, and then you actually verify it? No problem. But then you have corporations using it for "content moderation" and end up shadow banning actual human beings when it gets it wrong, and then those people commit suicide because they think no one cares about them when it's really that the AI wrongly pegs them as a bot and then heartlessly isolates them from every other living person.


lm28469 · 7h ago
You got it backwards, there wouldn't be a walmart outside of town if there were no cars, you'd walk to the local butcher/baker/whatever in <10min.
xixixao · 7h ago
Where you’d be able to afford much less (not judging the trade-off, but that’s the primary reason why the US is headed in the opposite direction).
looofooo0 · 7h ago
Not true: food shopping costs in Germany are way lower than in the USA, and in my dense neighborhood (>20,000 people/km²) I have 3 supermarkets (also a baker, butcher, etc.) within a 5 min bicycle ride.
lm28469 · 7h ago
> Where you’d be able to afford much less

When 75% of the West is overweight or obese, and when the leading causes of death are quite literally sloth and gluttony, I think I'd take my chances... We're drowning in an insane quantity of low-quality food and gadgets.

ricardobeat · 7h ago
I thought the US went in the opposite direction because of ruthless corporate profit optimization, zoning rules, and city planning that fuels suburban sprawl?

Economies of scale do mean you can get a fluffy blanket imported from China at $5, less than the cost of a coffee at Starbucks, but for food necessities Walmart isn’t even that cheap or abundant compared to other chains.

DocTomoe · 6h ago
Yes, but there are cars. That genie has already escaped its bottle.

And you pay small local stores with higher prices - which leads more people, even in small towns with local butchers and bakers, to get into their ride and go to the Lidl or Aldi on the outskirts.

Much like companies will realise LLM-using devs are more efficient by some random metric (do I hear: Story points and feature counts?), and will require LLM use from their employees.

KronisLV · 7h ago
That’s a nice analogy! Though one might argue that the walk in and of itself would be good for your health (as evidenced by me putting on some weight after replacing my 30-minute daily walk to the office with working remotely).

One could also do the drive (use AI) and then get some fresh air afterwards (personal projects, code golf, solving interesting problems), but I don’t think everyone has the willpower for that, or the desire to consider it.

lucideer · 7h ago
Oh man. I really hope AI doesn't do as much harm to us as cars have.
gherkinnn · 7h ago
At the price of a sluggish, atrophied body.
chii · 7h ago
which is why people now pay $100 a month for a gym membership.
gherkinnn · 7h ago
Save an hour a day to spend it in the gym. That's what I call a bargain.
dr_dshiv · 7h ago
That’s why we need an AI infrastructure like Amsterdam’s, where you can bike everywhere. It’s faster and more convenient than a car for most trips and keeps everyone fit and happy.
laurentiurad · 7h ago
this analogy is flawed to its core. The car doesn't make you forget how to walk, because you are still forced to walk in certain circumstances. Delegating learning to an LLM will increase your reliance on it, and will eventually affect the way you learn. A better analogy is the usage of GPS: if you use it continuously, you will become dependent on it to get anywhere, and lose the capacity to find places on your own.
nlnn · 7h ago
The problem is that when it's for work, the company now knows you have access to a car, so sends you on 20x the trips. You have no more quality time, and your physical health suffers from lack of exercise.
DocTomoe · 6h ago
Which is exactly why many jobs actively require a driver's license where I live.

The car analogy has that covered already. When Gutenberg was printing bibles, those things sold like warm bread rolls - these days, printing books is barely profitable. The trick with new disruptive tech is always to be an early adopter - not the long tail.

nlnn · 2h ago
Yeah, I wasn't disputing the car analogy, more the benefits. If I'm using GPT to benefit myself (e.g. working on a side project), that's great and saves me time to do other things. If I'm using it to benefit my employer, I won't save any time, they'll fill it with other things to do, or expect me to be X times as productive in the same time.
monegator · 7h ago
In this context: Brain-only is going on foot/bike. Search Engine is by car. LLM is direct delivery to the home, with the clerk packing your groceries (and making the choices for you).
looofooo0 · 7h ago
Car culture societies are as bad as smoking for general health.
moffkalast · 7h ago
And then you drive to the gym to run on a treadmill for two hours.
DocTomoe · 6h ago
Enter "Coding Bootcamps" and "Hackathons"
pcwelder · 7h ago
Back when GANs were popular, I'd train generator-discriminator models for image generation.

I thought a lot about it and realised discriminating is much easier than generating.

I can discriminate good vs bad UI for example, but I can't generate a good UI to save my life. I immediately know when a movie is good, but writing a decent short story is an arduous task.

I can determine the degree of realism in a painting, but I can't paint a simple bicycle to convince a single soul.

We can determine if an LLM generation is good or bad in a lot of cases. As a crude strategy then we can discard bad cases and keep generating till we achieve our task. LLMs are useful only because of this disparity between discrimination vs generation.

These two skills are separate. Generation skills are hard to acquire and very valuable. They will atrophy if you don't keep exercising those.
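The "discard bad cases and keep generating" strategy described above is essentially rejection sampling. A minimal sketch, where `generate` and `is_good_enough` are hypothetical stand-ins for an LLM call and the (much easier) discrimination step:

```python
import random

def generate():
    # Stand-in for an LLM call: returns a draft with a random quality score in [0, 1).
    return random.random()

def is_good_enough(draft, threshold=0.9):
    # Stand-in for discrimination: judging a draft is easier than producing one.
    return draft >= threshold

def generate_until_accepted(max_tries=1000):
    # The crude strategy from the comment: keep generating, discard bad cases.
    for _ in range(max_tries):
        draft = generate()
        if is_good_enough(draft):
            return draft
    raise RuntimeError("no acceptable draft found")

print(generate_until_accepted())  # some draft that passed the filter
```

The loop only terminates usefully because discrimination is cheap relative to generation; if judging a draft were as hard as writing one, the disparity the comment relies on would vanish.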

tasn · 6h ago
I think this is true for the very simple cases, for example an obviously bad picture vs. a good one.

I don't think this is necessarily true for more complex tasks, especially not in areas that require deep evaluation. For example, reviewing 5 non-trivial PRs is probably harder and more time-consuming than writing them yourself.

The reason why it works well for images and short stories is because the filter you are applying is "I like it, vs. I don't like it", rather than "it's good vs. it's not good".

keithwhor · 8h ago
I think it's likely we learn to develop healthier relationships with these technologies. The timeframe? I'm not sure. May take generations. May happen quicker than we think.

It's clear to me that language models are a net accelerant. But if they make the average person more "loquacious" (first word that came to mind, but also lol) then the signal for raw intellect will change over time.

Nobody wants to be in a relationship with a language model. But language models may be able to help people who aren't otherwise equipped to handle major life changes and setbacks! So it's a tool - if you know how to use it.

Let's use a real-life example: relationship advice. Over time I would imagine that "ChatGPT-guided relationships" will fall into two categories: "copy-and-pasters", who are just adding a layer of complexity to communication that was subpar to begin with ("I just copied what ChatGPT said"), and "accelerators", who use ChatGPT to analyze their own and their partner's motivations to find better solutions to common problems.

It still requires a brain and empathy to make the correct decisions about the latter. The former will always end in heartbreak. I have faith that people will figure this out.

falcor84 · 7h ago
>Nobody wants to be in a relationship with a language model.

I'm not sure about that. I don't have first- or second-hand experience with this, but I've been hearing about a lot of cases of people really getting into a sort of relationship with an AI, and I can understand a bit of the appeal. You can "have someone" who's entirely unjudgemental, who's always there for you when you want to chat about your stuff, and who isn't ever making demands of you. It's definitely nothing close to a real relationship, but I do think it's objectively better than the worst of human relationships, and is probably better for your psyche than being lonely.

For better or for worse, I imagine that we'll see rapid growth in human-AI relationships over the coming decade, driven by improvements in memory and long-term planning (and possibly robotic bodies) on the one hand, and a growth of the loneliness epidemic on the other.

santiagobasulto · 7h ago
Wasn't THE SAME said when Google came out? That we were not remembering things anymore and we were relying on Google? And also with cellphones before that (even the big dummy brickphones), that we were not remembering phone numbers anymore.
gamerDude · 7h ago
And this is exactly what this study showed too.

"Brain connectivity systematically scaled down with the amount of external support: the Brain‑only group exhibited the strongest, widest‑ranging networks, Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling."

ansc · 7h ago
Yes, that was true though, wasn't it? If this is also true, what does that imply?
nottorp · 7h ago
Yes but your cell phone contacts don't have a chance to call a completely different number out of thin air once in a while.

At least for now, while Apple and Google haven't put "AI" in the contacts list. Can't guarantee tomorrow.

falcor84 · 7h ago
That would actually be an amazing feature. Like in those movie meet-cutes where the person you were supposed to meet doesn't show up, and instead you make a connection with a random person.
nottorp · 7h ago
Those services are available already, but the random person at the other end is "AI" generated :)
bodge5000 · 7h ago
A comment on another similar thread pointed out it goes as far back as Socrates saying that writing things down means you're not exercising your brain, so you're right, this is the same old argument we've heard for years.

The question is, were they wrong? I'm not sure I could continue doing my job as a SWE if I lost access to search engines, and I certainly don't remember phone numbers anymore. As for Socrates, we found that the ability to forget about something (while still maintaining some record of it) was actually a benefit of writing, not a flaw. I think in all these cases we found that to some extent they were right, but either the benefits outweighed the cost of reliance, or the cost was the benefit.

I'm sure each one had its worst-case scenario where we'd all turn into brainless slugs offloading all our critical thinking to the computer or the phone or a piece of paper, and that obviously didn't happen, so it might not here either. But there's a good chance we will lose something as a result of this, and the question is whether the benefits still outweigh the costs.

FranzFerdiNaN · 7h ago
Plato was already worried that the written word caused people to forget things (although his main complaint was that words can't answer back the way a person can in a dialogue).
dyauspitr · 7h ago
Google was like a faster library. ChatGPT just does most of the work for you.
AnthonyMouse · 7h ago
It's the doing the work for you which is the trouble.

Suppose you want to know how some git command works. If you have to read the manual to find out, you end up reading about four other features you didn't know existed before you get to the thing you set out to look for to begin with, and then you have those things in your brain when you need them later.

If you can just type it into a search box and it spits back a command to paste into the terminal, it's "faster" -- this time -- but then you never actually learn how it works, so what happens when you get to a question the search box can't answer?

alganet · 7h ago
Their results support this. The study has three groups: LLM users, Search Engine users and Brain only.

In terms of connections made, Brain Only beats Search User, Search User beats LLM User.

So, yes. If those measured connections mean something, it's the same but worse.

hkon · 7h ago
I don't remember phone numbers.

I remember where I can get information on the internet, not the information itself. I rely on google for many things, but find myself increasingly using AI instead since the signal/noise ratio on google is getting worse.

fercircularbuf · 8h ago
The proliferation of the smartphone eroded our ability to locate and orient ourselves and remember routes to places. So it's no surprise that a tool like this, used to outsource a task our own brains would otherwise do, would result in a decline in the skills that would be trained if we were performing that task ourselves.
arethuza · 8h ago
The only two times I have made bad navigation mistakes in mountains were in the weeks after I started using my phone and a mapping app - the realisation that using my phone was making me worse at navigation was quite a shock at the time.
khazhoux · 8h ago
But you didn't become worse at navigation. Sounds like you trusted a tool, and it failed you.
arethuza · 7h ago
No - on both occasions it was the same scenario - descending from a peak in bad weather and picking the wrong ridge to descend - I was confident I "knew" which was the right ridge and with the app I use bearings for the right route are pretty difficult to distinguish - so completely my fault.

I'm now aware of that problem and haven't had that problem since but I was pretty shocked in retrospect that I confidently headed off in the wrong direction when the tool I was using was by any objective measure much better.

I agree with this:

"the key to navigating successfully is being able to read and understand a map and how it relates to your surroundings"

https://www.mountaineering.scot/safety-and-skills/essential-...

jajko · 7h ago
This is splitting hairs; in the end his navigation skills (him + whatever tool he used) were NOK and could result in dangerous situations (been there so many times in the mountains, although it was mostly "went too far in a slightly wrong direction and don't want to backtrack that far, I'm sure I'll find a way to that already-close point..." and 10 mins later scrambling on all fours on some slippery wet rock with no room for error)
tehnub · 7h ago
Navigation is a narrow task. For many intents and purposes, LLMs are generally intelligent.
khazhoux · 8h ago
> As the proliferation of the smart phone eroded our ability to locate and orient ourselves and remember routes to places

Can you point to a study to back this up? Otherwise, it's anecdata.

ineedaj0b · 7h ago
i really tire of people always asking for studies for obvious things.

have sword skills declined since the introduction of guns? surely people still have hands and understand how to move swords, and they use knives to cut food for consumption. the skill level is the same..

but we know on aggregate most people have switched to relying on a technological advancement. there's not the same culture for swords as in the past by sheer numbers despite there being more self proclaimed 'experts'.

100 genz vs. 100 genx, you'll likely find a smidgen more of one group than the other able to find a location without a phone.

khazhoux · 6h ago
> i really tire of people always asking for studies for obvious things.

I actually agree with you on this!

But... I have very very good directional sense, and as far as I can tell it's innate. My whole life I've been able to remember pathing and maintain proper orientation. I don't think this has anything to do with lack of navigation aids (online or otherwise) during formative years.

But I'm talking about geospatial sense within the brain. If your point is that people no longer learn and improve the skill of map-reading then yes that should be self-evident.

fercircularbuf · 7h ago
https://www.sciencedirect.com/science/article/pii/S027249442...

The first paragraph of the conclusions section is also stimulating and I think aptly applies to this discussion of using AI as a tool.

> it is important to mention the bidirectionality of the relationship between GPS use and navigation abilities: Individuals with poorer ability to learn spatial information and form environmental knowledge tend to use assisted navigation systems more frequently in daily life, thus weakening their navigation abilities. This intriguing link might suggest that individuals who have a weaker “internal” ability to use spatial knowledge to navigate their surroundings are also more prone to rely on “external” devices or systems to navigate successfully. Therefore, other psychological factors (e.g., self-efficacy; Miola et al., 2023) might moderate this bidirectional relationship, and researchers need to further elucidate it.

tehnub · 7h ago
I sometimes used to think about things. Now I just ask ChatGPT and it tells me.


noname120 · 7h ago
@dang Can the unwanted editorialization of this title be removed? Nowhere does the title or article contain the gutter press statement “AI is eating our brains”.
empiko · 7h ago
I wonder to what extent this is caused by the writing style LLMs have. They just love beating around the bush, repeating themselves, using fillers, etc. I often find it hard to find the signal in the noise, but I guess that it is inevitable with the way they work. I can easily imagine my brain shutting down when I have to parse this sort of output.
pepa65 · 3h ago
It also depends on the LLM.
solumunus · 4h ago
Instruct it to be concise.
unsupp0rted · 8h ago
Also ever since we invented the written word it has been eating our brains by killing our memory
dig1 · 8h ago
Quite the opposite, it was shown that reading improves memory and cognitive abilities for children [1] and older adults [2].

[1] https://www.cam.ac.uk/research/news/reading-for-pleasure-ear...

[2] https://pmc.ncbi.nlm.nih.gov/articles/PMC8482376

readthenotes1 · 8h ago
How does that compare to the population of people who memorize the Old Testament or the Quran?

I remember hearing that the entire epics of the Iliad and the Odyssey were passed down via memorization and only spoken... How do you think those poets' memories compared to a child who reads Bob the Builder books?

rokkamokka · 8h ago
I watched so many reruns of Community I could recite the episodes by heart. I don't think that made the rest of my memory any better.
elric · 8h ago
For those who don't get the reference, Plato thought that the written word was not a good tool for teaching/learning, because it outsources some of the thinking.

Similarly (IIRC) Socrates thought the written word wasn't great for communicating, because it lacks the nuance of face-to-face communication.

I wonder if they ever realised that it could also be a giant knowledge amplifier.

moffkalast · 6h ago
They probably did, but still preferred their old way since it took more skill.

I remember some old quote about how people used to ask their parents and grandparents questions, got answers that were just as likely to be bullshit and then believed that for the rest of their life because they had no alternative info to go on. You had to invest so much time to turn a library upside down and search through books to find what you needed, if they even had the right book.

Search engines solved that part, but you still needed to know what to search for and study the subject a little first. LLMs solve the final hurdle of going from the dumbest possible wrongly posed question to directly knowing exactly what to search for in seconds. If this doesn't result in a knowledge explosion I don't know what will.

camillomiller · 8h ago
Such a comment from an AI apologist definitely helps to confirm the findings of the study.
risyachka · 8h ago
Not really. You have to memorise much more in today’s world to be able to do any kind of work.
lostlogin · 8h ago
I’m not sure of my retelling of events from the same day.

Aboriginal storytelling is claimed to pass on events from 7k+ years ago.

https://www.tandfonline.com/doi/abs/10.1080/00049182.2015.10...

chongli · 8h ago
We already know how oral cultures work: they use technologies such as rhyme, meter, music, stock characters, memory palaces, and more. If you want a good example of how powerful this stuff is, think about the last time you had a song stuck in your head.
dyauspitr · 8h ago
Yeah I’ve used ChatGPT as a starting point for so much documentation I dread having to write a product brief from scratch now.
risyachka · 8h ago
This is exactly why there is no point in using AI for coding, except in a rare few cases.

Code without AI - sharp skills, your brain works and you come up with better solutions etc.

Code with AI - skills decline after merely a week or two, you forget how to think, and because of relying on AI for simpler and simpler tasks, your total output is less and worse than if you were to DIY it.

Kon5ole · 5h ago
>Code without AI - sharp skills, your brain works and you come up with better solutions etc.

That train of thought leads to writing assembly language in ed. ;-)

I think developers as a group have a tendency to spend too much time "inside baseball" and forget what the tools we're good at are actually used for.

Farmers don't defend the scythe, spend time doing leetscythe katas or go to scything seminars. They think about the harvest.

(Ok, some farmers started the sport of Tractor Pulling when the tractor came along and forgot about the harvest but still!) :)

bootsmann · 5h ago
> That train of thought leads to writing assembly language in ed

Hard disagree. LLVM will always outperform me at writing assembly; it won't just give up and fail randomly when it meets a particularly non-trivial problem, forcing me to write assembly by hand to fix it. If LLMs were 100% reliable on the tasks I had to do, I don't think anyone here would seriously debate the issue of mental attrition (you don't see people complaining about calculators). The problem is that in too many cases the LLM will only get so far, and you will still have to switch to doing actual programming to get the task finished - and the worse you get at that last part, the more your skillset converges to exactly the type of things an LLM (and therefore everyone else with a keyboard) can reliably do.

risyachka · 3h ago
> That train of thought leads to writing assembly language in ed

you can pick any language you think is best atm. the point is you have to practice it.

use it or lose it

squigz · 7h ago
Does this logic apply to IDEs, search engines, or any of the various other tools programmers use?
risyachka · 3h ago
no

IDEs and tools don't do thinking for you.

dyauspitr · 7h ago
My total output is definitely higher.
xigoi · 2h ago
If you only care about the volume of code and not the quality or usefulness, I have an even better tool for you:

    yes 'print("hello world")' > program.py
darkwater · 7h ago
5 years from now, will your ability to explain or build from first principles on your own have increased, though?
wiseowise · 7h ago
Yes. Because none of this bullshit matters. I’ve heard this mantra for 20 years now.

Smug face: “weeeell, how can you say you’re a real programmer if you use a compiler? You need write raw assembly”, “how can you call yourself reeeeal programmer if you don’t know your computer down to every register?”, “real programmurs do not use SO/Google” and all the rest of the crap. It is all nerds trying to make themselves feel good by inflating their ego with trivia that is not interesting to anyone.

Well, what do you know? I’m still in business, despite relying a lot on Google/SO, and still create solutions that fix real human problems.

If AI can make 9 to 5 more bearable for majority of people and provide value in terms less cognitive load, let’s fucking go then.

nottorp · 7h ago
Common bullshit. An expert not realizing that even if they are capable of using these tools - because they can, subconsciously in your case, verify the output - it doesn't mean the tools help non-experts.
wiseowise · 7h ago
Touché. On this I agree with you, since I’ve started in different age.
nottorp · 7h ago
I haven't written serious assembly since high school :)

But I'm 100% sure i have some "natural" neural connections based on those experiences and those help me even when doing high level languages.

By the way, I am using LLMs. They help until they don't. One real life example i'm hitting at work is they keep mixing couchdb and couchbase when you ask about features. Their training dataset doesn't seem to be large enough in that area.

nottorp · 7h ago
You are exhibiting traces of long term thinking.

This is not what founder culture is about.

darkwater · 7h ago
I hope the ASI overlord will pardon me.
risyachka · 3h ago
>>My total output is definitely higher.

it's paper gains; the value you create is not correlated with your code output.

and the value you create will decrease if you don't think hard and train at solving problems on your own.

heroku · 7h ago
don't overpromote these witch hunts.
StopDisinfo910 · 7h ago
This study is methodologically poor: only 18 people, SAT topics (so broad and pretty poor, with the expectation of an American-style “essay”), only 20 minutes of writing - far too little time to properly use the tool given to explore (be it search engine or LLM).

With only 20 minutes, I’m not even trying to do a search. No surprise the people using LLM have zero recollection of what they wrote.

Plus they spend ages discussing correct quoting (why?) and statistical analysis via NLP which is entirely useless.

Very little space is dedicated to knowing if the essays are actually any good.

Overall pretty disappointing.

javierbg95 · 7h ago
Quoting is actually extremely important. There's a big difference between making a certain claim a) because [1] performed an experiment that confirms it and [2] and [3] reproduced it and b) because the magic machine told me so.

This is still true whether or not the claim is accurate, as it allows for actual relevant and constructive critique of the work.

StopDisinfo910 · 3h ago
It’s about free-form essay writing in 20 minutes, and the article claims to be about cognitive impacts. Exact quoting is approximately useless in this context. It’s not about experimental results; it’s about whether or not someone can quote verbatim from a piece of writing.
T4iga · 7h ago
While the results are not unexpected, I think the conclusion is questionable. Of course the recall for something you did not write will be lower, but to conclude from this that it will impede overall learning is, in my opinion, far-fetched.

I think what we are seeing is that learning and education have not adapted to these new tools yet. Producing a string of words that counts as an essay has become easier. If this frees up a student's time to do more sports or work on their science project, that's a huge net positive, even if for the essay itself it's a net negative. The essay does not exist in a school vacuum.

The thing students might not understand is: their reduced recall will make them worse at the exam... Well, they will hopefully draw their own conclusions after their first failed exam.

I think the quantitative study is important, but this qualitative interpretation is missing the point. Recall → learning is a pretty terrible way to define learning. Reproducing is the lowest step on the ladder to mastery.

neepi · 8h ago
It’s not because I’m not using it.

It’s the vape of IT.

moffkalast · 6h ago
> The reported ownership of LLM group's essays in the interviews was low. The Search Engine group had strong ownership, but lesser than the Brain-only group. The LLM group also fell behind in their ability to quote from the essays they wrote just minutes prior.

So having someone else do a task for you entirely makes your brain work less on that task? Impossible.

isaacremuant · 7h ago
Tool rots your brain alarmism, news at 11.

The claim is "my geospatial skills have atrophied due to use of Google Maps" - and yet I can use Google Maps once to quickly find a good path, and go back next time without using it. I can judge when the suggestions seem awkward and adjust.

Tools augment skills and you can use them for speedier success if you know what you're doing.

The people who need hand-held alarmism are mediocre.
