Anecdote here, but when I was in grad school, I was talking to a PhD student I respected a lot. Whenever he read a paper, he would try to write the code out and get it working. It would take me a couple of months, but he could whip it up in a few days. He explained to me that it was just practice, and the more you practice the better you become. He not only coded things quickly, he started analyzing papers more quickly too, and became really good at synthesizing ideas, knowing what worked and what didn't, and built up a phenomenal intuition.
These days, I'm fairly senior and don't touch code much anymore but I find it really really instructive to get my hands dirty and struggle through new code and ideas. I think the "just tweak the prompts bro" people are missing out on learning.
hintymad · 6m ago
This is in a way like doing math. I can read a math book all day and even appreciate the ideas in it, but I'd learn practically nothing if I didn't actually attempt to work out examples for the definitions, the theorems, and some of the exercises in the book.
TheNewsIsHere · 2m ago
I fall into this trap more than I’d care to admit.
I love learning by reading, to the point that I’ll read the available documentation for something before I decide to use it. This consumes a lot of time, and there’s a tradeoff.
Eventually if I do use the thing, I’m well suited to learning it quickly because I know where to go when I get stuck.
But by the same token I read a lot of documentation I never again need to use. Sometimes it’s useful for learning about how others have done things.
benterix · 3h ago
We are literally witnessing the skills split right in front of our eyes: (1) people who are able to understand concepts deeply, build a mental model of them, and implement them in code at any level, and (2) people who outsource that to a machine and slowly, slowly lose the capability.
For now the difference between these two populations is not that pronounced, but give it a couple of years.
CuriouslyC · 3h ago
We're just moving up the abstraction ladder, like we did with compilers. I don't care about the individual lines of code, I care about architecture, code structure, rigorous automated e2e tests, contracts with comprehensive validation, etc. Rather than waste a bunch of time poring over agent PRs, I just make them jump over extremely high static/analytic hurdles that guarantee functionality; then my only job is to identify places where the current spec and the intended functionality differ and create a new spec to mitigate.
e3bc54b2 · 2h ago
As the other comment said, LLMs are not an abstraction.
An abstraction is a deterministic, pure function that, when given A, always returns B. This allows the consumer to rely on the abstraction. This reliance frees the consumer from having to implement A->B, thus allowing it to move up the ladder.
LLMs, by their very nature, are probabilistic. Probabilistic is NOT deterministic. Which means the consumer is never really sure that, given A, the returned value is B. Which means the consumer now has to check whether the returned value is actually B, and depending on how complex the A->B transformation is, the checking function is equivalent in complexity to implementing the said abstraction in the first place.
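A toy sketch of what I mean (purely illustrative, the functions are made up):

    # Illustrative only: a deterministic abstraction vs. a probabilistic one
    # that forces the consumer to write a checking function.
    import random

    def sort_asc(xs):
        # Deterministic, pure: same input always gives the same output,
        # so the consumer can rely on it without re-checking.
        return sorted(xs)

    def sort_asc_probabilistic(xs):
        # Stand-in for a probabilistic generator: usually right, sometimes not.
        out = sorted(xs)
        if random.random() < 0.1:
            random.shuffle(out)  # simulate an occasional wrong answer
        return out

    def is_sorted_asc(xs):
        # The check the consumer is now forced to write. For sorting it's
        # cheap; for richer contracts the checker can approach the
        # complexity of the abstraction itself.
        return all(a <= b for a, b in zip(xs, xs[1:]))

    data = [3, 1, 2]
    result = sort_asc_probabilistic(data)
    if not is_sorted_asc(result):   # probabilistic output must be verified
        result = sort_asc(data)     # fall back to doing the work yourself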
pmarreck · 1m ago
> LLMs, by their very nature are probabilistic
I believe that if you can tweak the temperature input (OpenAI recently turned it off in their API, I noticed), a temperature of 0 should hypothetically result in the same output, given the same input.
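Roughly, with the OpenAI Python client it would look something like this (the model name is just an example, and even at temperature 0 the output is only "mostly deterministic" in practice, since floating-point and batching effects on the provider's side can still vary):

    # Sketch: requesting (near-)deterministic output, assuming a model that
    # still accepts the temperature parameter.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # example model name
        temperature=0,         # greedy decoding: always pick the most likely token
        seed=42,               # best-effort reproducibility hint
        messages=[{"role": "user", "content": "Write a one-line Python hello world."}],
    )
    print(resp.choices[0].message.content)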
stuartjohnson12 · 2h ago
It's delegation then.
We can use different words if you like (and I'm not convinced that delegation isn't colloquially a form of abstraction) but you can't control the world by controlling the categories.
hoppp · 1h ago
Delegation of intelligence?
So one party gets more stupid for the other to be smart?
arduanika · 46m ago
Yes, just like moving into management. So we'll get a generation of programmers who get to turn prematurely into the Pointy-Haired Boss.
benterix · 1h ago
Except that (1) the other party doesn't become smart, and (2) the one who delegates doesn't become stupid; they just lose the opportunity to become smarter, compared to a human who'd actually do the work.
charcircuit · 1h ago
It's not 0 sum. All parties can become more intelligent over time.
matt_kantor · 31m ago
They could, but you're commenting on a study whose results indicate that this isn't what happens.
charcircuit · 21m ago
And you are in a comment chain discussing how there is a subset of people where the study is not true.
robenkleene · 2h ago
One argument for abstraction being different from delegation: when a programmer uses an abstraction, I'd expect the programmer to be able to work without that abstraction if necessary, and also to be able to build their own abstractions. I wouldn't have that expectation with delegation.
vidarh · 1h ago
The vast majority of programmers don't know assembly, so can in fact not work without all the abstractions they rely on.
Do you therefore argue programming languages aren't abstractions?
benterix · 1h ago
> The vast majority of programmers don't know assembly, so can in fact not work without all the abstractions they rely on.
The problem with this analogy is obvious when you imagine an assembler generating machine code that doesn't work half of the time and a human trying to correct that.
WD-42 · 1h ago
The vast majority of programmers could learn assembly, most of it in a day. They don’t need to, because the abstractions that generate it are deterministic.
To address your specific point in the same way: when we're talking about programmers using abstractions, we're usually not talking about the programming language they're using; we're talking about the UI frameworks, networking libraries, etc. that they're using. Those are the APIs they're calling from their code, and those are all abstractions implemented at (roughly) the same level as the programmer's day-to-day work. I'd expect a programmer to be able to re-implement those if necessary.
Jensson · 1h ago
> I wouldn't have that expectation with delegation.
Managers tend to hire sub-managers to manage their people. You can see this with LLMs as well: people see "Oh, this prompting is a lot of work, let's make the LLM prompt the LLM".
robenkleene · 1h ago
Note, I'm not saying there are never situations where you'd delegate something that you can do yourself (the whole concept of apprenticeship is based on doing just that). Just that it's not an expectation, e.g., you don't expect a CEO to be able to do the CTO's job.
I guess I'm not 100% sure I agree with my original point, though: should a programmer working on JavaScript for a website's frontend be able to implement a browser engine? Probably not. But the original point I was trying to make is that I would expect a programmer working on a browser engine to be able to re-implement any abstractions they're using in their day-to-day work if necessary.
AnIrishDuck · 21m ago
The advice I've seen with delegation is the exact opposite. Specifically: you can't delegate what you can't do.
Partially because, if all else fails, you'll need to step in and do the thing yourself. Partially because if you can't do it, you can't evaluate whether it's being done properly.
That's not to say you need to be _as good_ at the task as the delegatee, but you need to be competent.
For example, this HBR article [1]. Pervasive in all advice about delegation is the assumption that you can do the task being delegated, but that you shouldn't.
> Just that it's not an expectation, e.g., you don't expect a CEO to be able to do the CTO's job.
I think the CEO role is actually the outlier here.
I can only speak to engineering, but my understanding has always been that VPs need to be able to manage individual teams, and engineering managers need to be somewhat competent if there's some dev work that needs to be done.
This only happens as necessary, and it obviously should be rare. But you get in trouble real quickly if you try to delegate things you cannot accomplish yourself.
I think what you're trying to reference is APIs or libraries, most of which I wouldn't consider abstractions. I would hope most senior front-end developers are capable of developing a date library for their use case, but in almost all cases it's better to use the built in Date class, moment, etc. But that's not an abstraction.
meheleventyone · 1h ago
There's an interesting comparison with delegation: for example, people who stop programming because they delegate it do lose their skills over time.
hosh · 1h ago
There is a form of delegation that develops the people involved, so that people can continue to contribute and grow. Each individual can contribute what is unique to them, and grow more capable as they do so. Both people, and the community of those people remain alive, lively, and continue to grow. Some people call this paradigm “regenerative”; only living systems regenerate.
There is another form of delegation where the work that needs to be done is imposed onto another, in order to exploit and extract value. We are trying to do this with LLMs now, but we also did this during the Industrial Revolution, and before that, humanity enslaved each other to get the labor to extract value out of the land. This value extraction leads to degeneration, something that happens when living systems die.
While the Industrial Revolution afforded humanity a middle-class, and appeared to distribute the wealth that came about — resulting in better standards of living — it came along with numerous ills that as a society, we still have not really figured out.
I think that, collectively, we figure that the LLMs can do the things no one wants to do, and so _everyone_ can enjoy a better standard of living. I think doing it this way, though, leads to a life without purpose or meaning. I am not at all convinced that LLMs are going to give us back that time … not unless we figure out how to develop AIs that help grow humans instead of replacing them.
So it's a noisy abstraction. Programmers deal with that all the time. Whenever you bring in an outside library or dependency there's an implicit contract that you don't have to look underneath the abstraction. But it's noisy so sometimes you do.
Colleagues are the same thing. You may abstract business domains and say that something is the job of your colleague, but sometimes that abstraction breaks.
Still good enough to draw boxes and arrows around.
delfinom · 1h ago
Noisy is an understatement: it's buggy, it's error-filled, it's time-consuming and inefficient. It's the exact opposite of automation, but great for job security.
Paradigma11 · 2h ago
"LLMs, by their very nature are probabilistic."
So are humans and yet people pay other people to write code for them.
const_cast · 2h ago
Yes but we don't call humans abstractions. A software engineer isn't an abstraction over code.
threatofrain · 1h ago
No, but depending on your governance structure, we have software engineers abstract over domains. And then we draw boxes and arrows around the works of your colleagues without looking inside the box.
skydhash · 1h ago
You wish! Bus factor risk is why you don't do this; siloed knowledge is one of the first things engineering tries to eliminate. Unless someone else's code is proven bug-free, you don't usually rely on it blindly. You just have someone to throw bug tickets at.
benterix · 1h ago
Yeah, but in spite of that, if you ask me to take a Jira ticket and do it properly, there is a much higher chance that I'll do it reliably and the rest of my team will be satisfied, whereas if I bring an LLM into the equation it will wreak havoc. (I've witnessed a few cases, and some people got fired, not really for using LLMs but for not reviewing their output properly, which I can somewhat understand, as reviewing code is much less fun than creating it.)
zasz · 2h ago
Yeah and the people paying other people to write code won't understand how the code works. AI as currently deployed stands a strong chance of reducing the ranks of the next generation of talented devs.
glitchc · 1h ago
> LLMs, by their very nature are probabilistic. Probabilistic is NOT deterministic.
Although I'm on the side of getting my hands dirty, I'm not sure the two are all that different. A modern compiler embeds a considerable degree of probabilistic behaviour.
ashton314 · 1h ago
Compilers use heuristics which may result in dramatically different results between compiler passes. Different timing effects during compilation may constrain certain optimization passes (e.g. "run algorithm x over the nodes and optimize for y seconds") but in the end the result should still not modify defined observable behavior, modulo runtime. I consider that to be dramatically different than the probabilistic behavior we get from an LLM.
davidrupp · 1h ago
> A modern compiler embeds a considerable degree of probabilistic behaviour.
Can you give some examples?
WD-42 · 46m ago
I keep hearing this but it’s a head scratcher. They might be thinking of branch prediction, but that’s a function of the cpu, not the compiler.
eikenberry · 1h ago
Local models can be deterministic and that is one of the reasons why they will win out over service based models once the hardware becomes available.
CuriouslyC · 2h ago
Ok, let's call it a stochastic transformation over abstraction spaces. It's basically sampling from the set of deterministic transformations given the priors established by the prompt.
upcoming-sesame · 2h ago
Agreed, but does this distinction really make a difference? I think the OP's point is still valid.
groby_b · 40m ago
> An abstraction is a deterministic, pure function
That must be why we talk about leaky abstractions so much.
They're neither pure functions, nor are they always deterministic. We as a profession have been spoilt by mostly deterministic code (and even then, we had a chunk of probabilistic algorithms, depending on where you worked).
Heck, I've worked with compilers that used simulated annealing for optimization, 2 decades ago.
Yes, it's a sea change for CRUD/SaaS land. But there are plenty of folks outside of that who actually took the "engineering" part of software engineering seriously, and understand just fine how to deal with probabilistic processes and risk management.
bckr · 2h ago
The LLM is not part of the application.
The LLM expands the text of your design into a full application.
The commenter you’re responding to is clear that they are checking the outputs.
charcircuit · 1h ago
>LLMs, by their very nature are probabilistic.
So are compilers, but people still successfully use them. Compilers and LLMs can both be made deterministic but for performance reasons it's convenient to give up that guarantee.
WD-42 · 44m ago
What are these non deterministic compilers I keep hearing about, honestly curious.
charcircuit · 17m ago
For example looping over the files in a directory can happen in a different order depending on the order the files were created in. If you are linking a bunch of objects the order typically matters. If the compiler is implemented correctly the resulting binary should functionally be the same but the binary itself may not be exactly the same. Or even when implemented correctly you will see cases where different objects can be the one to define a duplicate symbol depending on their relative order.
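A tiny self-contained illustration of that class of nondeterminism (a hypothetical build script in Python, not any particular compiler); the usual reproducible-build fix is to impose an explicit order:

    # os.listdir makes no ordering guarantee; the order can depend on the
    # filesystem and on when files were created, so two "identical" builds
    # may link objects in different orders and produce different binaries.
    import os, tempfile

    obj_dir = tempfile.mkdtemp()
    for name in ["zeta.o", "alpha.o", "mid.o"]:   # created in arbitrary order
        open(os.path.join(obj_dir, name), "w").close()

    objects = [f for f in os.listdir(obj_dir) if f.endswith(".o")]
    print("filesystem order:   ", objects)

    objects.sort()                                # reproducible-build fix
    print("deterministic order:", objects)        # always ['alpha.o', 'mid.o', 'zeta.o']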
daveguy · 2h ago
> An abstraction is a deterministic, pure function that, when given A, always returns B.
That is just not correct. There is no rule that says an abstraction is strictly functional or deterministic.
In fact, the original abstraction was likely language, which is clearly neither.
The cleanest and easiest abstractions to deal with have those properties, but they are not required.
robenkleene · 1h ago
This is such a funny example, because language is the main way that we communicate with LLMs. Which means you can tie both of your points together in the same example: if you take a scene and describe it in words, then have an LLM reconstruct the scene from the description, you'd likely get a scene that looks very different from the original source. This simultaneously makes both your point and the point of the person you're responding to:
1. Language is an abstraction and it's not deterministic (it's really lossy)
2. LLMs behave differently than the abstractions involved in building software, where normally if you gave the same input, you'd expect the same output.
daveguy · 1h ago
Yes, most abstractions are not as clean as leak free functional abstractions. Most abstractions in the world are leaky and lossy. Abstraction was around long before computers were invented.
beepbooptheory · 2h ago
What is the thing that language itself abstracts?
fkyoureadthedoc · 2h ago
Your thoughts, I'd say, but it's more of a two-way street than what I think of as abstraction.
daveguy · 1h ago
Okay, language was the original vehicle for abstraction if everyone wants to get pedantic about it. And yes, abstraction of thought. Only in computerland (programming, mathematics and physics) do you even have the opportunity to have leak-free functional abstractions. That is not the norm. LLM-like leaky abstractions are the norm.
blibble · 2h ago
I can count the number of times in my 20-year career that I've had to look at compiler-generated assembly on one finger
and I've never looked at the machine code produced by an assembler (other than when I wrote my own as a toy project)
is the same true of LLM usage? absolutely not
and it never will be, because it's not an abstraction
elAhmo · 2h ago
It is an abstraction.
Just because you end up looking at what the prompt produced "under the hood", in whichever language the output was generated, doesn't mean every user does.
Similar as with assembly, you might have not taken a look at it, but there are people that do and could argue the same thing as you.
The lines will be very blurry in the near future.
card_zero · 2h ago
I can fart in the general direction of the code and that's a kind of abstraction too. It distills my intent down to a simple raspberry noise which could then be interpreted by sufficiently intelligent software.
joenot443 · 2h ago
Sort of like when people argue about "what is art?", some folks find it clever to come up with the most bizarre of examples to point to, as if the entire concept rests on the existence of its members at the fringe.
Personally, I think if your farts are an abstraction whose mapping you can derive useful meaning from, who are we to tell you no?
card_zero · 2h ago
So, "can derive useful meaning from" is the point of contention.
The final result of course depending on how many r's there are in the raspberry.
chaps · 2h ago
Could you please expand on this thought? I'm curious where this abstraction's inflection points are.
KoolKat23 · 2h ago
It's still early stages, that is why.
It is not yet good enough or there is not yet sufficient trust. Also there are still resources allocated to checking the code.
I saw a post yesterday showing Brave browser's new tab using 70mb of RAM in the background. I'm very sure there's code there that can be optimized, but who gives a shit. It's splitting hairs and our computers are powerful enough now that it doesn't matter.
Immateriality has abstracted those particular few lines of code away.
LandR · 2h ago
>> I saw a post yesterday showing Brave browser's new tab using 70mb of RAM in the background. I'm very sure there's code there that can be optimized, but who gives a shit.
I do. This sort of attitude is how we have machines more powerful than ever yet everything still seems to run like shit.
const_cast · 2h ago
This is barely related, but I bet that the extra 70 mb of RAM isn't even waste - it's probably an optimization. It's possible they're spinning up a JS VM preemptively so that when you do navigate you have a hot interpreter for the inevitable script. Maybe they allocate memory for the DOM too.
KoolKat23 · 1h ago
Probably the case. I felt bad using this as an example as I don't know the specifics, but thought it was an easy way to convey my point (sorry if so, Brave developers).
rafterydj · 2h ago
I feel like I'm taking crazy pills or misunderstanding you. Shouldn't it matter that they are using 70mb of RAM more or less totally wastefully? Maybe not a deal breaker for Brave, sure, but waste is waste.
I understand the world is about compromises, but all the gains of essentially every computer program ever could be summed up by accumulation of small optimizations. Likewise, the accumulation of small wastes kills legacy projects more than anything else.
Mtinie · 1h ago
It could matter but what isn't clear to me is if 70MB is wasteful in this specific context. Maybe? Maybe not?
Flagging something as potentially problematic is useful but without additional information related to the tradeoffs being made this may be an optimized way to do whatever Brave is doing which requires the 70MB of RAM. Perhaps the non-optimal way it was previously doing it required 250MB of RAM and this is a significant improvement.
No comments yet
KoolKat23 · 1h ago
Yes, it can be construed as wasteful. But it's exactly that, a compromise. Could the programmer spend their time better elsewhere, generating more value? Not doing so is also wasteful.
Supply and demand will decide what compromise is acceptable and what that compromise looks like.
ToucanLoucan · 2h ago
> It's still early stages, that is why.
I have been hearing (reading?) this for a solid two years now, and LLMs were not invented two years ago: they are ostensibly the same tech as they were back in 2017, with larger training pools and some optimizations along the way. How many more hundreds of billions of dollars is reasonable to throw at a technology that has never once exceeded the lofty heights of "fine"?
At this point this genuinely feels like silicon valley's fever dream. Just lighting dumptrucks full of money on fire in the hope that it does something better than it did the previous like 7 or 8 times you did it.
And normally I wouldn't give a shit, money is made up and even then it ain't MY money, burn it on whatever you want. But we're also offsetting any gains towards green energy standing up these stupid datacenters everywhere to power this shit, not to mention the water requirements.
SamPatt · 56m ago
The difference between using Cursor when it launched and using Cursor today is dramatically different.
It was basically a novelty before. "Wow, AI can sort of write code!"
Now I find it very capable.
KoolKat23 · 1h ago
I know from my own use case, it went from Gemini 1.5 being unusable to Gemini 2.0 being useable. So 2 years makes a big difference. It's out there right now being used in business making money. This is tangible.
I suspect there's a lot more use out there generating money than you realize. There's no moat in using it, so I'm pretty sure it's kept on the down-low for fear of competitors catching up (which is quick and cheap to do).
How far can one extrapolate? I defer to the experts actually making these things and to those putting money on the line.
theptip · 2h ago
> and it never will be
The only thing I’m certain of is that you’re highly overconfident.
I’m sure plenty of assembly gurus said the same of the first compilers.
> because it's not an abstraction
This just seems like a category error. A human is not an abstraction, yet they write code and produce value.
An IDE is a tool not an abstraction, yet they make humans more productive.
When I talk about moving up the levels of abstraction I mean: taking on more abstract/less-concrete tasks.
Instead of “please wire up login for our new prototype” it might be “please make the prototype fully production-ready, figure out what is needed” or even “please ship a new product to meet customer X’s need”.
sarchertech · 1h ago
>“please ship a new product to meet customer X’s need”
The customer would just ask the AI directly to meet their needs. They wouldn’t purchase the product from you.
CuriouslyC · 2h ago
I like to think of it as a duck-typed abstraction. I have formal specs which guarantee certain things about my software, and the agent has to satisfy those (including performance, etc). Given that, the code the agent generates is essentially fungible in the manner of machine code.
sarchertech · 1h ago
Only if you can write specs precisely enough. For those of us who remember the way software used to be built, we learned that this is basically impossible and that English is a terrible language to even attempt it in.
If you do make your specs precise enough, such that 2 different dev shops will produce functionally equivalent software, your specs are equivalent to code.
CuriouslyC · 53m ago
This is doable, I have a multi-stage process that makes it pretty reliable. Stage 1 is ideation, this can be with a LLM or humans, w/e, you just need a log. Stage 2 is conversion of that ideation log to a simple spec format that LLMs can write easily called SRF, which is fenced inside a nice markdown document humans can read and understand. You can edit that SRF if desired, have a conversation with the agent about it to get them to massage it, or just have your agent feed it into a tool I wrote which takes a SRF and converts it to a CUE with full formal validation and lots of other nice features.
The value of this is that FOR FREE you can get comprehensive test definitions (unit+e2e), kube/terraform infra setup, documentation stubs, OpenAPI specs, etc. It's seriously magical.
Is the mapping 1:1 and completely lossless? Of course not, but I'd say the former is most definitely a sort of abstraction of the latter, and one would be disingenuous to pretend it's not.
chii · 2h ago
> my only job is to identify places where the current spec and the intended functionality differ and create a new spec to mitigate.
and to be able to do this efficiently or even "correctly", you'd need to have had mountains of experience evaluating an implementation, and be able to imagine the consequences of that implementation against the desired outcome.
Doing this requires experience that would get eroded by the use of an LLM. It's very similar to higher level maths (stuff like calculus) being much more difficult if you had poor arithmetic/algebra skills.
CuriouslyC · 2h ago
Would that experience get eroded though? LLMs let me perform experiments (including architecture/system level) quickly, and build stress test/benchmark/etc harnesses quickly to evaluate those experiments, so in the time you can build human intuition with one experiment I've done 10. I build less intuition from each experiment, but I'm building broader intuition, and if I choose a bad experiment it's a small cost, but choosing a bad experiment and performing it manually is brutal.
notTooFarGone · 1h ago
If you had google maps and you knew the directions it gives are 80% gonna be correct, would you still need navigation skills?
You could also tweak it by going like "Lead me to the US" -> "Lead me to the state of New York" -> "Lead me to New York City" -> "Lead me to Manhattan" -> "Lead me to the museum of new arts" and it would give you 86% accurate directions, would you still need to be able to navigate?
How about when you go over roads that are very frequently used you push to 92% accuracy, would you still need to be able to navigate?
Yes of course because in 1/10 trips you'd get fucking lost.
My point is: unless you get to that 99% mark, you still need the underlying skill and the abstraction is only a helper and always has to be checked by someone who has that underlying skill.
I don't see LLMs as that 99% solution in the next years to come.
matthewdgreen · 2h ago
It is possible to use LLMs this way. If you're careful. But at every place where you can use LLMs to outsource mechanical tasks, there will also be a temptation to outsource the full stack of conceptual tasks that allow you to be part of what's going on. This will create gaps where you're not sitting at any level of the abstraction hierarchy, you're just skipping parts of the system. That temptation will be greater for less experienced people, and for folks still learning the field. That's what I'm scared about.
lr4444lr · 2h ago
I'd be very cautious about calling LLM output "an abstraction layer" in software.
__loam · 2h ago
Leakiest abstraction of all time.
aprilthird2021 · 2h ago
> We're just moving up the abstraction ladder, like we did with compilers.
We're not, because you still have to check every piece of code that comes out. You didn't have to check every compilation step of a compiler. It was testable, actual code, not non-deterministic output from English-language input.
elAhmo · 2h ago
I would bet a significant amount of money that many LLM users don’t check the output. And as tools improve, this will only increase.
The number of users actually checking the output of a compiler is nonexistent. You just trust it.
LLMs are moving that direction, whether we like it or not
Jensson · 2h ago
> The number of users actually checking the output of a compiler is nonexistent. You just trust it.
Quite a few who work on low-level systems do this. I have done this a few times to debug build issues. One time a single file suddenly made compile times go up by orders of magnitude: the compiler inlined a big sort procedure in an unrolled loop, so it added the sorting code hundreds of times over in a single function and created a gigantic binary that took ages to compile, since it tried to optimize that giant function.
That is slow both at runtime and compile time, so I added a tag to not inline the sort there, and all the issues disappeared. The sort didn't have a tag asking to inline it, so the compiler simply made a mistake here; it shouldn't have inlined such a large function in an unrolled loop.
aprilthird2021 · 2h ago
Of course they don't. That's why things like the NX breach happen. That's also why they don't learn anything when they use these tools and their brains stagnate.
__loam · 2h ago
Well they're not improving that much anymore. That's why Sam Altman is out there saying it's a bubble.
CuriouslyC · 2h ago
This is incorrect, they are improving, you just don't understand how to measure and evaluate it.
The Chinese models are getting hyper efficient and really good at agentic tasks. They're going to overtake Claude as the agentic workhorses soon for sure, Anthropic is slow rolling their research and the Chinese labs are smoking. Speed/agentic ability don't show big headlines, but they really matter.
GPT5 might not impress you with its responses to pedestrian prompts, but it is a science/algorithm beast. I understand what Sam Altman was saying about how unnerving its responses can be, it can synthesize advanced experiments and pull in research from diverse areas to improve algorithms/optimize in a way that's far beyond the other LLMs. It's like having a myopic autistic savant postdoc to help me design experiments, I have to keep it on target/focused but the depth of its suggestions are pretty jaw dropping.
pessimizer · 2h ago
> We're not because you have to still check every outputted code.
To me, that's what makes it an abstraction layer, rather than just a servant or an employee. You have to break your entire architecture into units small enough that you know you can coax the machine to output good code for. The AI can't be trusted as far as you can throw it, but the distance from you to how far you can throw is the abstraction layer.
An employee you can just tell to make it work, they'll kill themselves trying to do it, or be replaced if they don't; eventually something will work, and you'll take all the credit for it. AI is not experimenting, learning and growing, it stays stupid. The longer it thinks, the wronger it thinks. You deserve the credit (and the ridicule) for everything it does that you put your name on.
-----
edit: and this thread seems to think that you don't have to check what your high level abstraction is doing. That's probably why most programs run like crap. You can't expect something you do in e.g. python to do the most algorithmically sensible thing, even if you wrote the algorithm just like the textbook said. It may make weird choices (maybe optimal for the general case, but horrifically bad for yours) that mean that it's not really running your cute algorithm at all, or maybe your cute algorithm is being starved by another thread that you have no idea why it would be dependent on. It may have made correct choices when you started writing, then decided to make wrong choices after a minor patch version change.
To pretend perfection is a necessary condition for abstraction is not even something somebody would say directly. Never. All we talk about is leaky abstractions.
Remember when GTA loading times, which (a counterfactual because we'll never know) probably decimated sales, playtime, and at least the marketing of the game, turned out to be because they were scanning some large, unnecessary json array (iirc) hundreds of times a second? That's probably a billion dollar mistake. Just because some function that was being blindly called was not ever reexamined, and because nobody profiled properly (i.e. checked the output.)
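For illustration, a simplified sketch of that class of accidentally-quadratic bug (in Python, and emphatically not the actual GTA code): every parse step re-copies/rescans the whole remaining buffer, so work that should be linear becomes O(n^2), and nobody notices until someone profiles.

    import time

    blob = ",".join(str(i) for i in range(30_000))   # stand-in for a big data blob

    def parse_slow(buf):
        items = []
        while buf:
            head, _, buf = buf.partition(",")   # copies the entire remaining buffer
            items.append(int(head))             # ...once per item: O(n^2) overall
        return items

    def parse_fast(buf):
        return [int(x) for x in buf.split(",")] # one pass over the buffer: O(n)

    for fn in (parse_slow, parse_fast):
        t0 = time.perf_counter()
        fn(blob)
        print(fn.__name__, f"{time.perf_counter() - t0:.3f}s")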
bgwalter · 2h ago
Compilers are sound abstractions. CompCert is literally a proven compiler.
LLMs make up whatever they feel like and are pretty bad at architecture as well.
defgeneric · 1h ago
A lot of the boosterism seems to come from those who never had the ability in the first place, and never really will, but can now hack a demo together a little faster than before. But I'm mostly concerned about those going through school who don't even realize they're undermining themselves by reaching for AI so quickly.
NoGravitas · 37m ago
Perhaps more importantly, those boosters may never have had the ability to really model a problem in the first place, and didn't miss it, because muddling through worked well enough for them. Many such cases.
mooreds · 1h ago
I've posted this Asimov short story before, but this comment inspires me to post it again.
"Somewhere there must be men and women with capacity for original thought."
He wrote that in 1957. 1957!
Terr_ · 1h ago
The first sentence made me expect Asimov's "The Feeling of Power", which--avoiding spoilers--regards the (over-)use of calculators by a society.
However, since I brought up calculators, I'd like to pre-emphasize something: They aren't analogous to today's LLMs. Most people don't offload their "what and why" executive decision-making to a calculator, calculators are orders of magnitude more trustworthy, and they don't emit plausible lies to cover their errors... Though that last does sound like another short-story premise.
It turns out incremental thought is much better than original thought. I guess.
el_benhameen · 3h ago
What about people who use the machines to augment their learning process? I find that being able to ask questions, particularly “dumb” questions that I don’t want to bother someone else with and niche questions that might not be answered well in the corpus, helps me better understand new concepts. If you just take the answers and move on, then sure, you’re going to have a bad time. But if you critically interrogate the answers and synthesize the information, I don’t see how this isn’t a _better_ era for people who want to develop a deep understanding of something.
Peritract · 2h ago
There's a difference between learning and the perception/appearance of learning; teachers need to manage this in classrooms, but how do you manage it on your own?
el_benhameen · 1h ago
I don’t think this is a critique of llms so much as a general observation that actual deep learning is difficult.
I've read plenty of books (thanks, Dickens) where I looked at every word on every page but can recall very little of what they meant. You can look at the results from an LLM and say "huh, cool, I know that now" and do nothing to assimilate that knowledge, or you can think deeply about it and try to fit it in with everything else you know about the subject. The advantage here is that you can ask follow-up questions if something doesn't click.
Peritract · 43m ago
It's not a critique of LLMs, but it is a reason to be wary of the claim that it really helps you learn.
We have the idea of 'tutorial hell' for programming (particularly gamedev), where people go through the motions of learning without actually progressing.
Until you go apply the skills and check, it's hard to evaluate the effectiveness of a learning method.
benterix · 2h ago
I fully agree. Using LLMs for learning concepts is great if you combine it with actively using/testing your knowledge. But outsourcing your tasks to an LLM makes your inner muscles weaker.
aprilthird2021 · 2h ago
> I don’t see how this isn’t a _better_ era for people who want to develop a deep understanding of something.
Same way a phone in your pocket gives you the world's compiled information available in a moment. But that's generally led to loneliness, isolation, social upheaval, polarization, and huge spread of wrong information.
Whether you can handle the negatives is a big if. Even the smartest of our professional class are addicted to doomscrolling these days. You think they will get only the positives of AI use and avoid the negatives?
prewett · 2h ago
You conflated two things. Access to the world's information did not produce loneliness, etc.; it was the availability of social networks designed for engagement^Waddiction on that same device that did. You don't need to install FB et al. on your phone.
aprilthird2021 · 2h ago
Again, most people have FB on their phone. The vast majority of people will not be like snowflake HN anecdotes who claim to only get the positives and not the negatives of technology
stocksinsmocks · 1h ago
(3) people who never had that capability.
Remember we aren't all above average. You shouldn't worry. Now that we have widespread literacy, nobody needs to, and few even could, recite the Norse sagas or the Iliad from memory. Basically nobody has useful skills for nomadic survival.
We’re about to move on to more interesting problems, and our collective abilities and motivation will still be stratified as it always has been and must be.
rf15 · 1h ago
I don't think it's down to ability, I think it's down to the decision not to learn itself. And that scares me.
segmondy · 1h ago
And (3), people who are going to use AI to build a mental model quickly and understand the concepts more deeply.
noboostforyou · 2h ago
And it's almost the same exact trend in skill atrophy we saw between millennials and gen Z/alpha, where most kids these days cannot do basic computer troubleshooting even if they've owned a computer/smartphone practically their entire lives.
InfamousRece · 1h ago
> slowly, slowly lose that capability.
Well, not so slowly it seems.
chaps · 3h ago
> (1) people who are able to understand the concepts deeply, build a mental model of it and implement them in code at any level, and (2) people who outsource it to a machine and slowly, slowly loose that capability.
...is it really only going to be these two? No middle ground, gradient, or possibly even a trichotomous variation of your split?
> loose that capability
You mean "lose". ;)
benterix · 2h ago
Well, in real life you always have a gradient (or, more usually, a normal distribution) - actually it would be interesting to understand what the actual distribution is and how it changes with time.
chaps · 2h ago
It would! I'd be very interested in seeing the shape and outliers. Like, are there some folk who actually see improvements? Or who don't see any brain reprogramming at all? Do different methods of interacting with these systems have the same reprogramming effect? etc etc
andy99 · 3h ago
I think the bigger issue is outsourcing all or most thought to a LLM (data labeler really). Using tools you don't understand from first principles has always been commonplace and not really an issue. But not thinking anymore is new for most of us.
squigz · 2h ago
The loss of expertise does not mean the loss of a need for expertise. We'll still need experts. We'll also still need people who are... not experts. To that point...
> For now the difference between these two populations is not that pronounced yet but give it a couple of years.
There are lots and lots of programmers and other IT people who make a living that I wouldn't say fall into your first bucket.
mrits · 3h ago
I suppose the question is if we need to understand the concepts deeply. I'm not sure many of us did to begin with and we have shipped a lot of code.
MobiusHorizons · 2h ago
I see a lot of junior engineers, or more senior engineers who are outside their areas of expertise try to prioritize making progress without taking the time to understand. They will copy examples very closely, following best practices blindly, or get someone else to make the critical design decisions. This can get you surprisingly far, but it’s also dangerous because the further past your understanding you are operating, the more you might have to learn all at once in order to fix something when it breaks. Debugging challenges all your assumptions, and if you don’t have models of what the pieces are and how they interact, it’s incredibly hard to start building them when something is already broken. Even then, some engineers don’t learn the models when debugging and resort to defensive and superstitious behaviors based on whatever solution they stumbled on last time. This is a pretty normal part of the learning process, but some engineers don’t seem to get past this stage. Some don’t even want to.
tmcb · 2h ago
Well, cargo cult programming is definitely a thing, and has been for a long time. It may “deliver value”, but it is not guaranteed. I believe entrepreneurs have an easier time having AI do the work for them because their value assessment framework is decoupled from code generation proper.
pengaru · 1h ago
> and (2) people who outsource it to a machine and slowly, slowly lose that capability.
What I'm seeing is most of this group never really had the capability in the first place. These are the formerly unproductive slackers who now churn out GenAI slop with their name on it at an alarming rate.
theptip · 2h ago
Just beware the “real programmers hand-write assembly” fallacy. It was said that compilers would produce a generation of programmers unable to understand the workings of their programs. In some sense, this is true! But, almost nobody thinks it really matters for the actual project of building things.
If you stop thinking, then of course you will learn less.
If instead you think about the next level of abstraction up, then perhaps the details don’t always matter.
The whole problem with college is that there is no “next level up”, it’s a hand-curated sequence of ideas that have been demonstrated to induce some knowledge transfer. It’s not the same as starting a company and trying to build something, where freeing up your time will let you tackle bigger problems.
And of course this might not work for all PhDs; maybe learning the details is what matters in some fields - though with how specialized we’ve become, I could easily see this being a net win.
Jensson · 2h ago
> Just beware the “real programmers hand-write assembly” fallacy
All previous programming abstractions kept correctness: a Python program produces no less reliable results than a C program running the same algorithm, it just takes more time.
LLMs don't keep correctness: I can write a correct prompt and get incorrect results. Then you are no longer programming; you are a manager overseeing a senior programmer with extreme dementia, who forgets what they were doing a few minutes ago while you try to convince them to write what you want before they forget that as well and restart the argument.
That's not strictly speaking true, since most (all?) high level languages have undefined behaviors, and their behavior varies between compilers/architectures in unexpected ways. We did lose a level of fidelity. It's still smaller than the loss of fidelity from LLMs but it is there.
pnt12 · 1h ago
That's a bit pedantic: lots of python programs will work the same way in major OSs. If they don't, someone will likely try to debug the specific error and fix it. But LLMs frequently hallucinate in non deterministic ways.
Also, it seems like there's little chance for knowledge transfer. If I work with dictionaries in Python all the time, eventually I'm better prepared to go under the hood and understand their implementation. If I'm prompting an LLM, what's the bridge from prompt engineering to software engineering? Not such a direct connection, surely!
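(To make "go under the hood" concrete, here's a toy sketch, hugely simplified compared to CPython's real dict, of the kind of implementation someone who has used dicts heavily is well prepared to understand.)

    # Toy hash map: buckets of (key, value) pairs selected by hash(key).
    class TinyDict:
        def __init__(self, nbuckets=8):
            self.buckets = [[] for _ in range(nbuckets)]

        def _bucket(self, key):
            # hash() picks a bucket; colliding keys share a bucket.
            return self.buckets[hash(key) % len(self.buckets)]

        def __setitem__(self, key, value):
            bucket = self._bucket(key)
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)   # overwrite existing key
                    return
            bucket.append((key, value))

        def __getitem__(self, key):
            for k, v in self._bucket(key):
                if k == key:
                    return v
            raise KeyError(key)

    d = TinyDict()
    d["a"] = 1
    d["a"] = 2
    print(d["a"])   # 2, same observable behavior as a built-in dict here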
ashton314 · 1h ago
Undefined behavior does not violate correctness. Undefined behavior is just wiggle room for compiler engineers to not have to worry so much about certain edge cases.
"Correctness" must always be considered with respect to something else. If we take e.g. the C specification, then yes, there are plenty of compilers that are in almost all ways people will encounter correct according to that spec, UB and all. Yes, there are bugs but they are bugs and they can be fixed. The LLVM project has a very neat tool called Alive2 [1] that can verify optimization passes for correctness.
I think there's a very big gap between the kind of reliability we can expect from a deterministic, verified compiler and the approximating behavior of a probabilistic LLM.
However, the undefined behaviours are specified and known about (or at least some people know about them). With LLMs, there's no way to know ahead of time that a particular prompt will lead to hallucinations.
PhantomHour · 59m ago
> Just beware the “real programmers hand-write assembly” fallacy. It was said that compilers would produce a generation of programmers unable to understand the workings of their programs. In some sense, this is true! But, almost nobody thinks it really matters for the actual project of building things.
One of the other replies alludes to it, but I want to say it explicitly:
The key difference is that you can generally drill down to assembly, there is infinitely precise control to be had.
It'd be a giant pain in the ass, and not particularly fast, but if you want to invoke some assembly code in your Java, you can just do that. You want to see the JIT compiler's assembly? You can just do that. JIT Compiler acting up? Disable it entirely if you wish for more predictable & understandable execution of the code.
And while people used to higher level languages don't know the finer details of assembly or even C's memory management, they can incrementally learn. Assembly programming is hard, but it is still programming and the foundations you learn from other programming do help you there.
Yet AI is corrosive to those foundations.
theptip · 2m ago
I don't follow; you can read the code that your LLM produces as well.
It's way easier to drill down in this way than the bytecode/assembly vs. high-level language divide.
daemin · 2h ago
I would agree with the statement that you don't need to know or write in assembly to build programs, but what you end up with is usually slow and inefficient.
Having curiosity to examine the platform that your software is running on and taking a look into what the compilers generate is a skill worth having. Even if you never write raw assembly yourself, being able to see what the compiler generated and how data is laid out does matter. This then helps you make better decisions about what patterns of code to use in your higher level language.
bdelmas · 1h ago
Yes, in data science there is a saying: "there is no free lunch". With ChatGPT and others becoming so prevalent, even at the PhD level, people who work hard and avoid using these tools will be more and more seen as magicians. I already see this in coding, where people can't code medium-to-hard things and their intuition, like you said, is wacky. It's not imposter syndrome anymore; it's people not being able to get their job done without some AI involved.
What I do personally is, for every subject that matters to me, I take the time to first think about it. To explore ideas, concepts, etc., and answer the questions I would otherwise ask ChatGPT. Only once I have a good idea of it do I start to ask ChatGPT about it.
thisisit · 56m ago
This seems like the age-old discussion of how new technology changes our lives and makes us "lazy" or leads to a "lack of learning".
Before the advent of smartphones people needed to remember phone numbers of their loved ones and maybe do some small calculations on the fly. Now people sometimes don't even remember their own numbers and have it saved on their phones.
Now some might want to debate how smartphones are different from LLMs and it is not the same. But we have to remember for better or worse LLM adoption has been fast and it has become consumer technology. That is the area being discussed in the article. People using it to write essays. And those who might be using the label of "prompt bros" might be missing the full picture. There are people, however small, being helped by LLMs as there were people helped by smartphones.
This is by no means a defense of using LLMs for learning tasks. If you write code yourself, you learn coding. If you write your essays yourself, you learn how to make solid points.
geye1234 · 3h ago
Interesting, thanks. Do you mean he would write the code out by hand with pen and paper? That has often struck me as a very good way of understanding things (granted, I don't code for my job).
Similar thing in the historian's profession (which I also don't do for my job but have some knowledge of). Historians who spend all day immersed in physical archives tend, over time, to be great at synthesizing ideas and building up an intuition about their subject. But those who just Google for quotes and documents on whatever they want to write about tend to have a more static and crude view of their topic; they are less likely to consider things from different angles, or see how one thing affects another, or see the same phenomenon arising in different ways; they are more likely to become monomaniacal (exaggerated word, but it gets the point across) about their own thesis.
martingalex2 · 40m ago
Assuming this observation applies generally, give one point to the embodiment crowd.
scarface_74 · 8m ago
I work in cloud consulting specializing in application development. But most of the time when an assignment is to produce code instead of leading a project or doing strategy assessments, it’s to turn around a quick proof of concept that requires a broad set of skills - infrastructure, “DevOps”, backend development and ETL type jobs where the goal is to teach the client or to get them to sign off on a larger project where we will need to bring in a team.
For my last two projects, I didn’t write a single line of code by hand. But I refuse to use agents and I build up an implementation piece by piece via prompting to make sure I have the abstractions I want and reusable libraries.
I take no joy in coding anymore, and I've been doing it for forty years. I like building systems and solving business problems.
I’m not however disagreeing with you that LLMs will make your development skill atrophy, I’m seeing it in real time at 51. But between my customer facing work and supporting sales and cat herding, I don’t have time to sit around and write for loops and I’m damn sure not going to do side projects outside of work. Besides, companies aren’t willing to pay my company’s bill rates for me as a staff consultant to spend a lot of time coding.
I hopefully can take solace in the fact that studies also show that learning a second language strengthens the brain and I’m learning Spanish and my wife and I plan to spend a couple of months in the winter every year in a Central American Spanish speaking country.
We have already done the digital nomad thing across the US for a year until late 2023 so we are experienced with it and spent a month in Mexico.
giancarlostoro · 2h ago
> These days, I'm fairly senior and don't touch code much anymore but I find it really really instructive to get my hands dirty and struggle through new code and ideas. I think the "just tweak the prompts bro" people are missing out on learning.
If you just use prompts and don't actually read the output, and figure out why it worked, and why it works, you will never get better. But if you take the time to understand why it works, you will be better for it, and might not even bother asking next time.
I've said it before, but when I first started using Firefox with autocorrect in like 2005, I made it a point to learn to spell from it, so that over time I would make fewer typos. English is my second language, so it's always been an uphill battle for me despite having a native American English accent. Autocorrect in Firefox helped me tremendously.
I can use LLMs to plunge into things I'm afraid of trying out due to impostor syndrome and get more done sooner and learn on the way there. I think the key thing is to use tools correctly.
AI is like the limitless drug to a degree, you have an insane fountain of knowledge at your fingertips, you just need to use it wisely and learn from it.
crinkly · 2h ago
Good to hear. I will add that not everyone who writes a paper expects anyone to read it with that level of diligence, and it can lead to some interesting outcomes for the paper's authors over time.
Keep up the good work is all I can say!
zingababba · 2h ago
I just decided to take a break from LLMs for coding assistance a couple days ago. Feels really good. It's funny how fast I am when I just understand the code myself instead of not understanding it and proooompting.
vonneumannstan · 3h ago
>I think the "just tweak the prompts bro" people are missing out on learning.
Alternatively they're just learning/building intuition for something else. The level of abstraction is moving upwards. I don't know why people don't seem to grok that the level of the current models is the floor, not the ceiling. Despite the naysayers like Gary Marcus, there is in fact no sign of scaling or progress slowing down at all on AI capabilities. So it might be that if there is any value in human labor left in the future it will be in being able to get AI models to do what you want correctly.
Brian_K_White · 3h ago
Wishful, self-serving, and beside the point. The primary argument here is not about the capability of the ai.
I think the same effect has been around forever, in the form of every boss/manager/CEO/rando-divorcee-or-child-with-money using employees to do their thinking, just as a current information-handling worker or student uses an AI to do their thinking.
vonneumannstan · 2h ago
>Wishful, self-serving, and beside the point. The primary argument here is not about the capability of the ai.
"Alternatively they're just learning/building intuition for something else."
Reading comprehension is hard.
benterix · 3h ago
That would be true if several conditions were fulfilled, starting with LLMs actually being able to do their tasks properly, which they still very much struggle with; that basically defeats the premise of moving up an abstraction layer if you have to constantly check and correct the lower layer.
vonneumannstan · 2h ago
Skill issue. Git gud.
jimkri · 3h ago
I don't think Gary Marcus is necessarily a naysayer; I take it that he is trying to get people to be mindful of the current AI tooling and its capabilities, and that there is more to do before we say it is what it is being marketed as. Like, GPT5 seems to be an additional feature layer of game theory examples. Check LinkedIn for how people think it behaves, and you can see patterns. But they market it as much more.
vonneumannstan · 2h ago
>I don't think Gary Marcus is necessarily a naysayer
Oh come on. He is by far the most well known AI poo-poo'er and it's not even close. He built his entire brand on it once he realized his own research was totally irrelevant.
lazide · 2h ago
I remember this exact discussion (and exact situation) with WYSIWYG UI design tools.
They were still useful, and did solve a significant portion of user problems.
They also created even more problems, and no one really went out of work long term because of them.
asveikau · 3h ago
This reads to me as extremely defensive.
vonneumannstan · 2h ago
It's not but ok. Just responding to another version of "This generation is screwed" that has been happening literally since Socrates.
codyb · 3h ago
Really? No signs of slowing down?
A year or two ago when LLMs popped on the scene my coworkers would say "Look at how great this is, I can generate test cases".
Now my coworkers are saying "I can still generate test cases! And if I'm _really pacificcccc_, I can get it to generate small functions too!".
It seems to have slowed down considerably, but maybe that's just me.
vonneumannstan · 2h ago
Yeah NGL, if you can't get a model that is top 1% in competitive coding and IMO gold-medal tier to do anything useful, that's just an indictment of your skill level with them.
lazide · 2h ago
At the beginning, it’s easy to extrapolate ‘magic’ to ‘can do everything’.
Eventually, it stops being magic and the thinking changes - and we start to see the pros and cons, and see the gaps.
A lot of people are still in the ‘magic’ phase.
tuesdaynight · 2h ago
Sorry for the bluntness, but you sound like you have a lot of opinions about LLM performance for someone who says they don't use them. It's okay if you are against them, but if you last used them 3 years ago, you have no idea whether there have been improvements or not.
Jensson · 1h ago
You can see what people built with LLMs 3 years ago and what they build with LLMs today and compare the two.
That is a very natural and efficient way to do it, and also more reliable than using your own experience since you are just a single data point with feelings.
You don't have to drive a car to see where cars were 20 years ago, see where cars are today, and say: "it doesn't look like cars will start flying anytime soon".
Peritract · 1h ago
> you sound like you have a lot of opinions about LLM performance for someone who says they don't use them
It's not reasonable to treat only opinions that you agree with as valid.
Some people don't use LLMs because they are familiar with them.
vonneumannstan · 2h ago
"It can't do 9.9-9.11 or count the number of r's in strawberry!"
lol
Nevermark · 1h ago
Since models are given tokens, not letters, to process, the famous issues with counting letters are not indicative of incompetence. They are just sub-sensory for the model.
None of us can reliably count the e’s as someone talks to us, either.
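(As a concrete illustration, here is a minimal sketch, assuming the open-source tiktoken tokenizer library is installed; the exact token split varies by model, so treat the output as illustrative only.)

    import tiktoken

    # Encode the word the way a GPT-style model would receive it: as integer token ids.
    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("strawberry")

    # Decode each id back into its sub-word chunk to see what the model actually "sees".
    pieces = [enc.decode_single_token_bytes(i).decode("utf-8", errors="replace") for i in ids]
    print(ids)     # a short list of integers, not ten letters
    print(pieces)  # sub-word chunks; the individual r's never appear as separate units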
hatefulmoron · 50m ago
It does say something that the models simultaneously:
a) "know" that they're not able to do it for the reason you've outlined (as in, you can ask about the limitations of LLMs for counting letters in words)
b) still blindly engage with the query and get the wrong answer, with no disclaimer or commentary.
If you asked me how many atoms there are in a chair, I wouldn't just give you a large natural number with no commentary.
KoolKat23 · 3h ago
Agree with this.
I mean, the guy assembling a thingymajig in the factory can, after a few years, put it together with his hands 10x faster than the actual thingymajig designer. He'll tell you to apply some more glue here and less glue there (it's probably slightly better, but immaterial really).
However, he probably couldn't tell you what the fault tolerance of the item is; the designer can do that.
We still outsource manufacturing to the guy in the factory regardless.
We just have to get better at identifying the risks of using LLMs for the grunt work, and get better at mitigating them. As you say, abstracted.
fatata123 · 3h ago
LLMs are plateauing, and you’re in denial.
tomrod · 4h ago
A few things to note.
1. This is arxiv - before publication or peer review. Grain of salt.[0]
2. 18 participants per cohort
3. 54 participants total
Given the low N and the likelihood that this is drawn from 18-22 year olds attending MIT, one should expect an uphill battle for replication and for generalizability.
Further, they are brain scanning during the experiment, which is an uncomfortable/out-of-the-norm experience, and the object of their study is easy to infer if not directly known by the population (the person being studied using LLM, search tools, or no tools).
> We thus present a study which explores the cognitive cost of using an LLM while performing the task of writing an essay. We chose essay writing as it is a cognitively complex task that engages multiple mental processes while being used as a common tool in schools and in standardized tests of a student's skills. Essay writing places significant demands on working memory, requiring simultaneous management of multiple cognitive processes. A person writing an essay must juggle both macro-level tasks (organizing ideas, structuring arguments), and micro-level tasks (word choice, grammar, syntax). In order to evaluate cognitive engagement and cognitive load as well as to better understand the brain activations when performing a task of essay writing, we used Electroencephalography (EEG) to measure brain signals of the participants. In addition to using an LLM, we also want to understand and compare the brain activations when performing the same task using classic Internet search and when no tools (neither LLM nor search) are available to the user.
>These 54 participants were between the ages of 18 to 39 years old (age M = 22.9, SD = 1.69) and all recruited from the following 5 universities in greater Boston area: MIT (14F, 5M), Wellesley (18F), Harvard (1N/A, 7M, 2 Non-Binary), Tufts (5M), and Northeastern (2M) (Figure 3). 35 participants reported pursuing undergraduate studies and 14 postgraduate studies. 6 participants either finished their studies with MSc or PhD degrees, and were currently working at the universities as post-docs (2), research scientists (2), software engineers (2)
I would describe the study size and composition as a limitation, and a reason to pursue a larger and more diverse study for confirmation (or lack thereof), rather than a reason to expect an "uphill battle" for replication and so forth.
hedora · 38m ago
The experimental setup is hopelessly flawed. It assumes that people’s tasks will remain unchanged in the presence of an LLM.
If the computer writes the essay, then the human that’s responsible for producing good essays is going to pick up new (probably broader) skills really fast.
tomrod · 4h ago
> I would describe the study size and composition as a limitation, and a reason to pursue a larger and more diverse study for confirmation, rather than a reason to expect an "uphill battle" for replication and so forth.
Maybe. I believe we both agree it is a critical gap in the research as-is, but whether it is a neutral item or an albatross is an open question. Much of psychology and neuroscience research doesn't replicate, often because of the limited sample size / composition as well as unrealistic experimental design. Your approach of deepening and broadening the demographics would attack generalizability, but not necessarily replication.
My prior puts this on an uphill battle.
genewitch · 45m ago
do you feel this way about every study with N~=54? For instance the GLP-1 brain cancer one?
jdietrich · 38m ago
Most studies don't replicate. Unless a study is exceptionally large and rigorous, your expectation should be that it won't replicate.
stackskipton · 2h ago
I'd love to see a much more diverse selection of schools. All of these schools are extremely selective, so you are looking at an extremely selective slice of the population.
mnky9800n · 4h ago
I feel like saying papers pre peer review should be taken with a grain of salt should be stopped. Peer review is not some idealistic scientific endeavour: it often leads to bullshit comments, slows down release, is free work for companies that have massive profit margins, etc. From my experience publishing 30+ papers, I have received as many bad or useless comments as I have good ones. We should at least default to open peer review and editorial communication.
Science should become a marketplace of ideas. Your other criticisms are completely valid. Those should be what’s front and center. And I agree with you. The conclusions of the paper are premature and designed to grab headlines and get citations. Might as well be posting “first post” on slashdot. IMO we should not see the current standard of peer review as anything other than anachronistic.
chaps · 3h ago
Please no. Remember that room temperature superconductor nonsense that went on for way too long? Let's please collectively try to avoid that..
physarum_salad · 3h ago
That paper was debunked as a result of the open peer review enabled by preprints! It's astonishing how many people miss that and assume that closed peer review even performs that function well in the first place. For the absolute top journals, or those with really motivated editors, closed peer review is good. However, often it's worse... way worse (i.e. reams of correct-seeming, surface-level research without proper methods or review of protocols).
The only advantage to closed peer review is it saves slight scientific embarrassment. However, this is a natural part of taking risks ofc and risky science is great.
P.s. in this case I really don't like the paper or methods. However, open peer review is good for science.
ajmurmann · 3h ago
To your point, the paper AFAIK wasn't debunked because someone read it carefully but because people tried to reproduce it. Peer reviewers don't reproduce results. I think we'd be better off with fewer peer reviews and more time spent actually reproducing results. That's why we have a whole crisis named after it.
jcranmer · 3h ago
> To your point the paper AFAIK wasn't debunked because someone read it carefully but because people tried to reproduce it.
Actually, from my recollection, it was debunked pretty quickly by people who read the paper, because the paper was hot garbage. I saw someone point out that its graph of resistivity showed higher resistance than copper wire. It was no better than any of the other claimed room-temperature superconductor papers that came out that year; it merely managed to catch virality on social media and therefore drove people to attempt to reproduce it.
chaps · 3h ago
To be clear, I'm not saying that peer review is bad!! Quite the opposite.
physarum_salad · 3h ago
Yes ofc! I guess the major distinction is closed versus open peer review. Having observed some abuses of the former I am inclined to the latter. Although if editors are good maybe it's not such a big difference. The superconducting stuff was more of a saga rather than a reasonable process of peer review too haha.
mwigdahl · 3h ago
And cold fusion. A friend's father (a chemistry professor) back in the early 90s wasted a bunch of time trying variants on Pons and Fleischmann looking to unlock tabletop fusion.
stonemetal12 · 3h ago
Rather, given the reproducibility crisis, how much salt does peer review knock off that grain? How often does peer review catch fraud or just bad science?
Bender · 3h ago
I would also add: how often are peer reviews done by the same group of buddy-bro back-scratchers who know that if they help that person with a positive peer review, that person will return the favor? How many peer reviewers actually reproduce the results? How many peer reviewers would approve a paper if their credentials were on the line?
Ironically, I am waiting for AI to start automating the process of teasing apart obvious pencil whipping, back scratching, and buddy-bro behavior. Some believe falsified papers and pencil-whipped reviews are in the 1% range. I expect it to be significantly higher, based on reading NIH papers for a long time in the attempt to actually learn things. I've reported the obvious shenanigans and sometimes papers are taken down, but there are so many bad incentives in this process that I predict it will only get worse.
genewitch · 38m ago
Who says it's "1%"? I'd reckon it's closer to 50% than 1%; that could mean 27%, it could mean 40%. I always have this at the back of my mind when I say something and someone rejects it by citing a paper (or two). I doubt they even read the paper they're telling me to read as proof I am wrong, to start with. And then the "what are the chances this is repro?" itches a bit.
This also ignores the fact that you can find a paper to support nearly everything if one is willing to link people "correlative" studies.
tomrod · 3h ago
> I feel like saying papers pre peer review should be taken with a grain of salt should be stopped.
Absolutely not. I am an advocate for peer review, warts and all, and find that it has significant value. From a personal perspective, peer review has improved or shot down 100% of the papers that I have worked on -- which to me indicates its value in ensuring good ideas with merit make it through. Papers I've reviewed are similarly improved -- no one knows everything and it's helpful to have others with knowledge add their voice, even when the reviewers also add cranky items.[0] I would grant that it isn't a perfect process (some reviewers and editors are bad, some steal ideas) -- but that is why the marketplace of ideas exists across journals.
> Science should become a marketplace of ideas.
This already happens. The scholarly sphere is the savanna when it comes to resources -- it looks verdant and green but it is highly resource constrained. A shitty idea will get ripped apart unless it comes from an elephant -- and even then it can be torn to shreds.
That it happens behind paywalls is a huge problem, and the incentive structures need to be changed for that. But unless we want blatant charlatanism running rampant, you want quality checks.
There are two questions at play. First, does the research pass the most rigorous criteria to become widely-accepted scientific fact? Second, does the research present enough evidence to tip your priors and change your personal decisions?
So it's possible to be both skeptical of how well these results generalize (and call for further research), but also heed the warning: AI usage does appear to change something fundamental about our cognitive processes, enough to give any reasonable person pause.
memco · 2h ago
It’s also worth noting that this was specifically about the effects of ChatGPT on users’ ability to write essays: which means that if you don’t practice your writing skills, then your writing skills decline. This doesn’t seem to show that it is harmful, just that it does not induce the same brain activity that is observed in other essay writing methods.
Additionally, the original paper uses the term “cognitive debt”, not cognitive decline, which may have important ramifications for interpretation and conclusions.
I wouldn’t be surprised to see similar results in other similar types of studies, but it does feel a bit premature to broadly conclude that all LLM/AI use is harmful to your brain. In a less alarmist take: this could also be read to show that AI use effectively simplifies the essay writing process by reducing cognitive load, therefore making essays easier and more accessible to a broader audience but that would require a different study to see how well the participants scored on their work.
bjourne · 2h ago
> In a less alarmist take: this could also be read to show that AI use effectively simplifies the essay writing process by reducing cognitive load, therefore making essays easier and more accessible to a broader audience but that would require a different study to see how well the participants scored on their work.
In much the same way chess engines make competitive chess accessible to a broader audience. :)
giancarlostoro · 3h ago
The other thing to note is that "AI" is being used in place of LLMs. AI is a lot of things; I would be surprised to find out that generating images, video and audio would lead to cognitive decline. What I think LLMs might lead to is intellectual laziness: a "why memorize or remember something if the LLM can remember it" type of thing.
KoolKat23 · 2h ago
I'd say the framing is wrong. Do we call delivery drivers lazy because they take the highway rather than the backroads? Or because they drive the goods there rather than walk? They're missing out on all that traffic intersection experience.
Perhaps the issue of cognitive decline comes from sitting there vegetating rather than applying themselves during all that additional spare time.
Although my experience has been perhaps different using LLM's, my mind still tires at work. I'm still having to think on the bigger questions, it's just less time spent on the grunt work.
mym1990 · 3h ago
I would argue that intellectual laziness can and will lead to cognitive decline as much as physical laziness can and will lead to muscle atrophy. It’s akin to using a maps app to get from point a to b but not ever remembering the route, even though someone has done it 100 times.
I don’t know the percentage of people who are still critically thinking while using AI tools, but I can first hand see many students just copy pasting content to their school work.
giancarlostoro · 2h ago
Fully agree. I think the cognitive decline probably builds up over time. Look at old, retired people, for example: how they go from feeling like a teenager to barely remembering anything.
somenameforme · 3h ago
In general I agree with you regarding the weakness of the paper, but not the skepticism towards its outcome.
Our bodies naturally adjust to what we do. Do things and your body reinforces that, enabling you to do even more advanced versions of those things. Don't do things and your skill or muscle in them tends to atrophy over time. Asking LLMs to (as in this case) write an essay is always going to be orders of magnitude easier than actually writing an essay. And so it seems fairly self evident that using LLMs to write essays would gradually degrade your own ability to do so.
I mean it's possible that this, for some reason, might not be true, but that would be quite surprising.
tomrod · 3h ago
Ever read the Bobiverse books? They provide a pretty functional cognitive model for how humans will probably interface with tooling like AI (even though it is fiction) -- lower level actions are pushed into autonomous regions until a certain deviancy threshold is achieved. Much like breathing -- you don't typically think about breathing until it becomes a problem (choking, underwater, etc.) and then it very much hits the high level of the brain.
What is reported as cognitive decline in the paper might very well be cognitive decline. It could also be alternative routing focused on higher abstractions, which we interpret as cognitive decline because the effect is new.
I share your concern, for the record, that people become too attached to LLMs for generation of creative work. However, I will say it can absolutely be used to unblock and push more through. The quality versus quantity balance definitely needs consideration (which I think they are actually capturing vs. cognitive decline) -- the real question to me is whether an individual's production possibility frontier is increased (which means more value per person -- a win!), partially negative in impact (use with caution), or decreased overall (a major loss). Cognitive decline points to the latter.
IshKebab · 4h ago
Yeah my bullshit detector is going off even more than when I use ChatGPT...
4. This is clickbait research, so it's automatically less likely to be true.
5. They are touting obvious things as if they are surprising, like the fact that you're less likely to remember an essay that you got something else to write, or that the ChatGPT essays were verbose and superficial.
dahart · 3h ago
This comment reminds me of the so-called Dunning-Kruger effect. That paper had pretty close to the same sample size, and participants were pulled from a single school (Cornell). It also has major methodology problems, and has had an uphill battle for replication and generalizability, actually losing the battle in some cases. And yet, we have a famous term for it that people love to use, often and incorrectly, even when you take the paper at face value!
The problem is that a headline that people want to believe is a very powerful force that can override replication and sample size and methodology problems. AI rots your brain follows behind social media rots your brain, which came after video games rot your brain, which preceded TV rots your brain. I’m sure TV wasn’t even the first. There’s a long tradition of publicly worrying about machines making us stupider.
tomrod · 3h ago
> The problem is that a headline that people want to believe is a very powerful force that can override replication and sample size and methodology problems. AI rots your brain follows behind social media rots your brain, which came after video games rot your brain, which preceded TV rots your brain. I’m sure TV wasn’t even the first. There’s a long tradition of publicly worrying about machines making us stupider.
Your comment reminded me of this (possibly spurious) quote:
>> An Assyrian clay tablet dating to around 2800 B.C. bears the inscription: “Our Earth is degenerate in these later days; there are signs that the world is speedily coming to an end; bribery and corruption are common; children no longer obey their parents; every man wants to write a book and the end of the world is evidently approaching.”[0]
There are newspaper clippings with the same headlines / lead-ins dating back as far as newspapers themselves, so this has been going on for at least 5 generations, which lends it a bit of credence.
People have also been complaining about politicians for hundreds of years, and the ruling class for millennia, as well. and the first written math mistake was about beer feedstock, so maybe it's all correlated.
hamburga · 3h ago
Socrates famously complained about literacy making us stupider in Phaedrus.
Which I believe still does have a large grain of truth.
These things can make us simultaneously dumber and smarter, depending on usage.
imchillyb · 2h ago
Socrates was correct. In his day memory was treasured. Memory was how ideas were linked, how quotes were attained, and how arguments were made.
Writing leads to the rapid decline in memory function. Brains are lazy.
Ever travel to a new place and the brain pipes up with: ‘this place is just like ___’? That’s the brain’s laziness showing itself. The brain says: ‘okay, I solved that, go back to rest.’ The observation is never true; never accurate.
Pattern recognition saves us time and enables us to survive situations that aren’t readily survivable. Pattern recognition leads to shortcuts that do humanity a disservice.
Socrates recognized these traits in our brains and attempted to warn humanity of the damage these shortcuts do to our reasoning and comprehension skills. In Socrates’ day it was not unheard of for a person to memorize their entire family tree, or memorize an entire treaty and quote from it.
Humanity has -overwhelmingly- lost these abilities. We rely upon our external memories. We forget names. We forget important dates. We forget times and seasons. We forget what we were just doing!!!
Socrates had the right of it. Writing makes humans stupid. Reduces our token limits. Reduces paging table sizes. Reduces overall conversation length.
We may have more learning now, but what have we given up to attain it?
boringg · 3h ago
I mean, there are clear problems with heavy exposure to TV and video games, and I have no doubt that there are similar problems with heavy AI use. Any adult with children can clearly see the addictive qualities and the behavioral fallout.
LocalPCGuy · 2h ago
This is a bad and sloppy regurgitation of a previous (and more original) source[1], and the headline and article explicitly ignore the paper authors' plea[2] to avoid using the paper to try to draw the exact conclusions this article says the paper draws.
The comments (some, not all) are also a great example of how cognitive bias can cause folks to accept information without doing a lot of due diligence into the actual source material.
> Is it safe to say that LLMs are, in essence, making us "dumber"?
> No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it
> Additional vocabulary to avoid using when talking about the paper
> In addition to the vocabulary from Question 1 in this FAQ - please avoid using "brain scans", "LLMs make you stop thinking", "impact negatively", "brain damage", "terrifying findings".
Yeah I feel like HN is being Reddit-ified with the amount of reposted clickbait that keeps making the front page :(
This study in particular has made the rounds several times as you said. The study measures impact of 18 people using ChatGPT just four times over four months. I'm sorry but there is no way that is controlling for noise.
I'm sympathetic to the idea that overusing AI causes atrophy but this is just clickbait for a topic we love to hate.
LocalPCGuy · 30m ago
Yup, I even found myself a bit hopeful that maybe it was a follow-up or new study and we'd get either more or at least different information. But that bit of hope is also an example of my bias/sympathy to that idea that it might be harmful.
It should be ok to just say "we don't know yet, we're looking into that", but that isn't the world we live in.
TheAceOfHearts · 4h ago
Personally, I don't think you should ever allow the LLM to write for you or to modify / update anything you're writing. You can use it to get feedback when editing, to explore an idea-space, and to find any topical gaps. But write everything yourself! It's just too easy to give in and slowly let the LLM take over your brain.
This article is focused on essay writing, but I swear I've experienced cognitive decline when using AI tools a bit too much to help solve programming-related problems. When dealing with an unfamiliar programming ecosystem it feels so easy and magical to just keep copy / pasting error outputs until the problem is resolved. Previously solving the problem would've taken me longer but I would've also learned a lot more. Then again, LLMs also make it way easier to get started and feel like you're making significant progress, instead of getting stuck at the first hurdle. There's definitely a balance. It requires a lot of willpower to sit with a problem in order to try and work through it rather than praying to the LLM slot machine for an instant solution.
jbstack · 3h ago
> I've experienced cognitive decline when using AI tools a bit too much to help solve programming-related problems. When dealing with an unfamiliar programming ecosystem it feels so easy and magical to just keep copy / pasting error outputs until the problem is resolved. Previously solving the problem would've taken me longer but I would've also learned a lot more.
I've had the opposite experience, but my approach is different. I don't just copy/paste errors, accept the AI's answer when it works, and move on. I ask follow up questions to make sure I understand why the AI's answer works. For example, if it suggests running a particular command, I'll ask it to break down the command and all the flags and explain what each part is doing. Only when I'm satisfied that I can see why the suggestion solves the problem do I accept it and move on to the next thing.
The tradeoff for me ends up being that I spend less time learning individual units of knowledge than if I had to figure things out entirely myself e.g. by reading the manual (which perhaps leads to less retention), but I learn a greater quantity of things because I can more rapidly move on to the next problem that needs solving.
mzajc · 3h ago
> I ask follow up questions to make sure I understand why the AI's answer works.
I've tried a similar approach and found it very prone to hallucination[0]. I tend to google things first and ask an LLM as a fallback, so maybe it's not a fair comparison, but what do I need an LLM for if a search engine can answer my question?
[0]: Just the other day I asked ChatGPT what a colon (':') after systemd's ExecStart= means. The correct answer is that it inhibits environment variable expansion, but it kept giving me convincing yet incorrect answers.
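(For illustration, a minimal unit-file sketch of the behaviour described in the systemd.service man page; the environment variable and command here are made up:)

    [Service]
    Environment=GREETING=hello
    # Without the ':' prefix, systemd expands $GREETING before running the command.
    # With the ':' prefix, the literal string "$GREETING" is passed as the argument.
    ExecStart=:/usr/bin/echo $GREETING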
jbstack · 2h ago
It's a tradeoff. After using ChatGPT for a while you develop somewhat of an instinct for when it might be hallucinating, especially when you start probing it for the "why" part and you get a feel for whether its explanations make sense. Having at least some domain knowledge helps too - you're more at risk of being fooled by hallucinations if you are trying to get it to do something you know nothing about.
While not foolproof, when you combine this with some basic fact-checking (e.g. quickly skim read a command's man page to make sure the explanation for each flag sounds right, or read the relevant paragraph from the manual) plus the fact that you see in practice whether the proposed solution fixes the problem, you can reach a reasonably high level of accuracy most of the time.
Even with the risk of hallucinations it's still a great time saver because you short-circuit the process of needing to work out which command is useful and reading the whole of the man page / manual until you understand which component parts do the job you want. It's not perfect but neither is Googling - that can lead to incorrect answers too.
To give an example of my own, the other day I was building a custom Incus virtual machine image from scratch from an ISO. I wanted to be able to provision it with cloud-init (which comes configured by default in cloud-enabled stock Incus images). For some reason, even with cloud-init installed in the guest, the host's provisioning was being ignored. This is a rather obscure problem for which Googling was of little use because hardly anyone makes cloud-init enabled images from ISOs in Incus (or if they do, they don't write about it on the internet).
At this point I could have done one of two things: (a) spend hours or days learning all about how cloud-init works and how Incus interacts with it until I eventually reached the point where I understood what the problem was; or (b) ask ChatGPT. I opted for the latter and quickly figured out the solution and why it worked, thus saving myself a bunch of pointless work.
majewsky · 2h ago
Does it work better when the AI is instructed to describe a method of answering the question, instead of answering the question directly?
For example, in this specific case, I am enough of a domain expert to know that this information is accessible by running `man systemd.service` and looking for the description of command line syntax (findable with grep for "ExecStart=", or, as I have now seen in preparing this answer, more directly with grep for "COMMAND LINES").
dpkirchner · 2h ago
Could you give an example of an ExecStart line that uses a colon? I haven't found any documentation for that while using Google and I don't have examples of it in my systemd unit files.
defgeneric · 56m ago
This is exactly the problem, but there's still a sweet spot where you can get up to speed quickly on technical areas adjacent to your specialty and not have small gaps in your own knowledge hold you back from the main task. I was quickly able to do some signal processing for underwater acoustics in C, for example, and don't really plan to become highly proficient in it. I was able to get something workable and move on to other tasks while still getting an idea of what was involved if I ever wanted to come back to it. In the past I would have just read a bunch of existing code.
giancarlostoro · 3h ago
When Firefox added autocorrect, and I started using it, I made it a point to learn what it was telling me was correct, so I could write more accurately. I have since become drastically better at spelling, I still goof, I'm even worse when pronouncing words I've read but never heard. English is my second language mind you.
I think any developer worth their salt would use LLMs to learn quicker and arrive at conclusions quicker. There are some programming problems I run into when working on a new project that I've run into before but cannot recall my last solution to, and it is frustrating; I could see how an LLM could help bring that resolution back quicker. Sometimes it's 'first time setup' stuff that you have not had to do for like 5 years, so you forget, and maybe you wrote it down on a wiki two jobs ago, but an LLM could help you remember.
I think we need to self-evaluate how we use LLMs so that they help us become better Software Engineers, not worse ones.
Manik_agg · 3h ago
I agree. Asking an LLM to write for you is being lazy, and it also results in sub-par output (don't know about brain-rot).
I also like preparing a draft and using an LLM for critique; it helps me figure out some blind spots or ways to articulate better.
lazide · 3h ago
I’d consider it similar to always using a GPS/Google Maps/Apple Maps to get somewhere without thinking about it first.
It’s really convenient. It also similarly rots the parts of the brain required for spatial reasoning and memory for a geographic area. It can also lead to brain rot with decision making.
Usually it’s good enough. Sometimes it leads to really ridiculous outcomes (especially if you never double check actual addresses and just put in a business name or whatever). In many edge cases depending on the use case, it leads to being stuck, because the maps data is wrong, or doesn’t have updated locations, or can’t consider weather conditions, etc. especially if we’re talking in the mountains or outside of major cities.
Doing it blindly has led to numerous people dying by stupidly getting themselves into more and more dumb situations.
People still got stuck using paper maps. Sometimes they even died. It was much rarer and people were more aware they were lost, instead of persisting thinking they weren’t. So different failure modes.
Paper maps were very inconvenient, so people dealt with it using more human interaction and adding more buffer time. Which had its own costs.
In areas where there are active bad actors (Eastern Europe nowadays, many other areas in that region sometimes) it leads to actively pathological outcomes.
It is now rare for anyone outside of conflict zones to use paper maps except for specific commercial and gov’t uses, and even then they often use digitized ‘paper’ maps.
planetmcd · 2h ago
This article was probably written by AI, because no one with half a brain could read the study and come to the same conclusions.
Basically, participants spent less than half an hour, 4 times, over 4 months, writing some bullcrap SAT type essay. Some participants used AI.
So to accept the premise of the article, using an AI tool once a month for 20 minutes caused noticeable brain rot. It is silly on its face.
What the study actually showed is that people don't have an investment in, or strong memory of, output they didn't produce. Again, this is a BS essay written (mostly by undergrads) in 20 minutes, so not likely to be deep in any capacity. So to extrapolate: if you have a task that requires you to understand the output, you are less likely to have a grasp of it if you didn't help produce the output.
This would also be true of work some other person did.
sudosteph · 3h ago
Meanwhile my main use cases for AI outside of work:
- Learning how to solder
- Learning how to use a multimeter
- Learning to build basic circuits on breadboards
- learning about solar panels, mppt, battery management systems, and different variations of li-ion batteries
- learning about LoRa band / meshtastic / how to build my own antenna
And every single one of these things I've learned I've also applied practically to experiment and learn more. I'm doing things with my brain that I couldn't do before, and it's great. When something doesn't work like I thought it would, AI helps me understand where I may have gone wrong, I ask it a ton of questions, and I try again until I understand how it works and how to prove it.
You could say you can learn all of this from YouTube, but I can't stand watching videos. I have a massive textbook about electronics, but it doesn't help me break down different paths to what I actually want to do.
And to be blunt: I like making mistakes and breaking things to learn. That strategy works great for software (not in prod obviously...), but now I can do it reasonably effectively for cheap electronics too.
nancyminusone · 2h ago
As someone who does these things, I am curious to know how and why you would choose AI.
Working these from text seems to be the hardest way I could think to learn them. I've yet to encounter a written description as to what it feels like to solder, what a good/bad job actually looks like, etc. A well shot video is much better at showing you what you need to do (although finding one is getting more and more difficult)
sudosteph · 2h ago
I just process text information better. Videos are kind of overstimulating and often have unrelated content, and I hate having to rewind back to a part I need while I'm in the middle of something. With LLMs I can get a broad overview of what I'm doing, tell it what materials I already have on hand, and get specific ideas for how to practice. Soldering is probably one of the harder ones to learn by text, but the descriptions of the techniques to use were actually really understandable (use flux, be sure the tip is tinned, touch the pad with the tip to warm it up a little, touch again with the iron on one side of the pad and insert the solder on the other side so it gets drawn in, pull away (timing was trial and error)). And then I'd upload a picture of what I did for review and it would point out the ones that had issues and what likely went wrong to cause it (ex: solder sticking to the top of the iron and not the pad), and I would keep practicing and test that it worked and looked like what was described. It may not be the ideal technique or outcome, but it unblocked me relatively quickly so I could continue my project.
Being able to ask it stupid questions and edge cases is also something I like with LLMs. For example, I would propose a design for something (ex: a usb battery pack w/ lifepo4 batts that could charge my phone and be charged by solar at the same time), it would say what it didn't like about my design and counter with its own, then I would try to change aspects of its design to see "what would happen if .." and it would explain why it chose a particular component or design choice, what my change would do, the trade-offs and risks, other paths to building it, etc. Those types of interactions are probably the best for me actually understanding things; they help me understand limitations and test my assumptions interactively.
stripe_away · 3h ago
and to be blunt, I learned similar things building analog synths, before the dawn of LLMs.
Like you, I don't like watching videos. However, the web also has text, the same text used to train the LLMs that you used.
> When something doesn't work like I thought it would, AI helps me understand where I may have went wrong, I ask it a ton of questions, and I try again until I understand how it works and how to prove it.
Likewise, but I would have to ask either the real world or written docs.
I'm glad you've found a way to learn with LLMs. Just remember that people have been learning without LLMs for a long time, and it is not at all clear that LLMs are a better way to learn than other methods.
sudosteph · 3h ago
The asking people part was the hard thing for me, always has been. That honestly was the missing piece for me. I absolutely agree that written docs and online content are sufficient for some people, that's how I learned Linux and sysadmin stuff, but I tried on and off to get into electronics for years that way and never got anywhere.
I think the problem was that all of the getting started guides didn't really solve problems I cared about. They're just like "see, a light! isn't that neat?" and then I get bored and impatient and don't internalize anything. The textbooks had theory, but I would forget most of it before I could use it and actually learn. Then when I tried to build something actually interesting to me, I didn't actually understand the fundamentals, it always failed, Google didn't help me find out why because it could be a million things, and no human in my life understands this stuff either, so I would just go back to software.
It could be LLMs are at least possibly better for certain people to learn certain things in certain situations.
chaps · 3h ago
> However, the web also has text, the same text used to train the LLMs that you used.
The person you're responding to isn't denying that other people learn from those. But they're explicit that having the text isn't helpful either:
> I have a massive textbook about electronics, but it doesn't help me break down different paths to what I actually want to do.
defgeneric · 52m ago
The physicality of having to actually do things in the real world slows things down to the rate at which our brains actually learn. The "vibe coding" loop is too fast to learn anything, and ends up teaching your brain to avoid the friction of learning.
amelius · 3h ago
Yeah, if you're using LLMs like an apprentice who asks their master, then there's nothing wrong with that, imho.
fxwin · 3h ago
Same here. I've been working through some textbooks without solutions for the contained exercises, and ChatGPT has been invaluable for getting feedback on solutions and hints when I'm stuck
aprilthird2021 · 2h ago
Cool, but most people will get brain-rotted by this. It's the same way we constantly talk about how social media is probably bad for people, and then some commenter comes along and says he's not addicted and there's no other way he could communicate with his high school friends who live overseas and know about their lives. Not everyone will get only the positives out of any technology.
shironandonon_ · 1m ago
aren’t those with higher intellect at greater risk of depression?
I’m going to use 2x the amount of AI that I was planning to use today.
epolanski · 2h ago
I can't help but think this has to be tied to _how_ AI is used.
I actively use AI to research, question, and argue a lot, and this pushes me to reason a lot more than I normally would.
Today's example:
- recognize docs are missing for a feature
- have AI explore the code to figure out what's happening
- back and forth for hours trying to find how to document, rename, refactor, improve, write mermaid charts, stress over naming to be as simple as possible
The only step I'm doing less of is the exploration/search one, because an LLM can process a lot more text than I can at the same time. But for every other step I am pushing myself to think more, and more profoundly, than I would without an LLM, because gathering the same amount of information would've been too exhausting to proceed with this.
Sure, it may have spared me from digging into mermaid too, for what it's worth.
So yes, lose some, win others, albeit in reality no work would've been done at all without the LLM enabling it. I would've moved to another mundane task such as "update i18n formatting of dates for Swiss German customers".
gandalfgeek · 2h ago
The coverage of this has been so bad that the authors have had to put up an FAQ[1] on their website, where the first question is the following:
Is it safe to say that LLMs are, in essence, making us "dumber"?
No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "brain damage", "passivity", "trimming" , "collapse" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it.
I can't say I'm surprised by this. The brain is, figuratively speaking, a muscle. Learning through successes and (especially) failures is hard work, though not without benefit, in that the trials and exercises your brain works through exercise the 'muscle', making it stronger.
Using LLMs to replace the effort we would've otherwise endured to complete a task short-circuits that exercising function, and I would suggest it is potentially addictive because it's a near-instant reward for little work.
It would be interesting to see a longitudinal study on the effect of LLMs on collective attention spans and academic scores where testing is conducted with pen and paper.
onlyrealcuzzo · 3h ago
Sounds bullish for AI.
It's like a drug. You start using it, and think you have super powers, and then you've forgotten how to think, and you need AI just to maybe be as smart as you were before.
Every company will need enterprise AI solutions just to maybe get the same amount of productivity as they got before without it.
Was going to comment the same but you beat me to it!
On that note, reading the ChatGPT-esque summary in the linked article gave me more brain damage than any AI I've used so far
causal · 2h ago
The irony. It isn't even a new study. Way too much has been written about this flawed study when we should just be doing more studies.
jennyholzer · 4h ago
There are dozens of duplicates for pro-AI dreck, so this post should stand.
ayhanfuat · 4h ago
We can at least change the link to the actual paper instead of a vaccine denier's AI generated summary.
causal · 2h ago
Instead of trying to balance dreck can we just... not upvote any dreck
fortyseven · 4h ago
Being anti-AI drivel is completely fine though.
eviks · 4h ago
No, vibe science is not so powerful as to be able to determine "long-term cognitive harm", especially when such "technical wonders" as "measurable through EEG brain scans" are relied upon.
> 83.3% of LLM users were unable to quote even one sentence from the essay they had just written
Not sure why you need to wire up an EEG; it's pretty obvious that they simply did _not_ write the essay, the LLM did it for them, and they likely didn't even read it, so there is no surprise that they don't remember what didn't properly pass through their own thinking apparatus.
matwood · 4h ago
I write all the time and couldn't quote anything off hand. What I can talk about are the ideas in the writing. I find LLMs useful as an editor. Here's what I want to say, is it clear or are there better words, etc... And then I never take the output blindly, and depending on how important the writing is I may go back and forth line by line.
The idea that I would say 'write an essay on X' and then never look at the output is kind of wild. I guess that's vibe writing instead of vibe coding.
infecto · 4h ago
Everyone is different. I don't have a good grasp on the distribution of HN readers these days, but I know that as a heavy user of LLMs I am not sold on this for myself. I am asking more questions than ever. I use it for proofreading and editing. But I can see the risk as a software engineer. I really appreciate tools like Cursor: I give it bite-size chunks and review. Using tools like Claude Code, though, it becomes a black box and I no longer feel at the helm of the ship. I could see that if you outsourced all thinking to an LLM there could be consequences. That said, I am not sold on the paper and suspect it's mostly hyperbole.
Taek · 4h ago
Cognitive decline is a broad term, and a research paper could claim "decline" if even a single cognitive metric loses strength.
When writing was invented, societies started depending on long form memorization less, which is a cognitive "decline". When calculators were invented, societies started depending on mental math less, which is a cognitive "decline".
I'm sure LLMs are doing the same thing. People aren't getting dumber, they are just outsourcing tasks more, so that their brains spend more time on the tasks that can't be outsourced.
yuehhangalt · 3h ago
My concern is more about the tasks that can't or won't be outsourced.
People who maintain a high level of curiosity or have a drive to create things will most assuredly benefit from using AI to outsource work that doesn't support those drives. It has the potential to free up more time for creative endeavors or those that require deeper thinking. Few would argue the benefit there.
Unfortunately, anti-intellectualism is rampant, media literacy is in decline, and a lot of people are content to consume content and not think unless they absolutely have to. Dopamine is a helluva drug.
If LLMs reduce the cognitive effort at work, and the people go home to doom scroll on social media or veg out in front of their streaming media of choice, it seems that we're heading down the path of creating a society of mindless automatons. Idiocracy is cited so often today that I hate to do so myself, but it seems increasingly prescient.
Edit: I also don't think that AI will enable a greater work-life harmony. The pandemic showed that a large number of jobs could effectively be done remotely. However, after the pandemic, there was significant "Return to Office" movement that almost seemed like retribution for believing we could achieve a better balance. Corporations won't pass on the time savings to their employees and enable things like 4-day work weeks. They'll simply expect more productivity from the employees they have.
IAmBroom · 3h ago
Absolutely true.
Also, domesticated dogs show indications of lower intelligence and memory than wolves. They don't have to plan complex strategies to find and kill food, anymore.
Taek · 3h ago
The difference between us and dogs is that we DO still need to make a salary. Dogs live in the lap of luxury where their needs are guaranteed to be handled.
But humans need jobs, and jobs need to capture value from society. So we do actually still have to stay sharp, whatever form "sharp" takes.
pessimizer · 2h ago
You and dogs have the same job, which is to please the boss. The boss then takes care of you like a child, either with a paycheck (with which you can pay servants to supply your earthly needs), or directly if you're a dog and lack both thumbs and pockets to hold a wallet or a phone. A domestic dog would die left alone in a forest, about two or three weeks after you would.
If you're an entrepreneur, your job is to please the customer and to squeeze your vendors and employees. You still take little to no part in directly taking care of yourself, except as a hobby. Unless you want to be congratulated for wiping your own ass or lifting a fork to your mouth.
infecto · 3h ago
This is super interesting and I had not thought about it like that!
ceejayoz · 4h ago
> I am asking more questions than ever.
Wouldn't that be the expected result here? Less knowledge, more questions?
infecto · 4h ago
That’s one interpretation, but I think there’s a distinction between “asking more questions because I’ve forgotten things” and “asking more questions because I’m exploring further.”
When I use LLMs, it’s less about patching holes in my memory and more about taking an idea a few steps further than I otherwise might. For me it’s expanding the surface area of inquiry, not shrinking it. If the study’s thesis were true in my case, I’d expect to be less curious, not more.
Now, that said, I also have a healthy dose of skepticism about all output, but I find that in the general case I can at least explore my thoughts further than I might have in the past.
rwnspace · 4h ago
In my personal experience new knowledge tends to beget questions.
xnorswap · 4h ago
> I am asking more questions than ever.
I don't have a dog in this fight, but "asking more questions" could be evidence of cognitive decline if you're having to ask more questions than ever!
It's easy to twist evidence to fit biases, which is why I'd hold judgement until better evidence comes through.
IAmBroom · 3h ago
Well, that's certainly a take.
But if I'm teaching a class, and one student keeps asking questions that they feel the material raised, I don't tend to think "brain damage". I think "engaged and interested student".
charlie-83 · 3h ago
Not OP, but there's a difference between needing to ask more questions and asking more questions because it's easier now.
Personally, I find myself often asking AI about things I wouldn't have been bothered to find out about before.
For example, I've always noticed these funny little grates on the outside of houses near me and wondered what they are. Googling "little grates outside houses" doesn't help at all. Give AI a vaguish description and it instantly tells you they are old boot scrapers.
infecto · 3h ago
Haha you nailed it. Walking around and experiencing the world I can now ask a vague question and usually find an answer.
Maybe there is a movie in the back of my head, or a song. Typical search engine queries would never find it. I can give super vague references to an LLM and, with search enabled, get an answer that's correct often enough.
danenania · 2h ago
The ability to keep following the thread and interrogating the answers is also very valuable. You never have to accept an answer you only half understand.
infecto · 3h ago
Fair point, though I think there’s a difference between “questions out of confusion” and “questions out of curiosity.”
If I’m constantly asking “what does this mean again?” that would signal decline. But if I’m asking “what if I combine this with X?” or “what are the tradeoffs of Y?” that feels like the opposite: more engagement, not less.
That’s why I’m skeptical of blanket claims from one study, the lived experience doesn’t map so cleanly.
ergonaught · 2h ago
No idea whether this holds up, but the human body is all about conditioning and maximizing energy efficiency, so it should at least be unsurprising if true.
My vehicle has a number of self-driving capabilities. When I used them, my brain rapidly stopped attending to the functions I'd given over, to the extent that there was a "gap" before I noticed it was about to do the wrong thing. On resumption of performing that work myself, it was almost as if I had forgotten some elements of it for a moment while my brain sorted it out.
No real reason to think that outsourcing our thinking/writing/etc will cause our brains to respond any differently. Most of the "reasoned" arguments I see against that idea seem based on false equivalences.
Gareth321 · 2h ago
This is why I am not so concerned. I am old enough to remember when teachers thought that outsourcing calculations to calculators would atrophy my brain. They said the same about computers. Then the internet and Wikipedia. On one hand, yes, I am slower at calculating things by hand. On the other, it doesn't matter anymore. I am much faster at getting things accomplished. AI might just be the latest way in which humans are exploring transhumanism. Perhaps we are irreversibly altering our brains. I'm just not convinced that's a terrible thing.
Insanity · 1h ago
So, logically, I know this is the case. I can feel it happen to myself when I use an LLM to generate any kind of work. Although I rarely use it for coding, as my job is at a higher level (designs etc), if I have the LLM write part of a trade-off analysis, I'll remember it less and be less engaged.
What's really bothering me though, is that I enjoy my job less when using an LLM. I feel less accomplished, I learn less, and I overall don't derive the same value out of my work.. But, on the flip side, by not adopting an LLM I'll be slower than my peers, which then also impacts my job negatively.
So it's like being stuck between a rock and a hard place - I don't enjoy the LLM usage but feel somewhat obligated to.
Brian_K_White · 2h ago
It's probably an effect of the transition period where today people are using ais to meet work expectations and metrics of yesterday.
At some point ai will probably be like calculators where once everyone is using them for everything, that will be a new and different normal from today, and the expectations and the way of judging quality etc will be different than today.
Once everyone is doing the same one weird trick as you, it's no longer useful. You can no longer pretend to be a developer or an artist etc.
There will still be a sea of bottom-feeders doing the same thing, but they will just be universally recognized as cheap junk. And that's actually fine, kinda. There is a place and a use for cheap junk that just barely does something, the same as a cheap junky screwdriver or whatever.
kawfey · 2h ago
The "your brain on ChatGPT" is giving the same feel as DARE's "your brain on drugs" campagign, and we now see how that went. It immediately loses any credibility for me.
It wasn't immediately clear what they actually had the subjects do. It seems like they wrote an essay, which...duh? I would bet brain activity would be similar -- if not identical -- as an LLM user if the subjects were asked to have the other cohorts to write their essay.
causal · 2h ago
Just look at this comment section - one flawed headline is all it takes to get hundreds of people writing essays about how they totally understand how the brain works and knew it all along.
colincooke · 2h ago
It is worth noting that this study was tbh pretty poorly performed from a psychology/neuroscience perspective and the neuro community was kind of roasting their results as uninterpretable.
Their trial design and interpretation of results are not properly done (i.e. they are making an unfair comparison of LLM users to non-LLM users), so they can't really make the kind of claims they are making.
This would not stand up to peer review in its current form.
I'm also saying this as someone who generally does believe these declines exist, but this is not the evidence it claims to be.
Shank · 2h ago
> It is worth noting that this study was tbh pretty poorly performed from a psychology/neuroscience perspective and the neuro community was kind of roasting their results as uninterpretable.
Do you have links or citations to people saying these claims?
Comes down to:
- Self selection bias
- Trial design
- Dubious interpretations of neural connectivity
variadix · 3h ago
Seems obvious. If you don’t use it you lose it. Same thing happened with mental arithmetic, remembering phone numbers, etc. Letting an LLM do your thinking will make you worse at thinking.
NiloCK · 3h ago
Every augmentation is also an amputation.
Calculators reduced our capabilities in mental and pencil-paper arithmetic. Graphing calculators later reduced our capacity to sketch curves, and in turn, our intuition in working directly with equations themselves. Power tools and electric mixers reduced our grip strength. Cheap long distance plans and electronic messaging reduced our collective abilities in long-form letter writing. The written word decimated the population of bards who could recite Homer from memory.
It's not that there aren't pitfalls and failure modes to watch out for, but the framing as a "general decline" is tired, moralizing, motivated, clickbait.
add-sub-mul-div · 2h ago
> Calculators reduced our capabilities in mental and pencil-paper arithmetic.
And now people make bad decisions in their daily life about money etc. Most people can't do the math in their head but they also aren't using their calculator at the grocery store to avoid being taken advantage of. The math doesn't get done.
The lesson isn't that we survived calculators, it's that they did dull us, and our general thinking and creativity are about to get likewise dulled.
stevenjgarner · 41m ago
This MIT study does not seem to address whether AI use causes true cognitive decline, or simply shifts the role of cognition from "doing the task" to "managing the task"?
DrNosferatu · 4h ago
If you blindly trust it instead of using it as an iterative tool, I guess…
But didn’t pocket calculators present the same risk / panic?
diddid · 4h ago
Graphing calculators did, which is why in a lot of math classes they got banned. If your calculator can solve for x, you won’t spend time learning how to. The best math classes usually do without calculators focusing on concepts and skip numbers you’d need a calculator for.
boesboes · 4h ago
This. I was allowed to use the graphing mode to do integrals and differentials. It made high school easy, but in uni it turned out I had zero math skills. Had to switch studies.
wiredfool · 4h ago
There’s a narrow band of math that’s amenable to pocket calculators. When used in that band, they can repeatably return the correct answer.
bell-cot · 4h ago
The cognitive decline described here sounds far broader than just getting rusty at arithmetic.
jennyholzer · 4h ago
When I enter 5 x 5 on a pocket calculator, I always get 25
flanbiscuit · 2h ago
> and diminished sense of ownership over their own writing.
Anecdotally, this is how I felt when I tried out AI agents to help me write code (vibe coding). I always review the code and I ask it to break it down into smaller steps but because I didn't actually write and think of the code myself, I don't have it all in my brain. Sure, yes I can spend a lot of time really going through it and building my mental model but it's not the same (for me).
But this is also how I felt when I managed a small team once. When you start to manage more and code less, you have to let go of the fact that you have more intimate knowledge of the codebase and place that trust in your team. But at least you have a team of humans.
AI agentic coding is like shifting your job from developer to manager. Like the article that was posted yesterday said: 'treating AI like a "junior developer who doesn't learn"' [1,2].
One good thing I like about AI is that it's forcing people to write more documentation. No more complaining about that.
AI solves the 2-sigma problem when used correctly.
AI is extremely neurodegenerative when used incorrectly.
The people using it as a research assistant to discover quality sources they can dive into, and as a tutor while working through those resources, are getting smarter.
The people using it as an “oracle made from magic talking sand” are getting dumber.
To be fair, the same thing is true of the web in general, but not to the extreme I’ve been seeing with AI.
I’m predicting the bell curve of IQ is going to flatten quite a bit over the next decade, as people shift two sigma in both directions.
pjio · 4h ago
First step out of this mess: Use AI only to proof read or get a second opinion, but not to write the whole thing.
bookofjoe · 4h ago
That ship has sailed.
>Everyone Is Cheating Their Way Through College. ChatGPT has unraveled the entire academic project.
It's as if somebody finds it shocking that people are generally lazy. Then you have the other extreme group, deniers. "I work more than ever!", "I ask even more questions!" and so on here and elsewhere.
Sure you do, and maybe it's really an actual benefit for ya. Not for most though. For young folks still going through education, this is devastating. If I didn't have kids I wouldn't care, less quality competition at work, but I do (too young to be affected by it now, and by the time they are allowed to use these, frameworks for use and restrictions will be in place already).
But since maybe 30% of folks here are directly or indirectly dependent on LLMs to be pushed down every possible throat and then some more, I expect much more denial and resistance to critique of their little pets or investments.
sudosteph · 3h ago
I'm one of the people who find LLMs extremely helpful from a learning perspective, but to be perfectly honest, I've met the children of complete "luddites" (no tablets, internet at home on a timer for schoolwork, not allowed phones until 16, home schooled, house filled with a million books) and they honestly were some of the more intelligent, well-read, and thoughtful young people I've met.
LLMs may end up being both educationally valuable in certain contexts for certain users, and totally unsuitable for developing brains. I would err towards caution for young minds especially.
charlie-83 · 3h ago
It feels like all this is because the point of school/college/university is just to get a piece of paper rather than to learn skills. Why wouldn't you get ChatGPT to write your essay when your only goal is to get a passing grade.
My optimistic take is that the rise of AI in education could cause more workplaces to move away from "must have xyz degree" and actually determine if the candidate has the skills needed.
jbstack · 3h ago
I agree with this in principle, but the problem is what happens to the in-between generation that cheats their way towards getting the piece of paper before the world moves on to a better way? At least for previous generations you got the piece of paper and you acquired some skills/knowledge.
For this reason, I don't feel as optimistic as you do. I worry instead that equality gaps will widen significantly: there will be the majority which abuses AI and graduates with empty brains, and there will be the minority who somehow manage to avoid doing that (e.g. lucky enough to have parents with sufficient foresight to take preventative measures with their children).
"That’s because the Chinese Communist Party knows their youth learn less when they use artificial intelligence. Surely, President Xi Jinping is reveling in this leg up over American students, who are using AI as a crutch and missing out on valuable learning experiences as a result.
It’s just one of the ways China protects their youth, while we feed ours into the jaws of Big Tech in the name of progress."
IAmBroom · 3h ago
Then there's this new law in China, which sounds amazing - informing, not censoring.
Let's say I'm a writer of no skill who still wants attention. I could spend years learning to write better, but I still might not get any attention.
Or I could use AI to write something today. It won't be all that interesting, because AI still can't write all that well, but it may be better than I can do on my own, and I can get attention today.
If you care about your own growth (or even not dwindling) as a human, that's a trap. But not everyone cares about that...
Bluecobra · 3h ago
This is exactly how I use AI at work: to quickly generate funny meme images/inside jokes for a quick chuckle. I'm no artist and probably will never be one. My digital art skills amount to drawing stick figures in MS Paint.
TYPE_FASTER · 3h ago
I used to know a bunch of phone numbers by heart. I haven't done that since I got a cellphone. Has that had an impact on my ability to memorize things? I have no idea.
I have recently been finding it noticeably more difficult to come up with the word I'm thinking of. Is this because I've been spending more time scrolling than reading? I have no idea.
isodev · 3h ago
An AI is telling me these could be symptoms of the onset of a degenerative neurological condition. Is it true? I have no idea.
blackqueeriroh · 2h ago
I’d encourage folks to listen to this podcast[1] or read the transcript which is done by two incredibly respected people, Dr. Cat Hicks, a psychologist who studies software teams, and Dr. Ashley Juavinett, who is a practicing and teaching neuroscientist. They note the many flaws in the study and discuss what actually good brain research would look like.
Chatting with vibe coders on reddit, I can definitely tell... although my hunch is that a lot of people "not smart" enough to learn to program will be entering the field calling themselves programmers.
I think maybe they are project managers, since the programming is outsourced to AI, but the idea doesn't seem to catch on there.
mansilladev · 4h ago
“…our cognitive abilities and creative capacities appear poised to take a nosedive into oblivion.”
Don’t sugarcoat it. Tell us how you really feel.
jennyholzer · 4h ago
I think developers who use "AI" coding assistants are putting their careers at risk.
dguest · 4h ago
And here I'm wondering if I'm putting my career at risk by not trying them out.
Probably both are true: you should try them out and then use them where they are useful, not for everything.
Taek · 3h ago
HN is full of people who say LLMs aren't good at coding and don't "really" produce productivity gains.
None of my professional life reflects that whatsoever. When used well, LLMs are exceptional at putting out large amounts of code of sufficient quality. My peers have switched entire engineering departments to LLM-first development and are reporting that the whole org is moving 2x as fast even after they fired the 50% of devs who couldn't make the switch and didn't hire replacements.
If you think LLM coding is a fad, your head is in the sand.
010101010101 · 3h ago
Yesterday I used Warp’s LLM integrations to write two shell scripts that would have taken me longer to author myself than to do the task manually. Of the three options, this was the fastest by a wide margin.
For this kind of low stakes, easily verifiable task it’s hard to argue against using LLMs for me.
dguest · 3h ago
Right now I'm mostly an "admin" coder: I look at merge requests and tell people how to fix stuff. I point them to LLMs a lot too. People I know who are actually writing a lot of code are usually saying LLMs are nice.
bgwalter · 3h ago
The instigators say they were correct and fired their political opponents. Unheard of!
I have no doubt that volumes of code are being generated and LGTM'd.
mooxie · 3h ago
Agreed. I work for a tiny startup where I wear multiple hats, and one of them is DevOps. I manage our cloud infra with Terraform, and anyone who's scaled cloud infrastructure from a <10 head count company to a successful 500+ company knows how critical it can be to get a handle on the infrastructure early. It's basically now or never.
It used to take me days or even multiple sprints to complete large-scale infrastructure projects, largely because of having to repeatedly reference Terraform cloud provider docs for every step along the way.
Now I use Claude Code daily. I use an .md to describe what I want in as much detail as possible and with whatever idiosyncrasies or caveats I know are important from a career of doing this stuff, and then I go make coffee and come back to 99% working code (sometimes there are syntax errors due to provider / API updates).
I love learning, and I love coding. But I am hired to get things done, and to succeed (both personally and in my role, which is directly tied to our organization's security, compliance, and scalability) I can't spend two weeks on my pet projects for self-edification. I also have to worry about the million things that Claude CAN'T do for me yet, so whatever it can take off of my plate is priceless.
I say the same things to my non-tech friends: don't worry about it 'coming for your job' yet - just consider that your output and perceived worth as an employee could benefit greatly from it. If it comes down to two awesome people but one can produce even 2x the amount of work using AI, the choice is obvious.
010101010101 · 4h ago
Developers who don’t understand how the most basic aspects of systems they work on function are a dime a dozen already, I’m not sure LLMs change the scale of that problem.
baq · 4h ago
fighter jet pilots who use the ejection seat are putting their careers at risk, but so are the ones who don't use it when they should.
bookofjoe · 4h ago
>F-35 pilot held 50-minute airborne conference call with engineers before fighter jet crashed in Alaska
I would say that the careers of everyone who views themselves as writing code for a living are already at great risk. So if you're in that situation, you have to see how to go up (or down) the ladder of abstraction, and getting comfortable with using GenAI is possibly a good way to do that.
flanked-evergl · 4h ago
The future is increased productivity. If someone can outproduce you if they use AI, then they will take your job.
tmcb · 3h ago
This is industrial-grade FOMO. They will take the jobs of the first handful of people. The moment it is obvious that LLMs are a productivity booster, people will learn how to use them, just as has happened with every other technology before.
boesboes · 4h ago
After working with claude code for a few months, I am not worried.
falcor84 · 3h ago
What does that mean? If you're still paying for a Claude Code, you are supposedly getting increased productivity, right? Or otherwise, why are you still using it?
lexandstuff · 3h ago
I find it useful. A nice little tool in the toolkit: saves a bunch of typing, helps to over come inertia, helps me find things in unfamiliar parts of the codebase, amongst other things.
But for it to be useful, you have to already know what you're doing. You need to tell it where to look. Review what it does carefully. Also, sometimes I find particularly hairy bits of code need to be written completely by hand, so I can fully internalise the problem. Only once I've internalised hard parts of the codebase can I effectively guide CC. Plus there's so many other things in my day-to-day where next token predictors are just not useful.
In short, it's useful, but no one's losing a job because it exists. Also, the idea of having non-experts manage software systems at any moderate and above level of complexity is still laughable.
falcor84 · 2h ago
I don't think the concern is that non-experts would manage large software systems, but that experts would use it to manage larger software systems on their own before needing to hire additional devs, and in that way reduce the number of available roles. I.e. it increases the "pain threshold" before I would say to myself "it's worth the hassle to hire and onboard another dev to help with this".
unethical_ban · 3h ago
Were accountants that adopted Excel foolish?
Like any new tool that automates a human process, humans must still learn the manual process to understand the skill.
Students should still learn to write all their code manually and build things from the ground up before learning to use AI as an assistant.
micromacrofoot · 3h ago
everyone's also telling us that if we don't use AI we're putting our careers at risk, and that AI will eventually take our jobs
personally I think everyone should shut up
gandalfgeek · 2h ago
The title of the study is provocatively framed and the actual findings don't live up to it. I made a short video explaining it-- https://www.youtube.com/watch?v=hLDCi0VwyiQ
sigbottle · 3h ago
obviously obvious caveats like, intentional use is good, lazy use is bad, etc.
I've found it both helpful and dangerous. It's great for expanding scope, obviously, like a greater search engine.
But I've also significantly noticed further some of the "harmful patterns" I guess that I would not have noticed about... myself? For example, AI is way too eager to "solve things" when given a prompt, even if you give it an abstract one. It's unable to take a step back and just.... think?
And hey, I notice that I do that too! Lol.
It's helped me realize more refined "stages" of thinking I guess, even beyond just "plan" and "solve".
But for sure a lot of the time I'm just lazy and ask AI to just "go do it" and turn off critical thinking, hoping that it can just 1 shot the problem instead of me breaking it down. Sometimes it genuinely works. Often it doesn't.
I think if I stay way more intentional with my thinking, I can use it to good use. Which will probably reduce AI usage - but it's the first principles of real critical thinking, not the usage of AI.
---
These kinds of studies remind me of when my parents told me "stop getting addicted to games" as a kid. Sure, anyone can observe effects, it takes real brains to really try and understand the first principles effects. Addiction went away in a flash once I understood the principles, lol.
whatamidoingyo · 3h ago
I've been seeing people use LLMs to reply to people on Facebook. Like, they'll just be having a general discussion, and then reply as ChatGPT. I don't know if they think it makes them look smart; I think it has the complete opposite effect.
Not many people can perform mental arithmetic beyond single-digit numbers. Just plug it into a calculator...
We're at the point of people plugging their thoughts into an LLM and having it do the work for them... what's going to happen to thinking?
rekrsiv · 2h ago
I believe this is true for literally anything that replaces practice. We're meant to build muscle memory for things through repetition, but if we sidestep the repetition by farming it out to another process, we never build muscle memory.
badbart14 · 4h ago
I remember this paper when it came out a couple months ago. Makes a lot of sense, the use of tools like ChatGPT essentially offshore the thinking processes in your brain. I really like the analogy to time under tension they talk about in https://www.theringer.com/podcasts/plain-english-with-derek-... (they also discuss this study and some of the flaws/results with it)
siliconc0w · 2h ago
Isn't it obvious that you use your brain less to generate an essay with AI vs writing it manually?
I think what you'd want to measure is someone completing a task manually and someone completing n times the tasks with a copilot.
digitcatphd · 3h ago
So users are more detached from their work? How does this correspond with cognitive decline? Wouldn’t it need to be cross referenced in other areas beside the task at hand? Seems a bit of a headline grabbing study to me. Personally I find thinking with an LLM helps me take a more structured and unbiased approach to my thought process
briandw · 2h ago
This is the standard response to any new technology. Socrates called books the death of knowledge, in the 19th century there was a moral panic about girls reading novels, etc. etc.
lif · 3h ago
What are the costs of convenience? Surely most LLM use by consumers leans into that heavily.
jugg1es · 2h ago
All of the nay-sayers in the comments here are thinking about this from the POV of a person who reached intellectual maturity without LLMs and now use it as a force multiplier, and rightly so.
However, I think that take is too short-sighted and doesn't take into account the effect that these products have on minds that have not yet reached maturity. What happens when you've been using ChatGPT since grade school and have effectively offloaded all the hard stuff to AI through college? Those people won't be using it as a force multiplier - they will be using it to perform basic tasks. Ray-Ban sells glasses now with LLMs built in, with a camera and microphone, so you can constantly interact with it all day. What happens when everyone has one of these devices and uses it for everything?
tuesdaynight · 2h ago
I believe that they will solve problems in different ways, just like we solve problems different from our ancestors because of the internet.
vonneumannstan · 3h ago
No different than Socrates complaining about students using writing ruining their memory.
babycheetahbite · 4h ago
Does anyone have any suggestions for approaches they are taking to avoid the potential for this? Something I did recently in ChatGPT's 'Instructions' box (so far I have only used ChatGPT) is requesting it to "Make me think through the problem before just giving me the answer." and a few other similar notes.
deadbabe · 3h ago
At the very least, don't use LLMs tightly integrated into your IDE. Keep them at arm's length; use them the way you use a search engine.
teekert · 4h ago
Anybody who has tried to shortcut themselves into a report on something using an LLM, and was then asked to defend the plans contained within it knows that writing is thinking. And if you outsource the writing, you do less thinking and with less thinking there is less understanding. Your mental model is less complete, less comprehensive.
I wouldn't call it "cognitive decline", more "a less deep understanding of the subject".
Try solving bugs from your vibe coded projects... It's painful; you haven't learned anything while building it, and as a result you don't fully grasp how your creation works.
LLMs are tools, but also shortcuts, and humans learn by doing ¯\_(ツ)_/¯
This is pretty obvious to me after using LLMs for various tasks over the past years.
jennyholzer · 4h ago
This dynamic is frustrating on the individual level, but it is poisonous on the organizational level.
I am offended by coworkers who submit incompletely considered, visibly LLM generated code.
These coworkers are dragging my team down.
gkilmain · 4h ago
I find this acceptable if your coworkers are checked out and looking for that next big thing
warmedcookie · 4h ago
On the bright side, if you are forced to write AI code, at least reviewing PRs of AI generated slop gives your brain an exercise, albeit a frustrating one.
teekert · 4h ago
I'm sure they are, but maybe they just need some guidance. I was fortunate to learn this by myself, but when you just start out, it feels like magic. Only later do you realize you have also sacrificed something.
grim_io · 4h ago
I have never used LLM's to write essays, so I can't comment on that.
What I can comment on is how valuable and energizing it is for me to cooperatively code with LLM's using agents.
I find it sad to hear when someone finds this experience disappointing, and I wonder what could go wrong to make it so.
grugagag · 3h ago
I don’t thik someone finds this experience dissapointing but harmful for cognition, probably in the long run as the cognition ‘muscle’ athrophies in some regions as I see it. Remains to be seen how it pans out. However, how much would you be willing to pay for LLMs before you decide it’s not worth it? It is unexpensive at this stage but this won’t last.
yayitswei · 3h ago
Management roles have always involved outsourcing cognitive work to subordinates. Are we seeing a cognitive decline there too? Maybe delegation was the original misalignment problem.
CuriouslyC · 4h ago
This does not mesh with my personal experience. I find that AI reduces task noise that prevents me from getting in the flow of high level creative/strategic thinking. I can just plan algorithms/models/architectures and very quickly validate, test, iterate and always work at a high level while the AI handles syntax and arcane build processes.
Maybe it's my natural ADHD tendencies, but having that implementation/process noise removed from my workflow has been transformational. I joke about having gone super saiyan, but it's for real. In the last month, I've gotten 3 papers in pre-print ready state, I'm working on a new model architecture that I'm about to test on ARC-AGI, and I've gotten ~20 projects to initial release or very close (several of which concretely advance SOTA).
tqwhite · 3h ago
What a load of crap. I don't believe it for one second. Also, AI has only been an important influence for about twenty minutes.
Here's what I think: AI causes you to forget how to program but causes you to learn how to plan.
Also, AI enhances who you are. Dummies get dumber. Smarties get smarter.
But that's not proven. It's anecdote. And I don't believe anyone knows what is really happening and those that claim to are counterproductive.
jugg1es · 2h ago
I think you are looking at this from a too-narrow lens. What happens when people have ChatGPT built into their eyeglasses and they use it for literally everything. Ray-Ban is already selling this as a product.
jennyholzer · 4h ago
> In post-task interviews:
> 83.3% of LLM users were unable to quote even one sentence from the essay they had just written.
> In contrast, 88.9% of Search and Brain-only users could quote accurately.
> 0% of LLM users could produce a correct quote, while most Brain-only and Search users could.
Reminds me of my coworkers who have literally no idea what Chat GPT put into their PR from last week.
aurareturn · 4h ago
Maybe we should question the value of essays in the ChatGPT world?
Could a person, armed with ChatGPT, come up with a better solution in a real world problem than without ChatGPT? Maybe that's what actually matters.
Ekaros · 4h ago
Can they evaluate if the idea they came up with is better if they do not remember how it was stated? Isn't the point of writing to formulate thoughts in a communicable manner, and then possibly have them verified by others?
But how can they discuss any content if even the "writer" does not remember what they wrote.
kibwen · 4h ago
The point of writing essays is not to produce an essay, it's to demonstrate that you understand something well enough to engage with it critically, in addition to being an exercise for critical thinking itself.
abirch · 4h ago
College was transformed from an apprentice-style institution in the 1500s to the mass-produced thing of the early 2000s (where a professor can "teach" 500 students in a class).
I think we could return to the apprentice style of institution, where people try to create the best real-world solutions possible with LLMs, 3D printers, etc., and then use recorded college courses the way our grandparents used books.
Remember, they only measured that the less time you spend on a task, the less you remember it.
MarkusWandel · 2h ago
Muscles atrophy from lack of use - as an aging cyclist with increasing numbers of e-bikes all around, I think I may some day have to use one because of age, but what are all these younger people doing, cheating themselves out of exercise?
And so it is with many things. I wrote cursive right through the end of my high school years, but while I can type well on a computer, I have trouble even writing block lettering without mistakes now, and cursive is a lost cause.
Ubiquitous electronic calculators have eroded the heroic mental calculation skills of old. And now artificial "thinking machines" to do the thinking for you cause your brain to atrophy. Colour me surprised. The Whispering Earring story was mentioned here just recently but is totally topical.
I feel like this sort of thing will be referenced for comic relief in future talks about hysteria at the dawn of the AI era.
The article actually contains the sentence "The machines aren’t just taking over our work—they’re taking over our minds." which reminds me more of Reefer Madness than an honest critique of modern tech.
amelius · 4h ago
Isn't intelligence -> asking the right questions?
Rather than coming up with the right answers?
tiborsaas · 4h ago
It's both and they form a feedback loop. You come up with a problem (question) and you solve the problem which might lead to more questions. So problem solving and reflecting back on it are both building blocks of intelligence.
nzach · 3h ago
> 0% of LLM users could produce a correct quote, while most Brain-only and Search users could
I think a better interpretation would be to say that LLMs gives people the ability to "filter out" certain tasks in our brains. Maybe a good parallel would be to point out that some drivers are able to drive long distances on what is essentially an "auto-pilot". When this happens they are able to drive correctly but don't really register every single action they've taken during the process.
In this study you are asking for information that is irrelevant (to the participant). So, I think it is expected that people would filter it out if given the chance.
I think the “crushing nihilism” pro-AI argument is what makes me most depressed. We are going to have so much fun when we do not communicate with other humans because it is a task that we can easily “filter out.”
kelsey98765431 · 4h ago
Misleading title, the article explicitly says when used to cheat on essays.
arzig · 2h ago
Honestly the only use I’ve found for AI so far is for executing refactorings that are mechanical but don’t fit nicely into the rename/move or multi-cursor mode.
I’ll do it once or twice, tell the LLM to do it and reference the changes I made, and it’s usually passable. It’s not fit for anything more, imo.
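To give a flavor of the kind of refactor I mean, here's a hypothetical before/after sketch (Python, names invented purely for illustration): migrating from a positional tuple to a small dataclass is too irregular for multi-cursor, but once you've converted one call site by hand the rest is purely mechanical.

```python
# Hypothetical example of a "mechanical but not multi-cursor" refactor:
# replacing a positional tuple return with a dataclass, then updating
# call sites that each unpack or index it a little differently.
from dataclasses import dataclass

@dataclass
class ParsedLine:
    timestamp: str
    level: str
    message: str

# Before: callers did `ts, level, msg = parse_line(raw)` or indexed
# into the tuple, e.g. `parse_line(raw)[2]`.
def parse_line(raw: str) -> ParsedLine:
    ts, level, msg = raw.split(" ", 2)
    return ParsedLine(timestamp=ts, level=level, message=msg)

# After: each call site is rewritten to use named attributes, e.g.
# `parse_line(raw).message`; the edits are trivial, but every site is
# shaped slightly differently, which is what the LLM handles well.
def error_messages(lines: list[str]) -> list[str]:
    return [p.message for raw in lines if (p := parse_line(raw)).level == "ERROR"]
```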
gowld · 9m ago
This research is based on people being given 20 minutes to research and write an "essay"? (Or, in the Brain only case, write an "essay" without doing any research.)
How is that not utter garbage? You're comparing text that is barely more than a forum comment, and noticing that people who spend the short time thinking and writing are engaging in different activity from people who spend the time using research tools, and different activity again from people who spend the time asking an AI (and waiting for it) to generate content.
asimovfan · 3h ago
Writing long texts for school is stupid and it is a skill that is in practice purely developed in order to do homework. I am not surprised it immediately declines as soon as the necessity is removed.
patrickmay · 3h ago
On the contrary, writing is key to organizing and clarifying one's thoughts. It is an essential part of learning.
"Writing is nature’s way of letting you know how sloppy your thinking is."
-- Guindon
asimovfan · 3h ago
People write a lot of stuff that is not for homework. Maybe they should make a measurement of something else they write. I would even say that writing for homework is a special skill in bullshitting that does not (cannot) exist in other forms of writing.
LMKIIW · 3h ago
> ...is a skill that is in practice purely developed in order to do homework.
I would argue that it helps kids learn how to organize and formulate coherent thoughts and communicate with others. I'm sure it helps them do homework, too.
miltonlost · 3h ago
An Asimov fan saying writing long texts is stupid? I bet he would have some strong feelings against that
asimovfan · 2h ago
An Asimov novel is not homework; I explicitly referred to homework. People write a lot of stuff other than homework.
krapp · 3h ago
Well yeah, he was probably getting paid by the word :)
lowbloodsugar · 11m ago
I mean, I felt the same way about people who built things with Visual Basic instead of C or assembly, back in the day. Then there were super smart people who were doing critical things in C/C++ and using VB to make a nice UI.
AI is no different. Most will use it and not learn the fundamentals. There’s still lots of work for those people. Then some of us are doing things like looking at the state machines that rust async code generation produces or inspecting what the Java JIT is producing and still others are hacking ARM assembly. I use AI to take care of the boring bits, just like writing a nice UI in C++ was tedious back in 1990 so we used VB for that.
rogerkirkness · 3h ago
This article is written by AI. The em dashes and the 'Don't just X, but Y' construction are classic ChatGPT writing patterns in particular.
Kuinox · 3h ago
The em dashes exist in ChatGPT output because existing human text contains them, like journal articles.
agigao · 1h ago
"Skill atrophy" is the two-word phrase that will very much define the tech industry in 2025.
And it is something we need to talk about loudly, but I guess it wouldn't crank up the follower counts or valuations of AI grifters.
footy · 4h ago
there's going to be an avalanche of dementia for the generations that outsource all their thinking to LLMs
johnisgood · 4h ago
IMO that is a misuse of LLMs. You are not supposed to outsource your thinking. You need to be part of the whole process, incl. the architectural design. I am, and I got far with LLMs (Claude mostly, not much with GPT). I use GPT for personal stuff or ramblings, not for coding.
There will always be people who misuse something, but we should not hurt those who do not. Same with drugs. There are functional junkies who know when to stop, go on a tolerance break, take just enough of a dose and so forth, vs. the irresponsible ones. The situation is quite similar and I do not want AI to be "banned" (assuming it could) because of people who misuse LLMs.
People, let us have nice things.
As for the article... did they not say the same thing about search engines and Wikipedia? Do you remember how cheating actually helps us learn (by writing down the things you want to cheat with)? Problem is, people do not even bother reading the output of the LLM, and that is on them.
footy · 4h ago
sure, we may call that a misuse. But there are already people using them this way, and they're marketed this way, and I was not making a point about the correctness of using them this way---just observing that this is going to have far-reaching consequences.
johnisgood · 3h ago
I know, and it is a huge problem that people use it this way, and that it is marketed this way.
jajko · 3h ago
Misuse or not, who cares about labeling.
Internet was supposed to be this wonderful free place with all information available and unbiased, not the cesspool of scams and tracking that makes 1984 look like a fairytale for children. Atomic energy was supposed to free mankind from everlasting struggle for energy dependency, end wars and whatnot. LLMs we supposed to be X and not Y and used as Z and not BBCCD.
Given what the population loses overall, compared to what's gained (really, what? a mildly increased efficiency sometimes experienced on an individual level, sometimes made up for PR), I consider these LLMs a net loss for mankind as a whole.
The above should tell you something about human nature, and how naive some of the brightest of us are.
johnisgood · 3h ago
It works for me, so I would rather not have it taken away from me. Take it away from people who misuse it.
If it is a human nature issue (with which I agree), then we are in a deep shit and this is why we cannot have nice things.
Educate, and if that fails, then punish those who "misuse" it. I do not have a better idea. It works for me quite well for coding, and it will continue to work as long as it is not going to get nerfed.
jajko · 3h ago
Nobody is taking it away from you, but as we seem to agree that ship has sailed for some deep waters, nobody is backpedaling now.
Well, cheers to an even bigger gap between the elite who can afford good education and upbringing and the cheap, crappy rest. A number of sci-fi novels come to mind where poor, semi-mindless masses are governed by 'educated' elites. I always thought how such a society must have screwed up badly in the past to end up like that. Nope, the road to hell is indeed paved with good intentions and small little steps which seem innocent or even beneficial on their own, in their time.
johnisgood · 3h ago
It is just crazy that people still believe in the "think of the children" narratives, or "it is for your own safety". I think these seemingly good intentions (which are not actually good intentions, just seem so) are a huge problem, and lack of resistance because if you resist, their rebuttal is "you don't want our kids to be safe?!" and so forth, appealing to emotions and shame.
bgwalter · 4h ago
I tried to see what the hype is about and translated one build system to another using "AI". The result was wrong, bloated and did not work. I then used smaller steps like the prompt geniuses recommend. It was exhausting, still riddled with errors, like a poor version of copy & paste.
Most importantly, I did not remember anything (which is a good thing because half of the output is wrong). I then switched to Stackoverflow etc. instead of the "AI". Suddenly my mental maps worked again, I recalled what I read, programming was fun again, the results were correct and the process much faster.
ramesh31 · 2h ago
I think like a lot of people here, my posture towards AI usage over the last 2 years has gone from:
"Won't touch it, I'd never infect my codebase with whatever garbage that thing could output" -> ChatGPT for a small function here or there -> Cursor/Copilot style autocomplete -> Claude Code fully automating 90% of my tasks.
It felt like magic at first once reaching that last (current) point. In a lot of ways for certain things it still is. But it's becoming clearer and clearer that this will never be a silver bullet, and I'm ready to evolve further to "It's another tool in the toolbox to be applied judiciously when and where it makes sense, which it usually does not.". I've also come to greatly distrust anything an LLM says that isn't verified by a domain expert.
I've also felt a great amount of joy from my work go away over this time. Much like the artisans of old who were forced to sit back and supervise the automated machines taking over their craft, churning out crappier versions of something faster. There's more to this than just being an old fart who doesn't want to change. We all got into this field for a reason, and a huge part of that reason is that it brings us joy. Without that joy we are going to burn out quickly, and quality is going to nosedive.
hnpolicestate · 3h ago
I've stopped thinking to formulate content. I now think to prompt.
This makes complete sense though. We're simply trying to automate the human thinking process like we try to use technology to automate/handoff everything else.
Why is this surprising? "Use it or lose it" may be a cliche, but it's true; if you don't keep some faculty conditioned, it gets "rusty". That's the general principle, so it would be surprising if this were an exception.
The age of social media and constant distraction already atrophies the ability to maintain sustained focus. Who reads a book these days, never mind a thick book requiring struggle to master? That requires immersion, sustained engagement, persevering through discomfort, and denying yourself indulgence in all sorts of temptations and enticements to get a cheap fix. It requires postponed gratification, or a gratification that is more subtle and measured and piecemeal rather than some sharp spike. We become conditioned in Pavlovian fashion, more habituated to such behavior, the more we engage in such behavior.
The reliance on AI for writing is partly rooted in the failure to recognize that writing is a form of engagement with the material. Clear writing is a way of developing knowledge and understanding. It helps uncover what you understand and what you don't. If you can't explain something, you don't know it well enough to have clear ideas about it. What good does an AI do you - you as a knowing subject - if it does the "writing" for you? You, personally, don't become wiser or better. You don't become fit by watching others exercise.
This isn't to say AI has no purpose, but our attitude toward technology is often irresponsible. We think that if we have the power to do something, we are missing out by not using it. This is boneheaded. The ultimate measure is whether the technology is good for you in some particular use case. Sometimes, we make prudential allowances for practical reasons. There can be a place for AI to "write" for us, but there are plenty of cases where it is simply senseless to use. You need to be prudent, or you end up abusing the technology.
SkyBelow · 3h ago
The main issue I see is that the methodology section of the paper limited the full time to 20 minutes. Is this a study of using LLMs to write an essay for you, or of using LLMs to help you write an essay? To be fair, LLMs can't be swapped between the two modes, so the distinction is left up to the user in how they engage with them.
Thinking about it myself, and looking at the questions and time limits, I'm not sure how I would be able to navigate that distinction given only 20 minutes. The way I would use an LLM to aid me in writing an essay on the topic wouldn't fit within the time limit, so even with an LLM, I would likely stick to brain only except in a few specific cases that might occur (forgetting how to spell a word or forgetting the name for a concept).
So this study is likely applicable to similar timed instances, like letting students use LLMs on a test, but that's one I would have already seen as extremely problematic for learning to begin with (granted, still worthwhile to find evidence to back even the 'obvious' conclusions).
j45 · 4h ago
The gap i see is the definition of "AI use" is not clearly delineated between passive (usage similar to consumption) vs active.
Passive AI use, where you let something else think for you, will obviously cause cognitive decline.
Active use of AI as a thought partner, learning as you go yourself, seems to feel different.
The issue with studying 18-22 year olds is their prefrontal cortex (a center of logic, will power, focus, reasoning, discipline) is not fully developed until 26. But that probably doesn't matter if the study is trying to make a point about technology.
The art of telling fake information from real could also increase cognitive capacity.
Mistletoe · 4h ago
The future for humans worries me a lot. What evolutionary pressures will exist to keep us intelligent? We are already seeing IQ drop alarmingly across the world. Now AI comes in from the top rope with the steel chair?
Why does it matter? Some will become Eloi and some Trogs.
latexr · 3h ago
> Why does it matter?
Because the people around you affect your life. Presumably you don’t want to live in a world of stupid people who are incapable of critical thought or doing anything which are not direct instructions from a machine. Think about it every time you are frustrated by your interaction with a system you have no choice but to use, such as a bank or a government branch.
If you are referencing The Time Machine, I remember reading a neat comic book version of the book when I was a kid. Sometimes I feel we are quite close to having Eloi and Morlocks evolving already.
>the gentle, childlike Eloi and the subterranean, predatory Morlocks.
Seems like a nice metaphor for the current two political parties we are provided with.
latexr · 3h ago
> a neat comic book version
Wikipedia lists several. Do you recall which you read?
What a rather ironic headline that generalizes across all "AI use", while the story is about a study that is specifically about "essay writing tasks". But that kind of slop is just par for the course for journalists and also always has been.
But it does highlight that this mind-slop decline is not new in any way even if it may have accelerated with the decline and erosion of standards.
Think of it what you want, but if the standards that led to a state everyone really enjoys and benefits from are done away with, that enjoyable state will inevitably start crumbling all around you.
AI is not really unusual in this manner, other than maybe that it is squarely hitting a group and population like public health policy journalists and programmers that previously thought they were immune because they were engaged in writing. Yes, programmers are essentially just writers.
feverzsj · 4h ago
"@gork Is this true?"
iphone_elegance · 3h ago
well now that explains HN
ath3nd · 4h ago
That explains a lot of Hacker News lately. /s
Like everything else in our life, cognition is "use it or lose it". Outsourcing your decision making and critical thinking to a fancy autocomplete with sycophantic tendencies and incapable of reasoning sure is fun, but as the study found, it has its downsides.
kibwen · 4h ago
To be fair, a lot of commenters on HN were demonstrably suffering the effects of cognitive decline for years before LLMs.
AnimalMuppet · 3h ago
Not totally sure that's /s.
Over the last three years or so, I have seen more and more posts where the position just doesn't make sense. I mean, ten years ago, there were posts on HN that I disagreed with that I upvoted anyway, because they made me think. That has become much more rare. An increasing number of posts now are just... weird (I don't know a better word for it). Not thoughtful, not interesting (even if wrong), just weird.
I can't prove that any of them are AI-generated. But I suspect that at least some of them are.
quotemstr · 4h ago
"Studies" like this bite at the ankles of every change in information technology. Victorians thought women reading too many magazines would rot their minds.
Given that AI is literally just words on a monitor just like the rest of the internet, I have a strong prior it's not "reprogram[ming]" anyone's mind, at least not in some manner that, e.g. heavy Reddit use might.
stego-tech · 4h ago
That’s a pretty spicy take for first thing in the morning. The confidence with which you assert a repeatedly proven facile argument is…unenviable. “Fractal wrongness,” I’ve seen it called.
We have decades of research - brain scans, studies, experiments, imaging, stimuli responses, etc - proving that when a human no longer has to think about performing a skill, that skill immediately begins to atrophy and the brain adapts accordingly. It’s why line workers at McDonalds don’t actually learn how to properly cook food (it’s all been procedured-out and automated where possible to eliminate the need for critical thinking skills, thus lowering the quality of labor needed to function), and it’s why - at present - we’re effectively training a cohort of humans who lack critical thinking and reasoning skills because “that’s what the AI is for”.
This is something I’ve known about long before the current LLM craze, and it’s why I’ve always been wary of or hostile to “aggressively helpful” tools like some implementations of autocorrect, or some driving aids: I am not just trying to do a thing quickly, I am trying to do it well, and that requires repeatedly practicing a skill in order to improve.
Studies like these continue to support my anxiety that we’re dumbing down the best technical generation ever into little more than agent managers and prompt engineers who can’t solve their own problems anymore without subscribing to an AI service.
quotemstr · 3h ago
Learning and habit formation are not "reprogramming". If you define "reprogramming" as anything that updates neuron weights, the term encompasses all of life and becomes useless.
My point is that I don't see LLM's effect on the brain as being anything more than the normal experience we have of living and that the level of drama the headline suggests is unwarranted. I don't believe in infohazards.
Might they result in skill atrophy? For sure! But it's the same kind of atrophy we saw when, e.g. transitioning from paper maps to digital ones, or from memorizing phone numbers to handing out email addresses. We apply the neurons we save by no longer learning paper map navigation and such to other domains of life.
The process has been ongoing since homo erectus figured out that if you bang a rock hard enough, you get a knife. So what?
AnimalMuppet · 3h ago
So what is, the skill in question is thinking critically. Letting that atrophy is kind of a bigger deal than if our paper map reading skills atrophy.
Now, you could argue that, when we use AI, critical thinking skills are more important, because we have to check the output of a tool that is quite prone to error. But in actual use, many people won't do that. We'll be back at "Computers Do Not Lie" (look for the song on Youtube if you're not familiar with it), only with a much higher error rate.
flanked-evergl · 4h ago
VS Code Copilot has reprogrammed my mind to the point where not using it is just not worth it. It actually seldom helps me do difficult things; it often helps me do incredibly mundane things, and if I have to go back to doing those incredibly mundane things by hand I would rather become a gardener.
AnimalMuppet · 3h ago
> "Studies" like this bite at the ankles of every change in information technology. Victorians thought women reading too many magazines would rot their minds.
If the Victorians had scientific studies showing that, you might have a point. Instead, you just have a flawed analogy.
And, why the scare quotes? If you can point to some actual flaws in the study, do so. If not, you're just dismissing a study that you don't agree with, but you have no actual basis for doing so. Whereas the study does give us a basis for accepting its conclusions.
quotemstr · 3h ago
> And, why the scare quotes?
N=54, students and academics only (mostly undergrad), impossible to blind, and, worst of all, the conclusion of the study supports a certain kind of anti-technology moralizing people want to do anyway. I'd be shocked if it replicated, and even if it did, it wouldn't mean much concretely.
You could run the same experiment comparing paper maps versus Google Maps in a simulated navigation scenario. I'd bet the paper map group would score higher on various comprehension metrics. So what? Does that make digital maps bad for us? That's the implication of the article, and I don't think the inference is warranted.
ath3nd · 4h ago
If it wasn't for studies like this, you'd still think arsenic is a great way to produce a vibrant green color to paint your house in the color of nature!
Because of studies like this we know the burning of fossil fuels is a dead-end for us and our climate, and due to that have developed alternative methods of generating energy.
And the study actually proved that LLM usage reprograms your brain and makes you a dumbass. Social media usage does as well; those two things are not exclusive, and if anything, their effects compound on an already pretty dumb and gullible population. So if your argument is 'but what about reddit', that's a non-argument called 'whataboutism'. Look it up and hopefully it might give you a hint as to why you are getting downvoted.
There have been three recent studies showing that:
We've reached a stage where people on the internet take their opinion on a subject to be as relevant as a study on the subject.
If you don't have another study or haven't done the science to disprove this study, how come you dismiss so easily a study that actually took time, data and the scientific method to reach a conclusion? I feel we gotta actively and firmly call out that kind of behavior and ridicule it.
planetmcd · 59m ago
1. Wait, in a category where the general failure rate is traditionally 75%, using a bleeding edge technology adds 20% more risk, what a shock.
2. This is an interesting study, but perhaps limited. Draws conclusions from a set of 16 developers on very large projects, many of whom did not have previous experience with the editor used in the study or LLMs in general. The study did conclude it added time in these cases. There is a reason for the large sense of value; that would be the thing of note to uncover based on these results. Study notes 79% continued to use the AI tools. Speed is not the only value to be gained, but it was the only value measured. (Study notes this.)
3. Author didn't read or used AI to poorly summarize the poorly thought out study it is based on. Also, it seems you didn't read the study.
We can use different words if you like (and I'm not convinced that delegation isn't colloquially a form of abstraction) but you can't control the world by controlling the categories.
Do you therefore argue programming languages aren't abstractions?
The problem with this analogy is obvious when you imagine an assembler generating machine code that doesn't work half of the time and a human trying to correct that.
To address your specific point in the same way: When we're talking about programmers using abstractions, we're usually not talking about the programming language they're using, we're talking about the UI framework, networking libraries, etc. they're using. Those are the APIs they're calling with their code, and those are all abstractions that are all implemented at (roughly) the same level of abstraction as the programmer's day-to-day work. I'd expect a programmer to be able to re-implement those if necessary.
Managers tend to hire sub managers to manage their people. You can see this with LLM as well, people see "Oh this prompting is a lot of work, lets make the LLM prompt the LLM".
I guess I'm not 100% sure I agree with my original point though: should a programmer working on JavaScript for a website's frontend be able to implement a browser engine? Probably not, but the original point I was trying to make is I would expect a programmer working on a browser engine to be able to re-implement any abstractions that they're using in their day-to-day work if necessary.
Partially because if all else fails, you'll need to step in and do the thing. Partially because if you can't do it, you can't evaluate whether it's being done properly.
That's not to say you need to be _as good_ at the task as the delegee, but you need to be competent.
For example, this HBR article [1]. Pervasive in all advice about delegation is the assumption that you can do the task being delegated, but that you shouldn't.
> Just that it's not an expectation, e.g., you don't expect a CEO to be able to do the CTO's job.
I think the CEO role is actually the outlier here.
I can only speak to engineering, but my understanding has always been that VPs need to be able to manage individual teams, and engineering managers need to be somewhat competent if there's some dev work that needs to be done.
This only happens as necessary, and it obviously should be rare. But you get in trouble real quickly if you try to delegate things you cannot accomplish yourself.
1. https://hbr.org/2025/09/why-arent-i-better-at-delegating
There is another form of delegation where the work needed to be done is imposed onto another, in order to exploit and extract value. We are trying to do this with LLMs now, but we also did this during the Industrial Revolution, and before that, humanity enslaved each other to get the labor to extract value out of the land. This value extraction leads to degeneration, something that happens when living systems die.
While the Industrial Revolution afforded humanity a middle-class, and appeared to distribute the wealth that came about — resulting in better standards of living — it came along with numerous ills that as a society, we still have not really figured out.
I think that, collectively, we figure that the LLMs can do the things no one wants to do, and so _everyone_ can enjoy a better standard of living. I think doing it this way, though, leads to a life without purpose or meaning. I am not at all convinced that LLMs are going to give us back that time … not unless we figure out how to develop AIs that help grow humans instead of replacing them.
The following article is an example of what I mean by designing an AI that helps develop people instead of replacing them: https://hazelweakly.me/blog/stop-building-ai-tools-backwards...
Colleagues are the same thing. You may abstract business domains and say that something is the job of your colleague, but sometimes that abstraction breaks.
Still good enough to draw boxes and arrows around.
So are humans and yet people pay other people to write code for them.
Although I'm on the side of getting my hands dirty, I'm not sure the difference is that great. A modern compiler embeds a considerable degree of probabilistic behaviour.
Can you give some examples?
That must be why we talk about leaky abstractions so much.
They're neither pure functions, nor are they always deterministic. We as a profession have been spoilt by mostly deterministic code (and even then, we had a chunk of probabilistic algorithms, depending on where you worked).
Heck, I've worked with compilers that used simulated annealing for optimization, 2 decades ago.
Yes, it's a sea change for CRUD/SaaS land. But there are plenty of folks outside of that who actually took the "engineering" part of software engineering seriously, and understand just fine how to deal with probabilistic processes and risk management.
The LLM expands the text of your design into a full application.
The commenter you’re responding to is clear that they are checking the outputs.
So are compilers, but people still successfully use them. Compilers and LLMs can both be made deterministic but for performance reasons it's convenient to give up that guarantee.
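The knobs for pinning this down do exist, at least nominally. A minimal sketch with the OpenAI Python SDK (the model name is only an example, and even with temperature 0 and a fixed seed the API only promises best-effort reproducibility, so treat this as narrowing the randomness rather than eliminating it):
```
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name, not a recommendation
    messages=[{"role": "user", "content": "Explain abstraction in one sentence."}],
    temperature=0,  # greedy-ish decoding, removes most sampling variance
    seed=42,        # best-effort reproducibility across identical requests
)
print(resp.choices[0].message.content)
```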
That is just not correct. There is no rule that says an abstraction is strictly functional or deterministic.
In fact, the original abstraction was likely language, which is clearly neither.
The cleanest and easiest abstractions to deal with have those properties, but they are not required.
1. Language is an abstraction and it's not deterministic (it's really lossy)
2. LLMs behave differently than the abstractions involved in building software, where normally if you gave the same input, you'd expect the same output.
and I've never looked at the machine code produced by an assembler (other than when I wrote my own as a toy project)
is the same true of LLM usage? absolutely not
and it never will be, because it's not an abstraction
Just because you end up looking at what the prompt produced “under the hood”, in whichever language the output was generated, doesn’t mean every user does.
Just as with assembly: you might never have taken a look at it, but there are people who do and could argue the same thing as you.
The lines will be very blurry in the near future.
Personally, I think if your farts are an abstraction whose mapping you can derive useful meaning from, who are we to tell you no?
(Also: bizarre examples = informative edge cases. Sometimes.)
It is not yet good enough or there is not yet sufficient trust. Also there are still resources allocated to checking the code.
I saw a post yesterday showing Brave browser's new tab using 70 MB of RAM in the background. I'm very sure there's code there that can be optimized, but who gives a shit. It's splitting hairs and our computers are powerful enough now that it doesn't matter.
Immateriality has abstracted those particular few lines of code away.
I do. This sort of attitude is how we have machines more powerful than ever yet everything still seems to run like shit.
I understand the world is about compromises, but essentially all the gains of every computer program ever could be summed up as the accumulation of small optimizations. Likewise, the accumulation of small wastes kills legacy projects more than anything else.
Flagging something as potentially problematic is useful, but without additional information about the tradeoffs being made, this may already be an optimized way to do whatever Brave is doing that requires the 70 MB of RAM. Perhaps the non-optimal way it was previously doing it required 250 MB of RAM and this is a significant improvement.
Supply and demand will decide what compromise is acceptable and what that compromise looks like.
I have been hearing (reading?) this for a solid two years now, and LLMs were not invented two years ago: they are ostensibly the same tech as they were back in 2017, with larger training pools and some optimizations along the way. How many more hundreds of billions of dollars is reasonable to throw at a technology that has never once exceeded the lofty heights of "fine"?
At this point this genuinely feels like Silicon Valley's fever dream. Just lighting dumptrucks full of money on fire in the hope that it does something better than it did the previous like 7 or 8 times you did it.
And normally I wouldn't give a shit, money is made up and even then it ain't MY money, burn it on whatever you want. But we're also offsetting any gains towards green energy standing up these stupid datacenters everywhere to power this shit, not to mention the water requirements.
It was basically a novelty before. "Wow, AI can sort of write code!"
Now I find it very capable.
I suspect there's a lot more use out there generating money than you realize; there's no moat in using it, so I'm pretty sure it's kept on the down-low for fear of competitors catching up (which is quick and cheap to do).
How far can one extrapolate? I defer to the experts actually making these things and to those putting money on the line.
The only thing I’m certain of is that you’re highly overconfident.
I’m sure plenty of assembly gurus said the same of the first compilers.
> because it's not an abstraction
This just seems like a category error. A human is not an abstraction, yet they write code and produce value.
An IDE is a tool not an abstraction, yet they make humans more productive.
When I talk about moving up the levels of abstraction I mean: taking on more abstract/less-concrete tasks.
Instead of “please wire up login for our new prototype” it might be “please make the prototype fully production-ready, figure out what is needed” or even “please ship a new product to meet customer X’s need”.
The customer would just ask the AI directly to meet their needs. They wouldn’t purchase the product from you.
If you do make your specs precise enough, such that 2 different dev shops will produce functionally equivalent software, your specs are equivalent to code.
The value of this is that FOR FREE you can get comprehensive test definitions (unit+e2e), kube/terraform infra setup, documentation stubs, OpenAPI specs, etc. It's seriously magical.
```
Circle()
    .fill(Color.red)
    .overlay(
        Circle().stroke(Color.white, lineWidth: 4)
    )
    .frame(width: 100, height: 100)
```
Is the mapping 1:1 and completely lossless? Of course not, but I'd say the former is most definitely a sort of abstraction of the latter, and it would be disingenuous to pretend it's not.
and to be able to do this efficiently or even "correctly", you'd need to have had mountains of experience evaluating an implementation, and be able to imagine the consequences of that implementation against the desired outcome.
Doing this requires experience that would get eroded by the use of an LLM. It's very similar to higher level maths (stuff like calculus) being much more difficult if you had poor arithmetic/algebra skills.
You could also tweak it by going "Lead me to the US" -> "Lead me to the state of New York" -> "Lead me to New York City" -> "Lead me to Manhattan" -> "Lead me to the museum of new arts" and it would give you 86% accurate directions. Would you still need to be able to navigate?
How about when, on roads that are very frequently used, you push to 92% accuracy? Would you still need to be able to navigate?
Yes of course because in 1/10 trips you'd get fucking lost.
My point is: unless you get to that 99% mark, you still need the underlying skill and the abstraction is only a helper and always has to be checked by someone who has that underlying skill.
I don't see LLMs as that 99% solution in the next years to come.
We're not, because you still have to check all the code it outputs. You didn't have to check every compilation step of a compiler. It was testable actual code, not non-deterministic output from English-language input.
The number of users actually checking the output of a compiler is nonexistent. You just trust it.
LLMs are moving that direction, whether we like it or not
Quite a few who work on low-level systems do this. I have done it a few times to debug build issues: one time a single file suddenly made compile times go up by orders of magnitude. The compiler had inlined a big sort procedure in an unrolled loop, so it added the sorting code hundreds of times over in a single function and created a gigantic binary that took ages to compile, since it tried to optimize that giant function.
That is slow both at runtime and at compile time, so I added a tag to not inline the sort there, and all the issues disappeared. The sort didn't have a tag to inline it, so the compiler just made an error here; it shouldn't have inlined such a large function in an unrolled loop.
The Chinese models are getting hyper efficient and really good at agentic tasks. They're going to overtake Claude as the agentic workhorses soon for sure, Anthropic is slow rolling their research and the Chinese labs are smoking. Speed/agentic ability don't show big headlines, but they really matter.
GPT5 might not impress you with its responses to pedestrian prompts, but it is a science/algorithm beast. I understand what Sam Altman was saying about how unnerving its responses can be, it can synthesize advanced experiments and pull in research from diverse areas to improve algorithms/optimize in a way that's far beyond the other LLMs. It's like having a myopic autistic savant postdoc to help me design experiments, I have to keep it on target/focused but the depth of its suggestions are pretty jaw dropping.
To me, that's what makes it an abstraction layer, rather than just a servant or an employee. You have to break your entire architecture into units small enough that you know you can coax the machine to output good code for. The AI can't be trusted as far as you can throw it, but the distance from you to how far you can throw is the abstraction layer.
An employee you can just tell to make it work, they'll kill themselves trying to do it, or be replaced if they don't; eventually something will work, and you'll take all the credit for it. AI is not experimenting, learning and growing, it stays stupid. The longer it thinks, the wronger it thinks. You deserve the credit (and the ridicule) for everything it does that you put your name on.
-----
edit: and this thread seems to think that you don't have to check what your high level abstraction is doing. That's probably why most programs run like crap. You can't expect something you do in e.g. python to do the most algorithmically sensible thing, even if you wrote the algorithm just like the textbook said. It may make weird choices (maybe optimal for the general case, but horrifically bad for yours) that mean that it's not really running your cute algorithm at all, or maybe your cute algorithm is being starved by another thread that you have no idea why it would be dependent on. It may have made correct choices when you started writing, then decided to make wrong choices after a minor patch version change.
To pretend perfection is a necessary condition for abstraction is not something anybody would even say directly. Never. All we talk about is leaky abstractions.
Remember when GTA loading times, which (a counterfactual because we'll never know) probably decimated sales, playtime, and at least the marketing of the game, turned out to be because they were scanning some large, unnecessary json array (iirc) hundreds of times a second? That's probably a billion dollar mistake. Just because some function that was being blindly called was not ever reexamined, and because nobody profiled properly (i.e. checked the output.)
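The failure mode is easy to reproduce in miniature. This toy sketch is not GTA's actual code or its actual root cause, just the same general shape: re-scanning a large JSON blob inside a loop instead of parsing it once:
```
import json
import time

items = [{"id": i, "price": i * 0.01} for i in range(20000)]
blob = json.dumps(items)

# Pathological: re-parse the whole blob for every lookup
start = time.perf_counter()
for i in range(0, 20000, 1000):
    parsed = json.loads(blob)   # full re-parse, every single time
    _ = parsed[i]["price"]
slow = time.perf_counter() - start

# Sane: parse once, then index into the result
start = time.perf_counter()
parsed = json.loads(blob)
for i in range(0, 20000, 1000):
    _ = parsed[i]["price"]
fast = time.perf_counter() - start

print(f"re-parse each time: {slow:.3f}s, parse once: {fast:.3f}s")
```
A profiler makes this kind of thing jump out immediately, which is the point: somebody has to actually look.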
LLMs make up whatever they feel like and are pretty bad at architecture as well.
http://employees.oneonta.edu/blechmjb/JBpages/m360/Professio...
"Somewhere there must be men and women with capacity for original thought."
He wrote that in 1957. 1957!
However, since I brought up calculators, I'd like to pre-emphasize something: They aren't analogous to today's LLMs. Most people don't offload their "what and why" executive decision-making to a calculator, calculators are orders of magnitude more trustworthy, and they don't emit plausible lies to cover their errors... Though that last does sound like another short-story premise.
I’ve read plenty of books (thanks, Dickens) where I looked at every word on every page but can recall very little of what they meant. You can look at the results from an LLM and say “huh, cool, I know that now” and do nothing to assimilate that knowledge, or you can think deeply about it and try to fit it in with everything else you know about the subject. The advantage here is that you can ask follow-up questions if something doesn’t click.
We have the idea of 'tutorial hell' for programming (particularly gamedev), where people go through the motions of learning without actually progressing.
Until you go apply the skills and check, it's hard to evaluate the effectiveness of a learning method.
Same way a phone in your pocket gives you the world's compiled information available in a moment. But that's generally led to loneliness, isolation, social upheaval, polarization, and huge spread of wrong information.
Whether you can handle the negatives is a big if. Even the smartest of our professional class are addicted to doomscrolling these days. You think they will get only the positives of AI use and avoid the negatives?
Remember we aren’t all above average. You shouldn’t worry. Now that we have widespread literacy, nobody needs to, and few even could, recite Norse sagas or the Iliad from memory. Basically nobody has useful skills for nomadic survival.
We’re about to move on to more interesting problems, and our collective abilities and motivation will still be stratified as it always has been and must be.
Well, not so slowly it seems.
> For now the difference between these two populations is not that pronounced yet but give it a couple of years.
There are lots and lots of programmers and other IT people who make a living that I wouldn't say fall into your first bucket.
What I'm seeing is most of this group never really had the capability in the first place. These are the formerly unproductive slackers who now churn out GenAI slop with their name on it at an alarming rate.
If you stop thinking, then of course you will learn less.
If instead you think about the next level of abstraction up, then perhaps the details don’t always matter.
The whole problem with college is that there is no “next level up”, it’s a hand-curated sequence of ideas that have been demonstrated to induce some knowledge transfer. It’s not the same as starting a company and trying to build something, where freeing up your time will let you tackle bigger problems.
And of course this might not work for all PhDs; maybe learning the details is what matters in some fields - though with how specialized we’ve become, I could easily see this being a net win.
All previous programming abstractions kept correctness: a Python program produces no less reliable results than a C program running the same algorithm, it just takes more time.
LLMs don't keep correctness: I can write a correct prompt and get incorrect results. Then you are no longer programming; you are a manager over a senior programmer suffering from extreme dementia, so they forget what they were doing a few minutes ago, and you try to convince them to write what you want before they forget that as well and restart the argument.
That's not strictly speaking true, since most (all?) high level languages have undefined behaviors, and their behavior varies between compilers/architectures in unexpected ways. We did lose a level of fidelity. It's still smaller than the loss of fidelity from LLMs but it is there.
Also, it seems like there's little chance for knowledge transfer. If I work with dictionaries in Python all the time, eventually I'm better prepared to go under the hood and understand their implementation. If I'm prompting an LLM, what's the bridge from prompt engineering to software engineering? Not such a direct connection, surely!
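To make "under the hood" concrete, this is roughly the mental model you end up able to sketch after years of using the real thing: a deliberately tiny open-addressing table with linear probing, nothing like CPython's actual, heavily optimized dict:
```
class TinyDict:
    """Toy hash table with open addressing and linear probing.
    Illustrative only: no resizing, no deletion, so don't fill it up."""

    def __init__(self, capacity=8):
        self._slots = [None] * capacity   # each slot is None or a (key, value) pair

    def _probe(self, key):
        # Start at the key's hash bucket and walk forward until we find
        # the key itself or an empty slot.
        n = len(self._slots)
        i = hash(key) % n
        while True:
            slot = self._slots[i]
            if slot is None or slot[0] == key:
                return i
            i = (i + 1) % n

    def __setitem__(self, key, value):
        self._slots[self._probe(key)] = (key, value)

    def __getitem__(self, key):
        slot = self._slots[self._probe(key)]
        if slot is None:
            raise KeyError(key)
        return slot[1]


d = TinyDict()
d["speed"] = "fast"
d["speed"] = "faster"   # overwrites in place, same slot
print(d["speed"])       # faster
```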
"Correctness" must always be considered with respect to something else. If we take e.g. the C specification, then yes, there are plenty of compilers that are in almost all ways people will encounter correct according to that spec, UB and all. Yes, there are bugs but they are bugs and they can be fixed. The LLVM project has a very neat tool called Alive2 [1] that can verify optimization passes for correctness.
I think there's a very big gap between the kind of reliability we can expect from a deterministic, verified compiler and the approximating behavior of a probabilistic LLM.
[1]: https://github.com/AliveToolkit/alive2
One of the other replies alludes to it, but I want to say it explicitly:
The key difference is that you can generally drill down to assembly, there is infinitely precise control to be had.
It'd be a giant pain in the ass, and not particularly fast, but if you want to invoke some assembly code in your Java, you can just do that. You want to see the JIT compiler's assembly? You can just do that. JIT Compiler acting up? Disable it entirely if you wish for more predictable & understandable execution of the code.
And while people used to higher level languages don't know the finer details of assembly or even C's memory management, they can incrementally learn. Assembly programming is hard, but it is still programming and the foundations you learn from other programming do help you there.
Yet AI is corrosive to those foundations.
It's way easier to drill down in this way than the bytecode/assembly vs. high-level language divide.
Having curiosity to examine the platform that your software is running on and taking a look into what the compilers generate is a skill worth having. Even if you never write raw assembly yourself, being able to see what the compiler generated and how data is laid out does matter. This then helps you make better decisions about what patterns of code to use in your higher level language.
What I do personally is, for every subject that matters to me, take the time to first think about it. To explore ideas, concepts, etc… and answer the questions that I would otherwise ask ChatGPT. Only once I have a good idea do I start to ask ChatGPT about it.
Before the advent of smartphones people needed to remember phone numbers of their loved ones and maybe do some small calculations on the fly. Now people sometimes don't even remember their own numbers and have it saved on their phones.
Now some might want to debate how smartphones are different from LLMs and it is not the same. But we have to remember for better or worse LLM adoption has been fast and it has become consumer technology. That is the area being discussed in the article. People using it to write essays. And those who might be using the label of "prompt bros" might be missing the full picture. There are people, however small, being helped by LLMs as there were people helped by smartphones.
This is by no means a defense for using LLMs for learning tasks. If you write code by yourself, you learn coding. If you write your essays yourself, you learn how to make solid points.
Similar thing in the historian's profession (which I also don't do for my job but have some knowledge of). Historians who spend all day immersed in physical archives tend, over time, to be great at synthesizing ideas and building up an intuition about their subject. But those who just Google for quotes and documents on whatever they want to write about tend to have a more static and crude view of their topic; they are less likely to consider things from different angles, or see how one thing affects another, or see the same phenomenon arising in different ways; they are more likely to become monomaniacal (exaggerated word but it gets the point across) about their own thesis.
For my last two projects, I didn’t write a single line of code by hand. But I refuse to use agents and I build up an implementation piece by piece via prompting to make sure I have the abstractions I want and reusable libraries.
I take no joy in coding anymore and I’ve been doing it for forty years. I like building systems and solving business problems.
I’m not however disagreeing with you that LLMs will make your development skill atrophy, I’m seeing it in real time at 51. But between my customer facing work and supporting sales and cat herding, I don’t have time to sit around and write for loops and I’m damn sure not going to do side projects outside of work. Besides, companies aren’t willing to pay my company’s bill rates for me as a staff consultant to spend a lot of time coding.
I hopefully can take solace in the fact that studies also show that learning a second language strengthens the brain and I’m learning Spanish and my wife and I plan to spend a couple of months in the winter every year in a Central American Spanish speaking country.
We have already done the digital nomad thing across the US for a year until late 2023 so we are experienced with it and spent a month in Mexico.
If you just use prompts and don't actually read the output, and figure out why it worked, and why it works, you will never get better. But if you take the time to understand why it works, you will be better for it, and might not even bother asking next time.
I've said it before, but when I first started using Firefox w/ autocorrect in like 2005, I made it a point to learn to spell from it, so that over time I would make fewer typos. English is my second language, so it's always been an uphill battle for me despite having a native American English accent. Autocorrect on Firefox helped me tremendously.
I can use LLMs to plunge into things I'm afraid of trying out due to impostor syndrome and get more done sooner and learn on the way there. I think the key thing is to use tools correctly.
AI is like the limitless drug to a degree, you have an insane fountain of knowledge at your fingertips, you just need to use it wisely and learn from it.
Keep up the good work is all I can say!
Alternatively they're just learning/building intuition for something else. The level of abstraction is moving upwards. I don't know why people don't seem to grok that the level of the current models is the floor, not the ceiling. Despite the naysayers like Gary Marcus, there is in fact no sign of scaling or progress slowing down at all on AI capabilities. So it might be that if there is any value in human labor left in the future it will be in being able to get AI models to do what you want correctly.
I think the same effect has been around forever, in the form of every boss/manager/CEO/rando-divorcee-or-child-with-money using employees to do their thinking, just as a current information-handling worker or student uses an AI to do their thinking.
"Alternatively they're just learning/building intuition for something else."
Reading comprehension is hard.
Oh come on. He is by far the most well known AI poo-poo'er and it's not even close. He built his entire brand on it once he realized his own research was totally irrelevant.
They were still useful, and did solve a significant portion of user problems.
They also created even more problems, and no one really went out of work long term because of them.
A year or two ago when LLMs popped on the scene my coworkers would say "Look at how great this is, I can generate test cases".
Now my coworkers are saying "I can still generate test cases! And if I'm _really specificccc_, I can get it to generate small functions too!".
It seems to have slowed down considerably, but maybe that's just me.
Eventually, it stops being magic and the thinking changes - and we start to see the pros and cons, and see the gaps.
A lot of people are still in the ‘magic’ phase.
That is a very natural and efficient way to do it, and also more reliable than using your own experience since you are just a single data point with feelings.
You don't have to drive a car to see where cars were 20 years ago, see where cars are today, and say: "it doesn't look like cars will start flying anytime soon".
It's not reasonable to treat only opinions that you agree with as valid.
Some people don't use LLMs because they are familiar with them.
lol
None of us can reliably count the e’s as someone talks to us, either.
a) "know" that they're not able to do it for the reason you've outlined (as in, you can ask about the limitations of LLMs for counting letters in words)
b) still blindly engage with the query and get the wrong answer, with no disclaimer or commentary.
If you asked me how many atoms there are in a chair, I wouldn't just give you a large natural number with no commentary.
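For contrast, exact letter counting is trivial for ordinary code, which is part of why the confident wrong answers feel so jarring. A toy sketch:
```
def count_letter(text, letter):
    # Deterministic, exact counting: no tokenization, no guessing
    return text.lower().count(letter.lower())

print(count_letter("perseverance", "e"))  # 4
```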
I mean, the guy assembling a thingymajig in the factory, after a few years, can put it together with his hands 10x faster than the actual thingymajig designer. He'll tell you to apply some more glue here and less glue there (it's probably slightly better, but immaterial really). However, he probably couldn't tell you what the fault tolerance of the item is; the designer can do that. We still outsource manufacturing to the guy in the factory regardless.
We just have to get better at identifying the risks of having LLMs do the grunt work, and get better at mitigating them. As you say, abstracted.
1. This is arXiv - before publication or peer review. Grain of salt.[0]
2. 18 participants per cohort
3. 54 participants total
Given the low N and the likelihood that this is drawn from 18-22 year olds attending MIT, one should expect an uphill battle for replication and for generalizability.
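As a back-of-the-envelope check on the sample-size complaint, here's a standard power calculation. It's a sketch assuming a simple two-group comparison and a medium effect size; the paper's EEG analyses are of course more involved, but it gives a feel for the scale:
```
from statsmodels.stats.power import TTestIndPower

# Participants needed per group to detect a "medium" effect (Cohen's d = 0.5)
# with 80% power at alpha = 0.05 in a two-sample t-test.
n_required = TTestIndPower().solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(round(n_required))  # ~64 per group, versus 18 in the study
```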
Further, they are brain scanning during the experiment, which is an uncomfortable/out-of-the-norm experience, and the object of their study is easy to infer if not directly known by the population (the person being studied using LLM, search tools, or no tools).
> We thus present a study which explores the cognitive cost of using an LLM while performing the task of writing an essay. We chose essay writing as it is a cognitively complex task that engages multiple mental processes while being used as a common tool in schools and in standardized tests of a student's skills. Essay writing places significant demands on working memory, requiring simultaneous management of multiple cognitive processes. A person writing an essay must juggle both macro-level tasks (organizing ideas, structuring arguments), and micro-level tasks (word choice, grammar, syntax). In order to evaluate cognitive engagement and cognitive load as well as to better understand the brain activations when performing a task of essay writing, we used Electroencephalography (EEG) to measure brain signals of the participants. In addition to using an LLM, we also want to understand and compare the brain activations when performing the same task using classic Internet search and when no tools (neither LLM nor search) are available to the user.
[0] https://arxiv.org/pdf/2506.08872
I would describe the study size and composition as a limitation, and a reason to pursue a larger and more diverse study for confirmation (or lack thereof), rather than a reason to expect an "uphill battle" for replication and so forth.
If the computer writes the essay, then the human that’s responsible for producing good essays is going to pick up new (probably broader) skills really fast.
Maybe. I believe we both agree it is a critical gap in the research as-is, but whether it is a neutral item or an albatross is an open question. Much of psychology and neuroscience research doesn't replicate, often because of the limited sample size / composition as well as unrealistic experimental design. Your approach of deepening and broadening the demographics would attack generalizability, but not necessarily replication.
My prior puts this on an uphill battle.
Science should become a marketplace of ideas. Your other criticisms are completely valid. Those should be what’s front and center. And I agree with you. The conclusions of the paper are premature and designed to grab headlines and get citations. Might as well be posting “first post” on slashdot. IMO we should not see the current standard of peer review as anything other than anachronistic.
The only advantage to closed peer review is it saves slight scientific embarrassment. However, this is a natural part of taking risks ofc and risky science is great.
P.s. in this case I really don't like the paper or methods. However, open peer review is good for science.
Actually, from my recollection, it was debunked pretty quickly by people who read the paper, because the paper was hot garbage. I saw someone point out that its graph of resistivity showed higher resistance than copper wire. It was no better than any of the other claimed room-temperature superconductor papers that came out that year; it merely managed to catch virality on social media and therefore drove people to attempt to reproduce it.
Ironically, I am waiting for AI to start automating the process of teasing apart obvious pencil whipping, back scratching, buddy-bro behavior. Some believe it's in the 1% range of falsified papers and pencil-whipped reviews. I expect it to be significantly higher, based on reading NIH papers for a long time in the attempt to actually learn things. I've reported the obvious shenanigans and sometimes papers are taken down, but there are so many bad incentives in this process I predict it will only get worse.
This also ignores the fact that you can find a paper to support nearly everything if one is willing to link people "correlative" studies.
Absolutely not. I am an advocate for peer review, warts and all, and find that it has significant value. From a personal perspective, peer review has improved or shot down 100% of the papers that I have worked on -- which to me indicates its value to ensure good ideas with merit make it through. Papers I've reviewed are similarly improved -- no one knows everything and its helpful to have others with knowledge add their voice, even when the reviewers also add cranky items.[0] I would grant that it isn't a perfect process (some reviewers, editors are bad, some steal ideas) -- but that is why the marketplace of ideas exists across journals.
> Science should become a marketplace of ideas.
This already happens. The scholarly sphere is the savanna when it comes to resources -- it looks verdant and green but it is highly resource constrained. A shitty idea will get ripped apart unless it comes from an elephant -- and even then it can be torn to shreds.
That it happens behind paywalls is a huge problem, and the incentive structures need to be changed for that. But unless we want blatant charlatanism running rampant, you want quality checks.
[0] https://x.com/JustinWolfers/status/591280547898462209?lang=e... if a car were a manuscript
So it's possible both to be skeptical of how well these results generalize (and call for further research) and also to heed the warning: AI usage does appear to change something fundamental about our cognitive processes, enough to give any reasonable person pause.
Additionally, the original paper uses the term “cognitive debt”, not cognitive decline, which may have important ramifications for interpretation and conclusions.
I wouldn’t be surprised to see similar results in other similar types of studies, but it does feel a bit premature to broadly conclude that all LLM/AI use is harmful to your brain. In a less alarmist take: this could also be read to show that AI use effectively simplifies the essay writing process by reducing cognitive load, therefore making essays easier and more accessible to a broader audience but that would require a different study to see how well the participants scored on their work.
In much the same way chess engines make competitive chess accessible to a broader audience. :)
Perhaps the issue of cognitive decline comes from sitting there vegetating rather than applying themselves during all that additional spare time.
Although my experience has been perhaps different using LLM's, my mind still tires at work. I'm still having to think on the bigger questions, it's just less time spent on the grunt work.
I don’t know the percentage of people who are still critically thinking while using AI tools, but I can first hand see many students just copy pasting content to their school work.
Our bodies naturally adjust to what we do. Do things and your body reinforces that, enabling you to do even more advanced versions of those things. Don't do things and your skill or muscle in such tends to atrophy over time. Asking LLMs to (as in this case) write an essay is always going to be orders of magnitude easier than actually writing an essay. And so it seems fairly self-evident that using LLMs to write essays would gradually degrade your own ability to do so.
I mean it's possible that this, for some reason, might not be true, but that would be quite surprising.
What is reported as cognitive decline in the paper might very well be cognitive decline. It could also be alternative routing focused on higher abstractions, which we interpret as cognitive decline because the effect is new.
I share your concern, for the record, that people become too attached to LLMs for generation of creative work. However, I will say it can absolutely be used to unblock and push more through. The quality versus quantity balance definitely needs consideration (which I think they are actually capturing vs. cognitive decline) -- the real question to me is whether an individual's production possibility frontier is increased (which means more value per person -- a win!), partially negative in impact (use with caution), or decreased overall (a major loss). Cognitive decline points to the latter.
4. This is clickbait research, so it's automatically less likely to be true.
5. They are touting obvious things as if they are surprising, like the fact that you're less likely to remember an essay that you got something else to write, or that the ChatGPT essays were verbose and superficial.
The problem is that a headline that people want to believe is a very powerful force that can override replication and sample size and methodology problems. AI rots your brain follows behind social media rots your brain, which came after video games rot your brain, which preceded TV rots your brain. I’m sure TV wasn’t even the first. There’s a long tradition of publicly worrying about machines making us stupider.
Your comment reminded me of this (possibly spurious) quote:
>> An Assyrian clay tablet dating to around 2800 B.C. bears the inscription: “Our Earth is degenerate in these later days; there are signs that the world is speedily coming to an end; bribery and corruption are common; children no longer obey their parents; every man wants to write a book and the end of the world is evidently approaching.”[0]
Same as it ever was. [1]
[0] https://quoteinvestigator.com/2012/10/22/world-end/
[1] https://www.youtube.com/watch?v=5IsSpAOD6K8
People have also been complaining about politicians for hundreds of years, and the ruling class for millennia, as well. and the first written math mistake was about beer feedstock, so maybe it's all correlated.
Which I believe still does have a large grain of truth.
These things can make us simultaneously dumber and smarter, depending on usage.
Writing leads to the rapid decline in memory function. Brains are lazy.
Ever travel to a new place and the brain pipes up with: ‘this place is just like ___’? That’s the brain’s laziness showing itself. The brain says: ‘okay, I solved that, go back to rest.’ The observation is never true; never accurate.
Pattern recognition saves us time and enables us to survive situations that aren’t readily survivable. Pattern recognition also leads to shortcuts that do humanity a disservice.
Socrates recognized these traits in our brains and attempted to warn humanity of the damage these shortcuts do to our reasoning and comprehension skills. In Socrates’ day it was not unheard of for a person to memorize their entire family tree, or memorize an entire treatise and quote from it.
Humanity has -overwhelmingly- lost these abilities. We rely upon our external memories. We forget names. We forget important dates. We forget times and seasons. We forget what we were just doing!!!
Socrates had the right of it. Writing makes humans stupid. Reduces our token limits. Reduces paging table sizes. Reduces overall conversation length.
We may have more learning now, but what have we given up to attain it?
The comments (some, not all) are also a great example of how cognitive bias can cause folks to accept information without doing a lot of due diligence into the actual source material.
> Is it safe to say that LLMs are, in essence, making us "dumber"?
> No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it
> Additional vocabulary to avoid using when talking about the paper
> In addition to the vocabulary from Question 1 in this FAQ - please avoid using "brain scans", "LLMs make you stop thinking", "impact negatively", "brain damage", "terrifying findings".
1. https://www.brainonllm.com/
2. https://www.brainonllm.com/faq
This study in particular has made the rounds several times as you said. The study measures impact of 18 people using ChatGPT just four times over four months. I'm sorry but there is no way that is controlling for noise.
I'm sympathetic to the idea that overusing AI causes atrophy but this is just clickbait for a topic we love to hate.
It should be ok to just say "we don't know yet, we're looking into that", but that isn't the world we live in.
This article is focused on essay writing, but I swear I've experienced cognitive decline when using AI tools a bit too much to help solve programming-related problems. When dealing with an unfamiliar programming ecosystem it feels so easy and magical to just keep copy / pasting error outputs until the problem is resolved. Previously solving the problem would've taken me longer but I would've also learned a lot more. Then again, LLMs also make it way easier to get started and feel like you're making significant progress, instead of getting stuck at the first hurdle. There's definitely a balance. It requires a lot of willpower to sit with a problem in order to try and work through it rather than praying to the LLM slot machine for an instant solution.
I've had the opposite experience, but my approach is different. I don't just copy/paste errors, accept the AI's answer when it works, and move on. I ask follow up questions to make sure I understand why the AI's answer works. For example, if it suggests running a particular command, I'll ask it to break down the command and all the flags and explain what each part is doing. Only when I'm satisfied that I can see why the suggestion solves the problem do I accept it and move on to the next thing.
The tradeoff for me ends up being that I spend less time learning individual units of knowledge than if I had to figure things out entirely myself e.g. by reading the manual (which perhaps leads to less retention), but I learn a greater quantity of things because I can more rapidly move on to the next problem that needs solving.
I've tried a similar approach and found it very prone to hallucination[0]. I tend to google things first and ask an LLM as a fallback, so maybe it's not a fair comparison, but what do I need an LLM for if a search engine can answer my question?
[0]: Just the other day I asked ChatGPT what a colon (':') after systemd's ExecStart= means. The correct answer is that it inhibits variable expansion, but it kept giving me convincing yet incorrect answers.
While not foolproof, when you combine this with some basic fact-checking (e.g. quickly skim read a command's man page to make sure the explanation for each flag sounds right, or read the relevant paragraph from the manual) plus the fact that you see in practice whether the proposed solution fixes the problem, you can reach a reasonably high level of accuracy most of the time.
Even with the risk of hallucinations it's still a great time saver because you short-circuit the process of needing to work out which command is useful and reading the whole of the man page / manual until you understand which component parts do the job you want. It's not perfect but neither is Googling - that can lead to incorrect answers too.
To give an example of my own, the other day I was building a custom Incus virtual machine image from scratch from an ISO. I wanted to be able to provision it with cloud-init (which comes configured by default in cloud-enabled stock Incus images). For some reason, even with cloud-init installed in the guest, the host's provisioning was being ignored. This is a rather obscure problem for which Googling was of little use because hardly anyone makes cloud-init enabled images from ISOs in Incus (or if they do, they don't write about it on the internet).
At this point I could have done one of two things: (a) spend hours or days learning all about how cloud-init works and how Incus interacts with it until I eventually reached the point where I understood what the problem was; or (b) ask ChatGPT. I opted for the latter and quickly figured out the solution and why it worked, thus saving myself a bunch of pointless work.
For example, in this specific case, I am enough of a domain expert to know that this information is accessible by running `man systemd.service` and looking for the description of command line syntax (findable with grep for "ExecStart=", or, as I have now seen in preparing this answer, more directly with grep for "COMMAND LINES").
I think any developer worth their salt would use LLMs to learn quicker and arrive at conclusions quicker. There are some programming problems I run into when working on a new project that I've run into before but cannot recall my last solution to, and it is frustrating; I could see how an LLM could help with such a resolution coming back quicker. Sometimes it's 'first time setup' stuff that you have not had to do for like 5 years, so you forget, and maybe you wrote it down on a wiki two jobs ago, but an LLM could help you remember.
I think we need to self-evaluate how we use LLMs so that they help us become better Software Engineers, not worse ones.
I also like preparing a draft and using an LLM for critique; it helps me figure out some blind spots or ways to articulate things better.
It’s really convenient. It also similarly rots the parts of the brain required for spatial reasoning and memory for a geographic area. It can also lead to brain rot with decision making.
Usually it’s good enough. Sometimes it leads to really ridiculous outcomes (especially if you never double check actual addresses and just put in a business name or whatever). In many edge cases depending on the use case, it leads to being stuck, because the maps data is wrong, or doesn’t have updated locations, or can’t consider weather conditions, etc. especially if we’re talking in the mountains or outside of major cities.
Doing it blindly has led to numerous people dying by stupidly getting themselves into more and more dumb situations.
People still got stuck using paper maps. Sometimes they even died. It was much rarer and people were more aware they were lost, instead of persisting thinking they weren’t. So different failure modes.
Paper maps were very inconvenient, so people dealt with it using more human interaction and adding more buffer time. Which had its own costs.
In areas where there are active bad actors (Eastern Europe nowadays, many other areas in that region sometimes), it leads to actively pathological outcomes.
It is now rare for anyone outside of conflict zones to use paper maps except for specific commercial and gov’t uses, and even then they often use digitized ‘paper’ maps.
Basically, participants spent less than half an hour, 4 times, over 4 months, writing some bullcrap SAT type essay. Some participants used AI.
So to accept the premise of the article, using an AI tool once a month for 20 minutes caused noticeable brain rot. It is silly on its face.
What the study actually showed, people don't have an investment or strong memory to output they didn't produce. Again, this is a BS essay written (mostly by undergrads) in 20 minutes, so not likely to be deep in any capacity. So to extrapolate, if you have a task that requires you to understand the output, you are less likely to have a grasp of it if you didn't help produce the output. This would also be true of work some other person did.
- Learning how to solder
- Learning how to use a multimeter
- Learning to build basic circuits on breadboards
- learning about solar panels, mppt, battery management systems, and different variations of li-ion batteries
- learning about LoRa band / meshtastic / how to build my own antenna
And every single one of these things I've learned I've also applied practically to experiment and learn more. I'm doing things with my brain that I couldn't do before, and it's great. When something doesn't work like I thought it would, AI helps me understand where I may have gone wrong, I ask it a ton of questions, and I try again until I understand how it works and how to prove it.
You could say you can learn all of this from YouTube, but I can't stand watching videos. I have a massive textbook about electronics, but it doesn't help me break down different paths to what I actually want to do.
And to be blunt: I like making mistakes and breaking things to learn. That strategy works great for software (not in prod obviously...), but now I can do it reasonably effectively for cheap electronics too.
Working these from text seems to be the hardest way I could think to learn them. I've yet to encounter a written description as to what it feels like to solder, what a good/bad job actually looks like, etc. A well shot video is much better at showing you what you need to do (although finding one is getting more and more difficult)
Being able to ask it stupid questions and edge cases is also something I like with LLMs, like I would propose a design for something (ex: a usb battery pack w/ lifepo4 batts that could charge my phone and be charged by solar at the same time), it would say what it didn't like about my design, counter with its own, then I would try to change aspects of their design to see "what would happen if .." and it would explain why it chose a particular component or design choice and what my change would do and the trade-offs, risks, etc other paths to building it with that, etc. Those types of interactions are probably the best for me actually understanding things, helps me understand limitations and test my assumptions interactively.
Like you, I don't like watching videos. However, the web also has text, the same text used to train the LLMs that you used.
> When something doesn't work like I thought it would, AI helps me understand where I may have went wrong, I ask it a ton of questions, and I try again until I understand how it works and how to prove it.
Likewise, but I would have to ask either the real world or written docs.
I'm glad you've found a way to learn with LLMs. Just remember that people have been learning without LLMs for a long time, and it is not at all clear that LLMs are a better way to learn than other methods.
I think the problem was all of the getting started guides didn't really solve problems I cared about, they're just like "see, a light! isn't that neat?" and then I get bored and impatient and don't internalize anything. The textbooks had theory but so much of it I would forget most of it before I could use it and actually learn. Then when I tried to build something actually interesting to me, I didn't actually understand the fundamentals, it always fails, Google doesn't help me find out why because it could be a million things and no human in my life understands this stuff either, so I would just go back to software.
It could be LLMs are at least possibly better for certain people to learn certain things in certain situations.
I’m going to use 2x the amount of AI that I was planning to use today.
I actively use AI to research, question and argue a lot, this pushes me to reason a lot more than I normally would.
Today's example:
- recognize docs are missing for a feature
- have AI explore the code to figure out what's happening
- back and forth for hours trying to find how to document, rename, refactor, improve, write mermaid charts, stress over naming to be as simple as possible
The only step I'm doing less of is the exploration/search one, because an LLM can process a lot more text than I can at the same time. But for every other step I am pushing myself to think more, and more profoundly, than I would without an LLM, because gathering the same amount of information would've been too exhausting to proceed with this.
Sure, it may have spared me from digging into mermaid too, for what it's worth.
So yes, lose some, win others, albeit in reality no work would've been done at all without the LLM enabling it. I would've moved on to another mundane task such as "update i18n formatting of dates for Swiss German customers".
Is it safe to say that LLMs are, in essence, making us "dumber"? No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "brain damage", "passivity", "trimming" , "collapse" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it.
[1]: https://www.media.mit.edu/projects/your-brain-on-chatgpt/ove...
Using LLMs to replace the effort we would've otherwise endured to complete a task short-circuits that exercising function, and I would suggest it is potentially addictive because it's a near-instant reward for little work.
It would be interesting to see a longitudinal study on the effect of LLMs on collective attention spans and academic scores where testing is conducted on pen and paper.
It's like a drug. You start using it, and think you have super powers, and then you've forgotten how to think, and you need AI just to maybe be as smart as you were before.
Every company will need enterprise AI solutions just to maybe get the same amount of productivity as they got before without it.
On that note, reading the ChatGPT-esque summary in the linked article gave me more brain damage than any AI I've used so far
> 83.3% of LLM users were unable to quote even one sentence from the essay they had just written
Not sure why you need to wire up an EEG; it's pretty obvious that they simply did _not_ write the essay, the LLM did it for them, and they likely didn't even read it, so there is no surprise that they don't remember what didn't pass through their own thinking apparatus properly.
The idea that I would say 'write an essay on X' and then never look at the output is kind of wild. I guess that's vibe writing instead of vibe coding.
When writing was invented, societies started depending on long form memorization less, which is a cognitive "decline". When calculators were invented, societies started depending on mental math less, which is a cognitive "decline".
I'm sure LLMs are doing the same thing. People aren't getting dumber, they are just outsourcing tasks more, so that their brains spend more time on the tasks that can't be outsourced.
People who maintain a high level of curiosity or a have drive to create things will most assuredly benefit from using AI to outsource work that doesn't support those drives. It has the potential to free up more time for creative endeavors or those that require more deep thinking. Few would argue the benefit there.
Unfortunately, anti-intellectualism is rampant, media literacy is in decline, and a lot of people are content to consume content and not think unless they absolutely have to. Dopamine is a helluva drug.
If LLMs reduce the cognitive effort at work, and the people go home to doom scroll on social media or veg out in front of their streaming media of choice, it seems that we're heading down the path of creating a society of mindless automatons. Idiocracy is cited so often today that I hate to do so myself, but it seems increasingly prescient.
Edit: I also don't think that AI will enable a greater work-life harmony. The pandemic showed that a large number of jobs could effectively be done remotely. However, after the pandemic, there was significant "Return to Office" movement that almost seemed like retribution for believing we could achieve a better balance. Corporations won't pass on the time savings to their employees and enable things like 4-day work weeks. They'll simply expect more productivity from the employees they have.
Also, domesticated dogs show indications of lower intelligence and memory than wolves. They don't have to plan complex strategies to find and kill food, anymore.
But humans need jobs, and jobs need to capture value from society. So we do actually still have to stay sharp, whatever form "sharp" takes.
If you're an entrepreneur, your job is to please the customer and to squeeze your vendors and employees. You still take little to no part in directly taking care of yourself, except as a hobby. Unless you want to be congratulated for wiping your own ass or lifting a fork to your mouth.
Wouldn't that be the expected result here? Less knowledge, more questions?
When I use LLMs, it’s less about patching holes in my memory and more about taking an idea a few steps further than I otherwise might. For me it’s expanding the surface area of inquiry, not shrinking it. If the study’s thesis were true in my case, I’d expect to be less curious, not more.
Now that said I also have a healthy dose of skepticism for all output but I find for the general case I can at least explore my thoughts further than what I may have done in the past.
I don't have a dog in this fight, but "asking more questions" could be evidence of cognitive decline if you're having to ask more questions than ever!
It's easy to twist evidence to fit biases, which is why I'd hold judgement until better evidence comes through.
But if I'm teaching a class, and one student keeps asking questions that they feel the material raised, I don't tend to think "brain damage". I think "engaged and interested student".
Personally, I find myself often asking AI about things I wouldn't have been bothered to find out about before.
For example, I've always noticed these funny little grates on the outside of houses near me and wondered what they are. Googling "little grates outside houses" doesn't help at all. Give AI a vague-ish description and it instantly tells you they are old boot scrapers.
Maybe there is a movie in the back of my head or a song. Typical search engine queries would never find it. I can give super vague references to a LLM and with search enabled get an answer that’s correct often enough.
If I’m constantly asking “what does this mean again?” that would signal decline. But if I’m asking “what if I combine this with X?” or “what are the tradeoffs of Y?” that feels like the opposite: more engagement, not less.
That’s why I’m skeptical of blanket claims from one study; the lived experience doesn’t map so cleanly.
My vehicle has a number of self-driving capabilities. When I used them, my brain rapidly stopped attending to the functions I'd given over, to the extent that there was a "gap" before I noticed it was about to do the wrong thing. On resumption of performing that work myself, it was almost as if I had forgotten some elements of it for a moment while my brain sorted it out.
No real reason to think that outsourcing our thinking/writing/etc will cause our brains to respond any differently. Most of the "reasoned" arguments I see against that idea seem based on false equivalences.
What's really bothering me, though, is that I enjoy my job less when using an LLM. I feel less accomplished, I learn less, and I overall don't derive the same value from my work. But, on the flip side, by not adopting an LLM I'll be slower than my peers, which then also impacts my job negatively.
So it's like being stuck between a rock and a hard place - I don't enjoy the LLM usage but feel somewhat obligated to.
At some point AI will probably be like calculators: once everyone is using it for everything, that will be a new and different normal, and the expectations and ways of judging quality will be different than today's.
Once everyone is doing the same one weird trick as you, it's no longer useful. You can no longer pretend to be a developer or an artist etc.
There will still be a sea of bottom-feeders doing the same thing, but they will just be universally recognized as cheap junk. And that's actually fine, kinda. There is a place and a use for cheap junk that just barely does something, the same as a cheap junky screwdriver or whatever.
It wasn't immediately clear what they actually had the subjects do. It seems like they wrote an essay, which... duh? I would bet brain activity would be similar -- if not identical -- to an LLM user's if the subjects were asked to have the other cohorts write their essay for them.
Their trial design and interpretation of results are not properly done (i.e., they make an unfair comparison of LLM users to non-LLM users), so they can't really make the kinds of claims they are making.
This would not stand up to peer review in its current form.
I'm also saying this as someone who generally does believe these declines exist, but this is not the evidence it claims to be.
Do you have links or citations to people saying these claims?
Comes down to:
- Self-selection bias
- Trial design
- Dubious interpretations of neural connectivity
Calculators reduced our capabilities in mental and pencil-paper arithmetic. Graphing calculators later reduced our capacity to sketch curves, and in turn, our intuition in working directly with equations themselves. Power tools and electric mixers reduced our grip strength. Cheap long distance plans and electronic messaging reduced our collective abilities in long-form letter writing. The written word decimated the population of bards who could recite Homer from memory.
It's not that there aren't pitfalls and failure modes to watch out for, but the framing as a "general decline" is tired, moralizing, motivated, clickbait.
And now people make bad decisions in their daily life about money etc. Most people can't do the math in their head but they also aren't using their calculator at the grocery store to avoid being taken advantage of. The math doesn't get done.
The lesson isn't that we survived calculators, it's that they did dull us, and our general thinking and creativity are about to get likewise dulled.
But didn’t pocket calculators present the same risk / panic?
Anecdotally, this is how I felt when I tried out AI agents to help me write code (vibe coding). I always review the code and I ask it to break it down into smaller steps but because I didn't actually write and think of the code myself, I don't have it all in my brain. Sure, yes I can spend a lot of time really going through it and building my mental model but it's not the same (for me).
But this is also how I felt when I managed a small team once. When you start to manage more and code less, you have to let go of the fact that you have more intimate knowledge of the codebase and place that trust in your team. But at least you have a team of humans.
AI agentic coding is like shifting your job from developer to manager. Like the article that was posted yesterday said: 'treating AI like a "junior developer who doesn't learn"' [1,2].
One good thing I like about AI is that it's forcing people to write more documentation. No more complaining about that.
1. https://www.sanity.io/blog/first-attempt-will-be-95-garbage
2. https://news.ycombinator.com/item?id=45107962
All the headings and bullets and phrases like "The findings are clear:" stick out like a sore thumb.
[1] https://www.cell.com/trends/cognitive-sciences/abstract/S136...
AI solves the 2-sigma problem when used correctly.
AI is extremely neurodegenerative when used incorrectly.
The people using it as a research assistant to discover quality sources they can dive into, and as a tutor while working through those resources, are getting smarter.
The people using it as an “oracle made from magic talking sand” are getting dumber.
To be fair, the same thing is true of the web in general, but not to the extreme I’ve been seeing with AI.
I’m predicting the bell curve of IQ is going to flatten quite a bit over the next decade, as people shift two sigma in both directions.
>Everyone Is Cheating Their Way Through College. ChatGPT has unraveled the entire academic project.
https://archive.ph/ZKZiY
Sure you do, and maybe it's really an actual benefit for ya. Not for most, though. For young folks still going through education, this is devastating. If I didn't have kids I wouldn't care (less quality competition at work), but I do (they're too young to be affected by it now, and by the time they're allowed to use these, frameworks for use and restrictions will already be in place).
But since maybe 30% of folks here are directly or indirectly dependent on LLMs being pushed down every possible throat (and then some), I expect much more denial and resistance to critiques of their little pets or investments.
LLMs may end up being both educationally valuable in certain contexts for certain users, and totally unsuitable for developing brains. I would err towards caution for young minds especially.
My optimistic take is that the rise of AI in education could cause more workplaces to move away from "must have xyz degree" and actually determine if the candidate has the skills needed.
For this reason, I don't feel as optimistic as you do. I worry instead that equality gaps will widen significantly: there will be the majority which abuses AI and graduates with empty brains, and there will be the minority who somehow manage to avoid doing that (e.g. lucky enough to have parents with sufficient foresight to take preventative measures with their children).
https://nypost.com/2025/08/19/world-news/china-restricts-ai-...
"That’s because the Chinese Communist Party knows their youth learn less when they use artificial intelligence. Surely, President Xi Jinping is reveling in this leg up over American students, who are using AI as a crutch and missing out on valuable learning experiences as a result.
It’s just one of the ways China protects their youth, while we feed ours into the jaws of Big Tech in the name of progress."
https://www.scmp.com/tech/policy/article/3323959/chinas-soci...
Let's say I'm a writer of no skill who still wants attention. I could spend years learning to write better, but I still might not get any attention.
Or I could use AI to write something today. It won't be all that interesting, because AI still can't write all that well, but it may be better than I can do on my own, and I can get attention today.
If you care about your own growth (or even not dwindling) as a human, that's a trap. But not everyone cares about that...
I have recently been finding it noticeably more difficult to come up with the word I'm thinking of. Is this because I've been spending more time scrolling than reading? I have no idea.
1: https://www.changetechnically.fyi/2396236/episodes/17378968-...
I think maybe they are project managers, since the programming is outsourced to AI, but the idea doesn't seem to catch on there.
Don’t sugarcoat it. Tell us how you really feel.
Probably both are true: you should try them out and then use them where they are useful, not for everything.
None of my professional life reflects that whatsoever. When used well, LLMs are exceptional at putting out large amounts of code of sufficient quality. My peers have switched entire engineering departments to LLM-first development and are reporting that the whole org is moving 2x as fast, even after they fired the 50% of devs who couldn't make the switch and didn't hire replacements.
If you think LLM coding is a fad, your head is in the sand.
For this kind of low stakes, easily verifiable task it’s hard to argue against using LLMs for me.
I have no doubt that volumes of code are being generated and LGTM'd.
It used to take me days or even multiple sprints to complete large-scale infrastructure projects, largely because of having to repeatedly reference Terraform cloud provider docs for every step along the way.
Now I use Claude Code daily. I use an .md to describe what I want in as much detail as possible and with whatever idiosyncrasies or caveats I know are important from a career of doing this stuff, and then I go make coffee and come back to 99% working code (sometimes there are syntax errors due to provider / API updates).
I love learning, and I love coding. But I am hired to get things done, and to succeed (both personally and in my role, which is directly tied to our organization's security, compliance, and scalability) I can't spend two weeks on my pet projects for self-edification. I also have to worry about the million things that Claude CAN'T do for me yet, so whatever it can take off of my plate is priceless.
I say the same things to my non-tech friends: don't worry about it 'coming for your job' yet - just consider that your output and perceived worth as an employee could benefit greatly from it. If it comes down to two awesome people but one can produce even 2x the amount of work using AI, the choice is obvious.
https://edition.cnn.com/2025/08/27/us/alaska-f-35-crash-acci...
But for it to be useful, you have to already know what you're doing. You need to tell it where to look, and review what it does carefully. Also, sometimes I find particularly hairy bits of code need to be written completely by hand, so I can fully internalise the problem. Only once I've internalised the hard parts of the codebase can I effectively guide CC. Plus there are so many other things in my day-to-day where next-token predictors are just not useful.
In short, it's useful, but no one's losing a job because it exists. Also, the idea of having non-experts manage software systems at any moderate-and-above level of complexity is still laughable.
Like any new tool that automates a human process, humans must still learn the manual process to understand the skill.
Students should still learn to write all their code manually and build things from the ground up before learning to use AI as an assistant.
personally I think everyone should shut up
I've found it both helpful and dangerous. It's great for expanding scope, obviously, like a better search engine.
But I've also noticed further some of the "harmful patterns", I guess, that I would not have noticed about... myself? For example, AI is way too eager to "solve things" when given a prompt, even if you give it an abstract one. It's unable to take a step back and just... think?
And hey, I notice that I do that too! Lol.
It's helped me realize more refined "stages" of thinking I guess, even beyond just "plan" and "solve".
But for sure a lot of the time I'm just lazy and ask AI to just "go do it" and turn off critical thinking, hoping that it can just 1 shot the problem instead of me breaking it down. Sometimes it genuinely works. Often it doesn't.
I think if I stay way more intentional with my thinking, I can use it to good use. Which will probably reduce AI usage - but it's the first principles of real critical thinking, not the usage of AI.
---
These kinds of studies remind me of when my parents told me "stop getting addicted to games" as a kid. Sure, anyone can observe effects, it takes real brains to really try and understand the first principles effects. Addiction went away in a flash once I understood the principles, lol.
Not many people can perform mental arithmetic beyond single-digit numbers. Just plug it into a calculator...
We're at the point of people plugging their thoughts into an LLM and having it do the work for them... what's going to happen to thinking?
I think what you'd want to measure is someone completing a task manually and someone completing n times the tasks with a copilot.
However, I think that take is too short-sighted and doesn't take into account the effect that these products have on minds that have not yet reached maturity. What happens when you've been using ChatGPT since grade school and have effectively offloaded all the hard stuff to AI through college? Those people won't be using it as a force multiplier - they will be using it to perform basic tasks. Ray-Ban sells glasses now with LLMs built in with a camera and microphone so you can constantly interact with it all day. What happens when everyone has one of these devices and use it for everything?
I wouldn't call it "cognitive decline", more "a less deep understanding of the subject".
Try solving bugs in your vibe-coded projects... It's painful; you haven't learned anything while building them. And as a result you don't fully grasp how your creation works.
LLM are tools, but also shortcuts, and humans learn by doing ¯\_(ツ)_/¯
This is pretty obvious to me after using LLMs for various tasks over the past years.
I am offended by coworkers who submit incompletely considered, visibly LLM generated code.
These coworkers are dragging my team down.
What I can comment on is how valuable and energizing it is for me to cooperatively code with LLM's using agents.
I find it sad to hear when someone finds this experience disappointing, and I wonder what could go wrong to make it so.
Maybe it's my natural ADHD tendencies, but having that implementation/process noise removed from my workflow has been transformational. I joke about having gone super saiyan, but it's for real. In the last month, I've gotten 3 papers in pre-print ready state, I'm working on a new model architecture that I'm about to test on ARC-AGI, and I've gotten ~20 projects to initial release or very close (several of which concretely advance SOTA).
Here's what I think: AI causes you to forget how to program but causes you to learn how to plan.
Also, AI enhances who you are. Dummies get dumber. Smarties get smarter.
But that's not proven. It's anecdote. And I don't believe anyone knows what is really happening and those that claim to are counterproductive.
> 83.3% of LLM users were unable to quote even one sentence from the essay they had just written.
> In contrast, 88.9% of Search and Brain-only users could quote accurately.
> 0% of LLM users could produce a correct quote, while most Brain-only and Search users could.
Reminds me of my coworkers who have literally no idea what Chat GPT put into their PR from last week.
Could a person, armed with ChatGPT, come up with a better solution in a real world problem than without ChatGPT? Maybe that's what actually matters.
But how can they discuss any content if even the "writer" does not remember what they wrote.
I think we'll see a return to the apprentice style of institution, where people try to create the best real-world solutions possible with LLMs, 3D printers, etc. Then they'll use recorded college courses like our grandparents used books.
https://youtu.be/omYP8IUXQTs?si=SgehtLWjnNho5MR6
And so it is with many things. I wrote cursive right through the end of my high school years, but while I can type well on a computer, I have trouble even writing block lettering without mistakes now, and cursive is a lost cause.
Ubiquitous electronic calculators have eroded the heroic mental calculation skills of old. And now artificial "thinking machines" to do the thinking for you cause your brain to atrophy. Colour me surprised. The Whispering Earring story was mentioned here just recently but is totally topical.
https://croissanthology.com/earring
The article actually contains the sentence "The machines aren’t just taking over our work—they’re taking over our minds." which reminds me more of Reefer Madness than an honest critique of modern tech.
Rather than coming up with the right answers?
I think a better interpretation would be to say that LLMs give people the ability to "filter out" certain tasks in our brains. Maybe a good parallel would be to point out that some drivers are able to drive long distances on what is essentially an "auto-pilot". When this happens they are able to drive correctly but don't really register every single action they've taken during the process.
In this study you are asking for information that is irrelevant (to the participant). So, I think it is expected that people would filter it out if given the chance.
[edit] Forgot to link the related xkcd: https://xkcd.com/1414/
I’ll do it once or twice, tell the llm to do it and reference the changes I made and it’s usually passable. It’s not fit for anything more imo.
How is that not utter garbage? You're comparing text that is barely more than a forum comment, and noticing that people who spend the short time thinking and writing are engaging in different activity from people who spend the time using research tools, and different activity again from people who spend the time asking an AI (and waiting for it) to generate content.
"Writing is nature’s way of letting you know how sloppy your thinking is." -- Guindon
I would argue that it helps kids learn how to organize and formulate coherent thoughts and communicate with others. I'm sure it helps them do homework, too.
AI is no different. Most will use it and not learn the fundamentals. There’s still lots of work for those people. Then some of us are doing things like looking at the state machines that rust async code generation produces or inspecting what the Java JIT is producing and still others are hacking ARM assembly. I use AI to take care of the boring bits, just like writing a nice UI in C++ was tedious back in 1990 so we used VB for that.
And it is something we need to talk about loudly, but I guess it wouldn't crank up the follower counts or valuations of AI grifters.
There will always be people who misuse something, but we should not hurt those who do not. Same with drugs. There are functional junkies who know when to stop, go on a tolerance break, take just enough of a dose and so forth, vs. the irresponsible ones. The situation is quite similar and I do not want AI to be "banned" (assuming it could) because of people who misuse LLMs.
People, let us have nice things.
As for the article... did they not say the same thing about search engines and Wikipedia? Do you remember how making a cheat sheet actually helps us learn (by writing down the things you'd want to cheat with)? The problem is, people do not even bother reading the output of the LLM, and that is on them.
The Internet was supposed to be this wonderful free place with all information available and unbiased, not the cesspool of scams and tracking that makes 1984 look like a fairytale for children. Atomic energy was supposed to free mankind from the everlasting struggle for energy, end wars, and whatnot. LLMs were supposed to be X and not Y, and used as Z and not BBCCD.
Given what the population loses overall compared to what's gained (really, what? a mild efficiency increase sometimes experienced at the individual level, sometimes made up for PR), I consider these LLMs a net loss for mankind as a whole.
The above should tell you something about human nature, and how naive some of the brightest of us are.
If it is a human nature issue (with which I agree), then we are in deep shit, and this is why we cannot have nice things.
Educate, and if that fails, then punish those who "misuse" it. I do not have a better idea. It works for me quite well for coding, and it will continue to work as long as it is not going to get nerfed.
Well, cheers to an even bigger gap between the elite who can afford a good education and upbringing and the cheap, crappy rest. A number of sci-fi novels come to mind where poor, semi-mindless masses are governed by 'educated' elites. I always wondered how such a society must have screwed up badly in the past to end up like that. Nope, the road to hell is indeed paved with good intentions and small little steps that seem innocent or even beneficial on their own, in their time.
Most importantly, I did not remember anything (which is a good thing because half of the output is wrong). I then switched to Stackoverflow etc. instead of the "AI". Suddenly my mental maps worked again, I recalled what I read, programming was fun again, the results were correct and the process much faster.
"Won't touch it, I'd never infect my codebase with whatever garbage that thing could output" -> ChatGPT for a small function here or there -> Cursor/Copilot style autocomplete -> Claude Code fully automating 90% of my tasks.
It felt like magic at first once reaching that last (current) point. In a lot of ways for certain things it still is. But it's becoming clearer and clearer that this will never be a silver bullet, and I'm ready to evolve further to "It's another tool in the toolbox to be applied judiciously when and where it makes sense, which it usually does not.". I've also come to greatly distrust anything an LLM says that isn't verified by a domain expert.
I've also felt a great amount of joy from my work go away over this time. Much as the artisans of old who were forced to sit back and supervise the automated machines taking over their craft churn out crappier versions of something faster. There's more to this than just being an old fart who doesn't want to change. We all got into this field for a reason, and a huge part of that reason is that it brings us joy. Without that joy we are going to burn out quickly, and quality is going to nosedive.
This makes complete sense though. We're simply trying to automate the human thinking process like we try to use technology to automate/handoff everything else.
Discussion then: https://news.ycombinator.com/item?id=44286277
The age of social media and constant distraction already atrophies the ability to maintain sustained focus. Who reads a book these days, never mind a thick book requiring struggle to master? That requires immersion, sustained engagement, persevering through discomfort, and denying yourself indulgence in all sorts of temptations and enticements to get a cheap fix. It requires postponed gratification, or a gratification that is more subtle and measured and piecemeal rather than some sharp spike. We become conditioned in Pavlovian fashion, more habituated to such behavior, the more we engage in such behavior.
The reliance on AI for writing is partly rooted in the failure to recognize that writing is a form of engagement with the material. Clear writing is a way of developing knowledge and understanding. It helps uncover what you understand and what you don't. If you can't explain something, you don't know it well enough to have clear ideas about it. What good does an AI do you - you as a knowing subject - if it does the "writing" for you? You, personally, don't become wiser or better. You don't become fit by watching others exercise.
This isn't to say AI has no purpose, but our attitude toward technology is often irresponsible. We think that if we have the power to do something, we are missing out by not using it. This is boneheaded. The ultimate measure is whether the technology is good for you in some particular use case. Sometimes, we make prudential allowances for practical reasons. There can be a place for AI to "write" for us, but there are plenty of cases where it is simply senseless to use. You need to be prudent, or you end up abusing the technology.
Thinking about it myself, and looking at the questions and time limits, I'm not sure how I would be able to navigate that distinction given only 20 minutes. The way I would use an LLM to aid me in writing an essay on the topic wouldn't fit within the time limit, so even with an LLM, I would likely stick to brain only except in a few specific cases that might occur (forgetting how to spell a word or forgetting the name for a concept).
So this study is likely applicable to similarly timed situations, like letting students use LLMs on a test, but that's one I would have already seen as extremely problematic for learning to begin with (granted, it's still worthwhile to find evidence to back even the 'obvious' conclusions).
Passive AI use, where you let something else think for you, will obviously cause cognitive decline.
Active use of AI as a thought partner, where you keep learning as you go, feels different.
The issue with studying 18-22 year olds is their prefrontal cortex (a center of logic, will power, focus, reasoning, discipline) is not fully developed until 26. But that probably doesn't matter if the study is trying to make a point about technology.
The art of telling fake information from real could also increase cognitive capacity.
https://www.ncbi.nlm.nih.gov/search/research-news/3283/
Because the people around you affect your life. Presumably you don’t want to live in a world of stupid people who are incapable of critical thought or doing anything which are not direct instructions from a machine. Think about it every time you are frustrated by your interaction with a system you have no choice but to use, such as a bank or a government branch.
John Green has a quote which I think fits, even if it's about paying taxes for public education rather than LLM use: https://www.goodreads.com/quotes/1390885-public-education-do...
>the gentle, childlike Eloi and the subterranean, predatory Morlocks.
Seems like a nice metaphor for the current two political parties we are provided with.
Wikipedia lists several. Do you recall which you read?
https://en.wikipedia.org/wiki/The_Time_Machine#Comics
But it does highlight that this mind-slop decline is not new in any way even if it may have accelerated with the decline and erosion of standards.
Think of it what you want, but if the standards that led to a state everyone really enjoys and benefits from are done away with, inevitably that enjoyable state everyone benefited from and you really like will start crumbling all around you.
AI is not really unusual in this manner, other than maybe that it is squarely hitting a group and population like public health policy journalists and programmers that previously thought they were immune because they were engaged in writing. Yes, programmers are essentially just writers.
Like everything else in our life, cognition is "use it or lose it". Outsourcing your decision making and critical thinking to a fancy autocomplete with sycophantic tendencies and incapable of reasoning sure is fun, but as the study found, it has its downsides.
Over the last three years or so, I have seen more and more posts where the position just doesn't make sense. I mean, ten years ago, there were posts on HN that I disagreed with that I upvoted anyway, because they made me think. That has become much more rare. An increasing number of posts now are just... weird (I don't know a better word for it). Not thoughtful, not interesting (even if wrong), just weird.
I can't prove that any of them are AI-generated. But I suspect that at least some of them are.
Given that AI is literally just words on a monitor just like the rest of the internet, I have a strong prior it's not "reprogram[ming]" anyone's mind, at least not in some manner that, e.g. heavy Reddit use might.
We have decades of research - brain scans, studies, experiments, imaging, stimuli responses, etc - proving that when a human no longer has to think about performing a skill, that skill immediately begins to atrophy and the brain adapts accordingly. It’s why line workers at McDonalds don’t actually learn how to properly cook food (it’s all been procedured-out and automated where possible to eliminate the need for critical thinking skills, thus lowering the quality of labor needed to function), and it’s why - at present - we’re effectively training a cohort of humans who lack critical thinking and reasoning skills because “that’s what the AI is for”.
This is something I’ve known about long before the current LLM craze, and it’s why I’ve always been wary of or hostile to “aggressively helpful” tools like some implementations of autocorrect, or some driving aids: I am not just trying to do a thing quickly, I am trying to do it well, and that requires repeatedly practicing a skill in order to improve.
Studies like these continue to support my anxiety that we’re dumbing down the best technical generation ever into little more than agent managers and prompt engineers who can’t solve their own problems anymore without subscribing to an AI service.
My point is that I don't see LLM's effect on the brain as being anything more than the normal experience we have of living and that the level of drama the headline suggests is unwarranted. I don't believe in infohazards.
Might they result in skill atrophy? For sure! But it's the same kind of atrophy we saw when, e.g. transitioning from paper maps to digital ones, or from memorizing phone numbers to handing out email addresses. We apply the neurons we save by no longer learning paper map navigation and such to other domains of life.
The process has been ongoing since homo erectus figured out that if you bang a rock hard enough, you get a knife. So what?
Now, you could argue that, when we use AI, critical thinking skills are more important, because we have to check the output of a tool that is quite prone to error. But in actual use, many people won't do that. We'll be back at "Computers Do Not Lie" (look for the song on Youtube if you're not familiar with it), only with a much higher error rate.
If the Victorians had scientific studies showing that, you might have a point. Instead, you just have a flawed analogy.
And, why the scare quotes? If you can point to some actual flaws in the study, do so. If not, you're just dismissing a study that you don't agree with, but you have no actual basis for doing so. Whereas the study does give us a basis for accepting its conclusions.
N=54, students and academics only (mostly undergrads), impossible to blind, and, worst of all, the conclusion of the study supports a certain kind of anti-technology moralizing people want to do anyway. I'd be shocked if it replicated, and even if it did, it wouldn't mean much concretely.
You could run the same experiment comparing paper maps versus Google Maps in a simulated navigation scenario. I'd bet the paper map group would score higher on various comprehension metrics. So what? Does that make digital maps bad for us? That's the implication of the article, and I don't think the inference is warranted.
Because of studies like this we know the burning of fossil fuels is a dead-end for us and our climate, and due to that have developed alternative methods of generating energy.
And the study actually proved that LLM usage reprograms your brain and makes you a dumbass. Social media usage does as well; those two things are not exclusive. If anything, their effects compound on an already pretty dumb and gullible population. So if your argument is 'but what about Reddit', that's a non-argument called 'whataboutism'. Look it up, and hopefully it might give you a hint as to why you are getting downvoted.
There have been three recent studies showing that:
- 1. 95% LLM projects fail in the enterprise https://fortune.com/2025/08/18/mit-report-95-percent-generat...
- 2. Experienced developers get 19% less productive when using an LLM https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
- 3. LLM usage makes you dumber https://publichealthpolicyjournal.com/mit-study-finds-artifi...
We reached a stage where people on the internet mistake their opinion on a subject to be as relevant as a study on the subject.
If you don't have another study or haven't done the science to disprove this study, how come you so easily dismiss a study that actually took time, data, and the scientific method to reach a conclusion? I feel we've got to actively and firmly call out that kind of behavior and ridicule it.