Ask HN: Are LLMs useful or harmful when learning to program?
8 points by dominicq | 16 comments | 5/11/2025, 12:07:33 PM
Is there any consensus on whether studying with LLMs makes you a good programmer? Or should you perhaps do it the old-fashioned way, by reading manuals and banging your head against the problem?
Case in point: I asked an LLM to generate some code for me. It didn't use generics (a language feature) and gave me some shit code.
I had to prompt it several more times to give me generic code with better typings.
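To illustrate the kind of difference I mean (not the actual code from my case, and the thread doesn't name the language; this is a minimal hypothetical sketch in TypeScript), the non-generic version throws type information away with `any`, while the generic one lets the element type flow through to the caller:

    // Non-generic: everything degrades to `any`, so callers lose type safety.
    function firstItem(items: any[]): any {
      return items[0];
    }

    // Generic: the element type flows through, so the result is properly typed.
    function firstTyped<T>(items: T[]): T | undefined {
      return items[0];
    }

    const a = firstItem([1, 2, 3]);   // inferred as any
    const b = firstTyped([1, 2, 3]);  // inferred as number | undefined

The generic version keeps the information the compiler needs to catch mistakes at the call site, which is the kind of "better typings" the extra prompting was for.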
I think it would be helpful for a total nooblet to get the hang of the basics, but if they rely on the LLM too much beyond a certain point, they'll face diminishing returns.
Think about it. There is so much knowledge in the world. Anyone can do anything to a satisfactory degree pretty quickly. But to really understand something takes experience and self-discovery. And I'm not speaking about mastery. Just expertise.
The problem is when you're learning something completely foreign like learning to program in your first language, you don't really have enough context to ask meaningful questions. In that case, it is simply better to do things like read manuals and bang your head against the problem.
For instance, one of the best ways I've found to learn a new language or framework or technique is to find a working example of something, then take it apart piece by piece to see how it all fits together. LLMs can work really well here. They can give you a super basic example of how something works, as well as an explanation you can use as a jumping off point for further research.
And those basic examples can be surprisingly hard to find elsewhere. A lot of open source systems are designed for people with no programming knowledge whatsoever, and in a way that they can handle 52 million possible use cases with all the caveats that brings along. So when you're trying to learn from them, you end up having to untangle hundreds of conditions and feature flags and config options and other things designed for use cases you simply don't have. LLMs can provide a simple, customised example that avoids that.
That said, you have to be willing to try things yourself, and put in the effort needed to figure out why the code the LLM returned does what it does, and how it works on a technical level. If you're just throwing problems at the tool and letting it do all the work (like many vibe coders now), you're not really learning anything.
However, over-reliance on it - like with all technologies - doesn't end well.
The more you rely on it as a source of truth, as your mentor, or as the executor of your high-level intentions, the more harmful it is. Obviously.
When you're a beginner, you can't possibly tell good decisions from bad, or right from wrong.
Whatever mental-model and thinking flaws you start with are going to be amplified, and hidden behind a false sense of progress (the more you rely on the LLM, the more you trust whatever terrible code it spits out).
If you treat it just as a sophisticated algorithm to save some time on typing, or to surface alternatives and edge cases, then it's very useful for speeding up your learning.
No matter what purpose you're trying to achieve, the success of a tool comes from applying the correct tool to the correct problem in the correct way. LLMs are kinda cool in that they're flexible enough to be a viable tool for many things, but the other two criteria are up to you.
Dig into problems, try to understand why something was solved a specific way, ask what the cons of doing it another way would be, and let it know you are learning and want to understand more of the fundamentals.
LLMs are just a tool, so try and use them to help you learn the fundamentals of the language.
It could also be argued that using StackOverflow to solve a problem doesn't help you understand a problem, and equally, just asking an LLM for answers doesn't help you understand the language.
Giving it a problem statement and just blindly asking it for an answer will always yield the worst result, but I find this is often our first instinct.
Working with it to solve the problem in a "step-by-step" manner obviously yields a much better result as you tend to understand how it got to the answer.
I look at it as similar to rote-memorization vs. learning/understanding.
Most often I now use it to help find the "right question" for me to ask when starting with a new topic or domain, or to synthesize docs that were difficult for me to understand into simpler, more digestible terms.
But the learning happens when you bang your head. It has to hurt the same way going to the gym hurts. If it doesn't, you're not training and probably you're not really learning.
TL;DR: There are some benefits, but for students/junior engineers it's mostly not worth it, or actively harmful.
1) Students using LLMs to code or get answers generally learn much slower than those who do it the old-fashioned way (we call it "natty coding"). A very important part of the learning experience is the effort to grok the problem/concept in your own mind, and finding resources to support/challenge your own thinking. Certainly an answer from a chatbot can be one of those resources, but empirically students tend to just accept the "quickest" answer and move on (a bad habit from schooling). Eventually it hurts them down the road, since their "fuzzy" understanding compounds over time. It's similar to the old copy-from-StackOverflow phenomenon, but on steroids. If students are using these new tools as the new search, then they still need to learn to read primary sources (i.e. the code, or at least the docs).
2) I think one of the problems right now is that we're very used to measuring learning via productivity, i.e. a student's ability to produce a thing is taken as a measurement of their learning. The new generation of LLM assistants breaks this model of assessment. And I think a lot of students feel the need to get on the bandwagon because these tools produce very immediate benefits (like doing better on homework) while incurring long-term costs. What we're trying to do is teach them about learning and education first, so they at least understand the tradeoffs they are making by using these new AI tools.
3) Where we've found better uses for these new tools is in situations where the student/engineer understands that it's an adversarial relationship, i.e. there's a 20% chance of bullshit. This positioning puts the accountability on the human operators (you can't say the AI "told me so") and also helps them train their critical-analysis skills. But it's not how most tools are positioned/designed from a product perspective.
Overall, we've mostly prohibited junior staff/students from using AI coding tools, and they need a sort of "permit" to use them in specific situations. They all have to disclose if they're using AI assistants. There are fewer restrictions on senior/more experienced engineers, but most of them are using LLMs less due to the uncertainties and complexities introduced. The "fuzzy understanding" problem seems to affect senior folks to a lesser degree, but it's still there and compounds over time.
Personally, I've become more mindful of the effects of automation from these experiences. So much so that I've turned off things like auto-correct, spellcheck, etc. And it seems like the passing of the torch from senior to junior folks is really strained. I'm not sure how it'll play out. A senior engineer who can properly architect things objectively has less use for junior folks, from a productivity perspective, because they can prompt LLMs to do the manual code generation. Meanwhile, junior folks all have a high-powered footgun which can slow down their learning. So one is pulling up the ladder behind them, while the other is shooting themselves in the foot.