Ask HN: Are AI Copilots Eroding Our Programming Skills?
I’ve noticed something unsettling:
* Shallow Understanding: I sometimes accept suggestions without fully understanding them.
* Problem-Solving Rust: On hard problems, I feel less confident in reaching a solution independently.
* Onboarding New Devs: Junior engineers rely on AI outputs without questioning edge cases, leading to subtle bugs (a hypothetical sketch of one such bug follows below).
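To make that last point concrete, here is a hypothetical sketch (the function name and numbers are invented for illustration, not taken from any real incident): the kind of suggestion that looks right, passes a happy-path test, and still hides an edge-case bug.

```python
# Hypothetical illustration: an AI-style suggestion that works for every
# non-empty input but was accepted without questioning the empty case.

def average_latency_suggested(samples: list[float]) -> float:
    return sum(samples) / len(samples)  # raises ZeroDivisionError on []

# The reviewed version makes the empty case an explicit decision
# instead of an accidental crash.
def average_latency_reviewed(samples: list[float]) -> float:
    if not samples:
        return 0.0  # or raise ValueError, depending on the caller's contract
    return sum(samples) / len(samples)

assert average_latency_reviewed([]) == 0.0
assert average_latency_reviewed([2.0, 4.0]) == 3.0
```

The bug is trivial once you look for it; the problem is that nobody looks when the suggestion "just works" in the demo.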
Questions for the community:
* Have you experienced skill atrophy or decreased ownership since adopting AI tools?
* What practices help you preserve deep understanding while still leveraging AI speed?
* Should we treat AI copilots as “draft generators” or as true programming partners?
I’d love to hear anecdotes, strategies, or hard data. Let’s figure out how to use these powerful assistants without becoming their apprentices.
> I sometimes accept suggestions without fully understanding them.

Why? This is a choice, and you can choose to change this behavior. You should! It will feel better and might avert a catastrophe.
> On hard problems, I feel less confident in reaching a solution independently.
You still need to bring your own sense of troubleshooting and problem-solving to bear, but that has always meant using tools that help you. Let AI be one of those tools, if it helps you reach and verify solutions. Just don't stop engaging in a real way, or you're risking a lot!
> Junior engineers rely on AI outputs without questioning edge cases, leading to subtle bugs.
See the first statement of yours I quoted. If you struggle with this as an experienced dev and can't lead on the matter by example, how could you expect a junior dev not to struggle far more in the same shoes? This seems like something to address after you deal with your own reliance on AI-provided solutions you don't fully understand.
> Have you experienced skill atrophy or decreased ownership since adopting AI tools?
Absolutely the opposite. I have made so much progress in areas where a lack of understanding was holding me back. AI does not tire of discussing different angles, trying new metaphors, and so on. There's no excuse for not understanding what AI hands you: it will walk you through its output, to the point that it sometimes catches its own errors.
> What practices help you preserve deep understanding while still leveraging AI speed?
Take copious notes on a whiteboard or on paper. Do not take these notes digitally. You will not retain information you copy and paste from AI; you need a physical note-taking process where you synthesize the essentials yourself. After you have made the paper/whiteboard notes, you can distill them into digital notes (do not retype them directly; summarize, clarify, and keep the digitized note concise).
> Should we treat AI copilots as “draft generators” or as true programming partners?
Work with AI like you'd share a project with an intern: you're in charge. You keep it on track, redirect it when it goes off-course, and have it explain the decisions it makes. The more I treat AI like an intern, the happier I have been with the experience.
I don't use them, at all. I briefly tried the local tab completion stuff offered in JetBrains products. It lasted an hour or two. The log messages it wrote didn't sound like me, and the "copilot pause" was immediately frustrating.
The boilerplate argument comes up a lot, but I really don't see it as the huge issue that would drive me to make Clippy generate it for me. That sort of "boring" work is great for "meditating" on the thing you're doing. Spending time adjacent to the problem, putting up the scaffolding, makes you mentally examine the places where things will interact, and it gives that little seed of an idea time to grow and become more robust.
Later, when there's an issue, you can ask the human who wrote something questions about it, and they will probably have at least a fuzzy recollection of how it was built (and why it was done that way) that can offer ideas. The best you can do with an LLM is hope it doesn't hallucinate when you ask it about all the broken stuff.
Ultimately I see neither value nor "power" in the current "assistants." They generate the most statistically median output and often get it wrong. They make stuff up. They have no understanding of anything, and they don't learn from mistakes. If they were a person, you'd be asking serious (but nearly rhetorical) questions about whether to fire them.
Remember "borrowing" javascript on Geocities to make something work, or finding a library that helped you achieve your AJAX web 2.0 upgrade later on? How is AI different than starting with a ZIP file of some starter created by a person?
To a certain extent, yes, absolutely. If you programmed more yourself, you'd be better at programming than the version of you that spends a significant amount of time having AI generate code.

But that doesn't mean the skill will totally atrophy or that you'll magically forget your fundamentals.