Ask HN: Are AI Copilots Eroding Our Programming Skills?
But I’ve noticed something unsettling:
* Shallow Understanding: I sometimes accept suggestions without fully understanding them.
* Rusty Problem-Solving: On hard problems, I feel less confident in reaching a solution independently.
* Onboarding New Devs: Junior engineers rely on AI outputs without questioning edge cases, leading to subtle bugs.
Questions for the community:
* Have you experienced skill atrophy or decreased ownership since adopting AI tools?
* What practices help you preserve deep understanding while still leveraging AI speed?
* Should we treat AI copilots as “draft generators” or as true programming partners?
I’d love to hear anecdotes, strategies, or hard data. Let’s figure out how to use these powerful assistants without becoming their apprentices.
I don't use them, at all. I briefly tried the local tab completion stuff offered in JetBrains products. It lasted an hour or two. The log messages it wrote didn't sound like me, and the "copilot pause" was immediately frustrating.
The boilerplate argument comes up a lot, but I really don't see it as the huge issue that would drive me to try to make Clippy generate it for me. That sort of "boring" work is great for "meditating" on the thing you're doing. Spending time adjacent to the problem, putting up the scaffolding, makes you mentally examine the places where things are going to interact and gives that little seed of an idea time to grow a bit and become more robust.
Later, when there's an issue, you can ask the human who wrote something questions about it, and they will probably have at least a fuzzy recollection of how it was built (and why it was done that way) that can offer ideas. The best you can do with an LLM is hope it doesn't hallucinate when you ask it about all the broken stuff.
Ultimately I see neither value nor "power" in the current "assistants." They generate the statistically most median output and often get it wrong. They make stuff up. They have no understanding of anything, and they don't learn from mistakes. If they were a person you'd be asking serious, but nearly rhetorical, questions about whether or not to fire them.
To a certain extent, yes, absolutely. If you programmed more yourself, you'd be better at programming than the version of you that spends any significant amount of time generating AI code.
But that doesn't mean the skill will totally atrophy or that you'll magically forget your fundamentals.