Ask HN: Are AI Copilots Eroding Our Programming Skills?

3 points by buscoideais · 7 comments · 7/2/2025, 2:29:00 PM
Over the last 12 months I’ve integrated AI copilots (GitHub Copilot, Tabnine, etc.) into my daily workflow. They speed up boilerplate, suggest one-line fixes, and even refactor entire functions on demand.

But I’ve noticed something unsettling:

* Shallow Understanding: I sometimes accept suggestions without fully understanding them.
* Problem-Solving Rust: On hard problems, I feel less confident in reaching a solution independently.
* Onboarding New Devs: Junior engineers rely on AI outputs without questioning edge cases, leading to subtle bugs (a hypothetical example of what I mean follows this list).
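
To make the edge-case point concrete, here's a hypothetical Python sketch (invented for illustration, not taken from any real tool's output) of the kind of suggestion that reads fine on a quick skim:

    # Hypothetical illustration: an autocomplete-style suggestion that
    # looks right at a glance but silently drops the final partial page.
    def total_pages(total_items: int, page_size: int) -> int:
        return total_items // page_size  # bug: floor division loses the remainder

    # The fix is ceiling division, which a reviewer only catches by
    # actually thinking about what happens when items don't divide evenly:
    def total_pages_fixed(total_items: int, page_size: int) -> int:
        return (total_items + page_size - 1) // page_size

    assert total_pages(101, 20) == 5        # wrong: the 1-item last page vanishes
    assert total_pages_fixed(101, 20) == 6  # right: six pages, the last one partial

Nothing crashes, tests that only use round multiples pass, and the bug ships.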

Questions for the community:

* Have you experienced skill atrophy or decreased ownership since adopting AI tools?
* What practices help you preserve deep understanding while still leveraging AI speed?
* Should we treat AI copilots as “draft generators” or as true programming partners?

I’d love to hear anecdotes, strategies, or hard data. Let’s figure out how to use these powerful assistants without becoming their apprentices.

Comments (7)

MongooseStudios · 10h ago
Yours is not the first post asking about this here. Which in and of itself says something.

I don't use them, at all. I briefly tried the local tab completion stuff offered in JetBrains products. It lasted an hour or two. The log messages it wrote didn't sound like me, and the "copilot pause" was immediately frustrating.

The boilerplate argument comes up a lot, but I really don't see it as the huge issue that would drive me to make Clippy generate it for me. That sort of "boring" work is great for "meditating" on the thing you're doing: spending time adjacent to the problem, putting up the scaffolding, makes you mentally examine the places where things are going to interact and gives that little seed of an idea time to grow a bit and become more robust.

Later, when there's an issue, you can ask the human who wrote something questions about it, and they'll probably have at least a fuzzy recollection of how it was built (and why it was done that way) that can offer ideas. The best you can do with an LLM is hope it doesn't hallucinate when you ask it about all the broken stuff.

Ultimately I see neither value nor "power" in the current "assistants." They generate the most statistically median output and often get it wrong. They make stuff up. They have no understanding of anything, and they don't learn from mistakes. If they were a person, you'd be asking serious, but nearly rhetorical, questions about whether or not to fire them.

jf22 · 9h ago
It's hard for me to understand why someone would comment about AI copilots eroding skills when they've only used code completion tooling for fewer than two hours.
MongooseStudios · 8h ago
To provide a perspective on, and reasons for, not using them. Specifically surrounding concerns about quality, maintainability, and keeping your mind engaged in the process.
jf22 · 7h ago
But the conversation is about people who use them...
NewUser76312 · 9h ago
It's probably worthwhile to compare this to calculators and mental math skills.

To a certain extent, yes, absolutely. If you programmed more yourself, you'd be better at programming than the version of you that spends any significant amount of time generating AI code.

But that doesn't mean the skill will totally atrophy or that you'll magically forget your fundamentals.

ryry · 6h ago
This exactly. I find I don't remember some of the things I used to have memorized, but I still need the fundamentals when things go horribly wrong and I have to dive into the code myself.
KaranSohi · 7h ago
Don't think so. As long as you give the generated code a quick read and you're using it as an assistant, I think they're really helpful. That said, I'm having a hard time picking and sticking to one or a few tools, given the variety on the market and how many releases keep happening.