Ask HN: Why aren't more developers using AI tools?
4 points by dawie on 8/14/2025, 10:00:10 PM | 3 comments
I’ve worked in both corporate and startup settings and keep noticing that many talented developers I meet don’t use AI tools at all — not even for small things like boilerplate code, tests, or docs.
Why? Concerns about security or IP? Don’t trust the quality? Slows you down instead of helping? Just don’t see the value?
If you don’t use AI tools (or tried and stopped), I’d love to hear your reasons. If you do use them, what convinced you?
Even prior to AI, I've said many times that code generation is evil [1]. I hated Ruby on Rails for this reason: people generate tons of files and then other people are stuck maintaining lots of code that they fundamentally do not understand. To a lesser extent, you also had this with IntelliJ generating large quantities of code as an attempt to make Java a little less irritating.
I once worked for a company that would generate Protobuf parsers and then monkey-patch extra functionality on top of them, and it was an extremely irritating and error-prone process.
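To make the pattern concrete, here's a minimal sketch of what that monkey-patching looks like in spirit (Python, assuming the pure-Python protobuf runtime; the module user_pb2 and message User are hypothetical stand-ins, not the actual codebase):

    # user_pb2 is the module protoc generates; User is a hypothetical message.
    import user_pb2

    def display_name(self):
        # Hand-written behavior bolted onto the generated class.
        return (self.first_name + " " + self.last_name).strip()

    # Patch the generated class at import time. The generated file is
    # rewritten on every schema change, so nothing checks that the patch
    # and the parser still agree after a regeneration.
    user_pb2.User.display_name = display_name

The fragility is exactly that last point: the patch lives outside the code it modifies, and regenerating the parser can silently break it.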
The damage used to be limited to very specific code generation tools, but with LLMs, there's effectively no limit to how much code can be generated. This isn't inherently an issue, but like other code generation tools, it runs the risk of creating a lot of shitty code that no one actually understands. It's one thing if it's something low-stakes like a game or a TODO list app, but it's much more concerning in banking and medical applications: if a lazy developer generates a large amount of code that looks more or less correct and seems to more or less work, but doesn't actually understand it, shit can get serious. I would certainly prefer that the people writing the firmware for an EKG machine actually understand the code they're writing.
[1] At least code that you're expected to edit and maintain. My opinion changes for stuff that's just an optimization detail.
On personal projects I usually use AI (Zed Zeta) for tab completion, although sometimes I get annoyed by it interfering with the UI or my cursor and turn it off. I will also occasionally feed a bug or error into Gemini if I'm really stuck - it only works some of the time, but it's worth a shot.
Every couple of months I try the current hotness (Copilot/Cursor/Gemini Code/etc.) for a small or greenfield project; if I stick with the project for more than a few days I inevitably find the AI can't do anything except the most common possible thing, and turn it off.
I think the disconnect is in my ability to explain to the model in English what I want it to create. If it's something common, I can give it the gist and its assumptions as to the rest will be valid, but if it's something difficult, my few paragraphs of instruction and outlining probably just don't provide enough specificity. This is maybe the same problem low-code tools run into: the thing that fully defines what I want to happen is the code itself; any abstraction you build on top of that will be at least as complex, unless I happen to want all the defaults.
The people who rave about AI tools generally laud their facility with the tedious boilerplate involved in typical web-based business applications, but I have spent years steering my career away from such work, and most of what I do is not the sort of thing you can look up on StackOverflow. Perhaps there are things an AI tool could do to help me, and perhaps someday I will be curious enough to try; but for now they seem to solve problems I don't really have, while introducing difficulties I would find annoying.