Ask HN: Is anyone else sick of AI splattered code
74 points by throwaway-ai-qs on 9/17/2025, 5:29:58 PM | 65 comments
Between code reviews and AI-generated rubbish, I've had it. Whether it's people relying on AI to write pull request descriptions (which are crap, by the way) or using it to generate tests... I'm sick of it.
Over the past year, I've been doing a tonne of consulting. In the last three months I've watched at least 8 companies embrace AI pipelines for coding, testing, and code reviews. Honestly, the best suggestions I've seen came from linters in CI and spell checkers. Is this what we've come to?
My question for my fellow HNers... is this what the future holds? Is this everywhere? I think I'm finally ready to get off the ride.
1. I was a pretty early adopter of LLMs for coding. It got to the point where most of my code was written by an LLM. Eventually this tapered off week by week to the level it is now... which is literally 0. It's more effort to explain a problem to an LLM than it is to just think it through. I can't imagine I'm that special, just a year ahead of the curve.
2. The maintenance burden of code that has no real author is felt months or years after the code is written. Organizations then react a few months or years after that.
3. The quality is not getting better (see GPT-5) and the cost is not going down (see Claude Code, Cursor, etc.). Eventually the bills will come due, and at the very least that will reduce the amount of code generated by an LLM.
I very easily could be wrong, but I think there is hope and if anyone tells me "it's the future" I just hear "it's the present". No one knows what the future holds.
I'm looking for another technical co-founder (in addition to me) to come work on fun hard problems in a hand-written Elixir codebase (frontend is ClojureScript because <3 functional programming), if anyone is looking for a non-LLM-coded product! https://phrasing.app
I recently used it to write a Sublime Text plugin for me, and I forked a Chrome extension and added a bunch of features to it. Both are open source and pretty trivial projects.
However, I rarely use it to write code for me in client projects. I need to know and understand everything going out that we are getting paid for.
What is preventing you from that even if you are not the one typing it up? You can actually understand more when you remove the burden of typing: keep asking questions, iterate on the code, do code review, security review, performance review… If done "right" you can end up not only understanding better but learning a bunch of stuff you didn't know along the way.
Learn the rules first, then learn when to break them.
So there are a lot of heuristics in judging code quality. But sometimes, it's just plain bad.
At least two parts of the SOLID acronym are basically anachronisms, nonsense in modern coding (O, open/closed, and L, Liskov substitution). And I (interface segregation) is basically handled for you by DI frameworks. D (dependency inversion) doesn't mean what most people think it does.
S is the only bit left and it's pretty much open to interpretation.
I don't really see them as anything meaningful; these days it's basically just "make your classes have a single responsibility." It's on the level of KISS, but less general.
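To be fair to D: a minimal sketch of what dependency inversion actually says, which is that high-level code depends on an abstraction it owns rather than on a concrete low-level module (all names below are made up for illustration):

    from typing import Protocol

    # The abstraction the high-level code owns and depends on.
    class MessageSender(Protocol):
        def send(self, to: str, body: str) -> None: ...

    # A low-level detail, written to conform to the abstraction.
    class SmtpSender:
        def send(self, to: str, body: str) -> None:
            print(f"SMTP -> {to}: {body}")  # stand-in for a real SMTP call

    # High-level policy: knows nothing about SMTP, only about the Protocol.
    def notify_user(sender: MessageSender, user_email: str) -> None:
        sender.send(user_email, "Your report is ready.")

    notify_user(SmtpSender(), "dev@example.com")

Most people read D as "use a DI framework," but the point is the direction of the dependency, not the injection mechanism.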
If someone says "most of my code is AI," there are only 3 reasons for it:
1. They do something very trivial on a daily basis (and that's not a bad thing, you just need to be clear about it).
2. The skill is not there, so they have to use AI; otherwise it would be faster to DIY it than to explain the complex case, and how to solve it, to the AI.
3. They prefer explaining to an LLM rather than writing the code themselves. Again, no issue with this. But we must be clear here: it's not faster. It's just that someone else writes the code for you while you explain in detail what to do.
Like most dopaminergic activities, though, you end up chasing that original rush, and you eventually quit when you can't replicate it and/or realize it's a poor substitute for the real thing, and that it's likely stunting your growth.
Here's a 4th: there are senior-level SWEs who have spent their entire careers automating everything they had to do more than once. It's one of the core traits that differentiates the "10x" SWE from the "others."
LLMs have taken the automation part to another level, and the best SWEs I know use them every hour of every day to automate shit we never had tools to automate before.
When an AI generates some nonsense I have zero problem changing or deleting it, but if it's human-written I have to be aware that I may be missing context/understanding and also cognizant of the author's feelings if I just re-write the entire thing without their input.
It's a huge amount of work offloaded on me, the reviewer.
So when there’s some confusion, I’m going back to the author. Because you should know why each line was written and how it contributes to the solution.
But a complete review takes time. So in a lot of places, we only do a quick scan checking for unusual stuff instead of truly reviewing the algorithms. That’s because we trust our colleagues to test and verify their own work. Which AI users usually skip.
The person who created the PR is responsible for it. Period. Nothing changes.
So when you have a tool that can easily produce things that fit the happy path, don't be surprised that the number of PRs goes up. Because before, by the time you could write the happy path that easily, experience had taught you all the error cases that you would otherwise have skipped.
So code is not code? You’re admitting that provenance matters in how you handle it.
It's like saying physics is just math. If we read:
F = m*a
there is a ton of knowledge encoded in that formula.
We cannot evaluate the formula alone. We need the knowledge behind it to see if it matches reality.
With LLMs we know for a fact that if the code matches reality, or expectations, it's a happy accident.
LLM-generated code has no unifying theory behind it; every line may as well have been written by a different person, so you get an utterly insane-looking codebase with no consistent thread tying it together and no reason why. It's like trying to figure out what the fuck is happening in a legacy codebase, except it's brand new. I've wasted hours trying to understand someone's MR, only to realize it's vibe code and there's no reason for any of it.
oh come on.
That's like saying "food is food" or "an AI howto is the same as a human-written howto".
The problem is that code that looks good is not the same as code that is good, but they are superficially similar to a reviewer.
and... you can absolutely bury reviewers in it.
So no, code is not just code.
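To make "looks good vs. is good" concrete, here's a contrived sketch (the function and data are invented for this comment): it reads cleanly and would sail through a skim, but it quietly destroys its input.

    def normalize_scores(scores: list[float]) -> list[float]:
        """Scale scores so they sum to 1.0; reads fine at a glance."""
        total = sum(scores)
        for i in range(len(scores)):
            scores[i] = scores[i] / total  # bug: silently mutates the caller's list
        return scores

    raw = [2.0, 3.0, 5.0]
    normalized = normalize_scores(raw)
    print(normalized)  # [0.2, 0.3, 0.5]
    print(raw)         # also [0.2, 0.3, 0.5] -- the caller's original data is gone

A reviewer skimming for style sees tidy, plausible code; only someone actually tracing the data flow catches it.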
You get shamed and dismissed for mentioning that you used AI, so naturally nobody mentions they used AI. They mention it the first time, see the blowback, and never mention it again. It just shows how myopic group-think can be.
It's a tool. I still have the expectation of people being thoughtful and 'code craftspeople'.
The only caveat is the verbosity of the code. It drives me up the wall how these models try to one-shot production code and put in a lot of cruft. I'm starting to expect to have to go in and pare down overly ambitious code to reduce complexity.
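A tiny illustration of the kind of paring I mean (both versions are invented for this comment): the one-shot style wraps a two-line check in layers of ceremony.

    # The over-engineered one-shot style a model tends to produce:
    class EmailValidationService:
        """Service responsible for validating email addresses."""

        def __init__(self, required_symbol: str = "@") -> None:
            self.required_symbol = required_symbol

        def validate(self, email: str) -> bool:
            if not email:
                return False
            if self.required_symbol not in email:
                return False
            return True

    # What the change actually needed:
    def is_probably_email(email: str) -> bool:
        return "@" in email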
I adopted LLM coding fairly early on (GPT-3), and the difference between then and now is immense. It's still a fast-moving technology, so I don't expect that the model or tool I use today will be the one I use in 3 months.
I have switched modalities and models pretty regularly to stay on the cutting edge and keep getting the best results. I think people who refuse to leverage LLMs for code generation to some degree are going to be left behind. It's going to be the equivalent, in my opinion, of keeping hardcover reference manuals on your desk versus using a search engine.
And I say this as a grumpy senior that has found a lot of value in tools like Copilot and especially Claude Code.
My coworkers and I use AI, but the incoming code seems pretty OK. Then again, my view is limited to my current small employer.
We're a really small but mature engineering org. I can't imagine the bigger companies, with hundreds of less experienced engineers, just using it without care and caution; it must cause absolute chaos (or will soon).
When Gutenberg’s press arrived, monks likely thought: “Who would want uniform, soulless copies of the Bible when I can hand-craft one with perfect penmanship and illustrations? I’ve spent my life mastering this craft.”
But most people didn’t care. They wanted access and speed. The same trade-off shows up with mass-market books, IKEA furniture, Amazon basics. A small group still prizes the artisanal version, but the majority just wants something that works.
There's also the "Cottagecore" aesthetic that was popular a few years ago, which is conceptually similar to the Arts and Crafts movement or the earlier Luddites.
[1]https://www.craftbeer.com/craft-beer-muses/craftwashing-happ...
[2]https://www.thekjvstore.com/1611-king-james-bible-regular-fa...
But I don't prompt them; they typically just suggest a completion, usually better than what we had before from pure static analysis.
Anything more and it detracts. I learn nothing, and the code is believable crap, which requires mind-bogglingly boring and intense code reviews.
It's sometimes fine for prototyping throw-away code (especially if you don't intend to invest in learning the tech deeply), but I don't like what I miss by not doing the thinking myself.
https://epoch.ai/blog/can-ai-scaling-continue-through-2030
https://epoch.ai/blog/what-will-ai-look-like-in-2030
There's a good chance that eventually reading code will become like inspecting assembly.
We don't read assembly because we read the higher-level code, which is deterministically compiled to the lower-level code.
The equivalent situation for LLMs would be if we were reviewing the prompts only, and if we had 100% confidence that the prompt resulted in code that does exactly what the prompt asks.
Otherwise we need to inspect the generated code. So the situation isn’t the same, at least not with current LLMs and current LLM workflows.
I think the reason "we" don't read, or write, assembly is that it takes a lot of effort and a detailed understanding of computer architecture that are simply not found in the majority of programmers, e.g. those used to working with javascript frameworks on web apps etc.
There are of course many "we" who work with assembly every day: people working with embedded systems, for instance, or games programmers.
Agree. But most of the code already generated won't be improved until many years from now.
> There's a good chance that eventually reading code will become like inspecting assembly.
Also agree, but I believe it will be very inefficient and complex code, unlike most written assembly.
I'm not sure tight code matters to anyone but maybe 0.0001% of us programmers anymore.
That's what makes it seem disrespectful, as if someone is wasting your time when they could have done better.
When I see AI code I feel excited that the developer is building stuff beyond their previous limits.
For anyone beyond the most beginner juniors, this is absolutely not true.
I don't think this is a rational take on the utility of AI. You really are not leveraging it well.
I'm not sick of AI. I'm just sick of people thinking that AI should be everything in our industry. I don't know how many times I can say "it is just a tool." Because it is. We're 3 years deep into LLM-based products, and people are just now starting to ask, "Hey, what are the strengths and weaknesses of this tool, and what are the best practices for when to use it or not?"
I'm sorry you feel that way. Yes, this is probably the future.
AI is a new tool, or really a huge new category of different AI tools, that will take time to gain competency with.
AI doesn't eliminate the need for developers; it's just a whole new load of baggage, and we will NEVER get to the point where that new pile of problems becomes zero.
A tool that Gemini CLI really loves is Ruff; I run it often :)
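For anyone who hasn't tried it: Ruff reads its config from pyproject.toml, and `ruff check .` plus `ruff format .` cover linting and formatting. A minimal stanza, with the settings picked purely for illustration:

    # pyproject.toml
    [tool.ruff]
    line-length = 100

    [tool.ruff.lint]
    select = ["E", "F", "I"]  # pycodestyle errors, pyflakes, import sorting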
In the short term it's going to make things suck even more, but I'm ready to rip that bandaid off.
P.S. To anyone that is about to reply to this, or downvote it, to tell me that AI is the future, you should be aware that I also hope someone places a rotting trout in your sock drawer each day.
Maybe LLMs can't build you an enterprise back-end for thousands of users. But they sure as shit can make you a bespoke applet that easily tracks your garage sale items. LLMs really shine in greenfield <5k LOC programs.
I think it's largely a mistake for devs to think that LLMs are made for them, rather than for enabling regular people to get far more mileage out of their computers.
https://www.npmjs.com/package/brahma-firelight
c ya, wouldn't wanna b ya.