These claims wouldn't matter if the topic weren't so deadly serious. Tech leaders everywhere are buying into the FOMO, convinced their competitors are getting massive gains they're missing out on. This drives them to rebrand as AI-First companies, justify layoffs with newfound productivity narratives, and lowball developer salaries under the assumption that AI has fundamentally changed the value equation.
This is my biggest problem right now. The types of problems I'm trying to solve at work require careful planning and execution, and AI has not been helpful for it in the slightest. My manager told me that the time to deliver my latest project was cut to 20% of the original estimate because we are "an AI-first company". The mass hysteria among SVPs and PMs is absolutely insane right now, I've never seen anything like it.
rglover · 55m ago
> My manager told me that the time to deliver my latest project was cut to 20% of the original estimate because we are "an AI-first company".
Do not forgive them. We already have a description for them:
"A bunch of mindless jerks who'll be the first against the wall when the revolution comes."
o11c · 9m ago
Remember, the origin of that quote explicitly specifies "marketing department".
The thing about hype cycles (including AI) is that the marketing department manages to convince the purchasers to do their job for them.
vkou · 10m ago
I'd like to see those SVPs and PMs, or shit, even a line manager use AI to implement something as simple as a 2-month intern project[1] in a week.
---
[1] We generally budget about half an intern's time for finding the coffee machine, learning how to show up to work on time, going on a fun event with the other interns to play minigolf, discovering that unit tests exist, etc, etc.
rglover · 1h ago
Most of it doesn't exist beyond videos of code spraying onto a screen alongside a claim that "juniors are dead."
I think the "why" for this is that the stakes are high. The economy is trembling. Tech jobs are evaporating. There's a high anxiety around AI being a savior, and so, a demi-religion is forming among the crowd that needs AI to be able to replace developers/competency.
That said: I personally have gotten impressive results with AI, but you still need to know what you're doing. Most people don't (beyond the beginner -> intermediate range), and so, it's no surprise that they're flooding social media with exaggerated claims.
If you didn't have a superpower before AI (writing code), then gaining that superpower as a perceived equalizer means you will deploy all resources (material, psychological, etc.) to ensure that everyone else maintains the position that 1) the superpower is good, 2) the superpower cannot go away, and 3) the superpower being fallible should be ignored.
Like any other hype cycle, these people will flush out, the midpoint will be discovered, and we'll patiently await the next excuse to incinerate billions of dollars.
SchemaLoad · 6m ago
At least in my experience, it excels in blank canvas projects. Where you've got nothing and want something pretty basic. The tools can probably set up a fresh React project faster than me. But at least every time I've tried them on an actual work repo they get reduced to almost useless.
Which is why they generate so much hype. They are perfect for tech demos, then management wonders why they aren't seeing results in the real world.
herpdyderp · 2m ago
I've had great success with GPT5 in existing projects because its agent mode is very good (the best I've seen so far) at analyzing the existing codebase and then writing code that feels like it fits in already (without prompt engineering on my part). I still agree that AI is particularly good on fresh projects though.
fennecbutt · 54m ago
I mean, the truth should be fairly obvious to people, given that a lot of the talk around AI rings very much like the IFLScience/mainstream-media style of "science" article, which always makes some outrageous "right around the corner" claim based off some small tidbit from a paper whose abstract the author only skimmed.
captainkrtek · 1h ago
This tracks with my own experience as well. I’ve found it useful in some trivial ways (e.g. small refactors, type definitions from a schema, etc.), but so far, on tasks bigger than that, it misses things and requires rework. The future may make me eat my words though.
On the other hand, I’ve lately seen it misused by less experienced engineers trying to implement bigger features who eagerly accept all it churns out as “good” without realizing the code it produced:
- doesn’t follow our existing style guide and patterns.
- implements some logic from scratch where there certainly is more than one suitable library, making this code we now own.
- is some behemoth of a PR trying to do all the things.
nicce · 58m ago
> implements some logic from scratch where there certainly is more than one suitable library, making this code we now own - is some behemoth of a PR trying to do all the things
Depending on the amount of code, I see this only as positive? Too often people pull huge libraries for 50 lines of code.
captainkrtek · 54m ago
I'm not talking about generating a few lines instead of importing left-pad. In recent PRs I've had:
- Implementing a scheduler from scratch (hundreds of lines), when there are many many libraries for this in Go.
- Implementing some complex configuration store that is safe for concurrent access, using generics, reflection, and a whole host of other stuff (additionally hundreds of lines, plus more for tests).
While I can't say any of the code is bad, it is effectively like importing a library which your team now owns, but worse in that no one really understands it or supports it.
Lastly, I could find libraries that are well supported, documented, and active for each of these use-cases fairly quickly.
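For contrast, here's roughly what the library route looks like in Go. This is a minimal sketch only: robfig/cron stands in for whichever maintained scheduler fits, and the schedule and job are invented for illustration.

```go
package main

import (
	"log"

	"github.com/robfig/cron/v3"
)

func main() {
	// A few lines against a maintained, documented library,
	// instead of hundreds of lines of bespoke scheduler code.
	c := cron.New()
	if _, err := c.AddFunc("@every 1h", func() {
		log.Println("refreshing config cache")
	}); err != nil {
		log.Fatal(err)
	}
	c.Start()
	select {} // block forever; real code would wire this into shutdown
}
```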
daxfohl · 33m ago
And that may be where the discrepancy comes in. You feel fast because, whoa, I created this whole scheduler in ten seconds! But then you also have to spend an hour code-reviewing that scheduler, and even so, it feels fast to have a good working scheduler in such a short time. Without AI, it might feel slow to find and integrate some existing scheduling library, but in wall-clock time it was the same.
SchemaLoad · 2m ago
The trick is that no one is actually carefully reviewing this stuff. Reviewing code properly is extremely hard. I'd say even harder than writing it from scratch. But there's no minimum amount of work you have to do. If you just do a quick skim over the result, no one will know you didn't carefully review every single detail. Then it gets merged to production full of mistakes.
davidcelis · 24m ago
Someone vibe coded a PR on my team where there were hundreds of lines doing complex validation of an uploaded CSV file (which we only expected to have two columns) instead of just relying on Ruby's built-in CSV library (i.e. `CSV.parse` would have done everything the AI produced)
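The same point in Go terms, for comparison with Ruby's `CSV.parse`: the standard library's `encoding/csv` already enforces a fixed column count, so most of that generated validation is one field assignment. A minimal sketch, with made-up two-column data:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"log"
	"strings"
)

func main() {
	upload := "alice,alice@example.com\nbob,bob@example.com\n"

	r := csv.NewReader(strings.NewReader(upload))
	r.FieldsPerRecord = 2 // any row without exactly two columns is an error

	records, err := r.ReadAll() // also rejects malformed quoting, etc.
	if err != nil {
		log.Fatalf("invalid CSV: %v", err)
	}
	for _, rec := range records {
		fmt.Println(rec[0], rec[1])
	}
}
```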
7thpower · 8m ago
I wonder how many times the LLM randomly tried to steer back to that library only to get chastised for not following instructions.
mandeepj · 14m ago
That’s a good example of ‘getting a desired outcome based on prompt’ - use a built-in lib or not.
vkou · 5m ago
And when it hallucinates a non-existent library, what are the magic prompts that you use to tell it to stop?
heavyset_go · 28m ago
Yes, for leftpad-like libraries it's fine, but does your URL or email validation function really handle all valid and invalid cases correctly now and into the future, for example?
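And the standard library has usually already solved it. A minimal Go sketch using `net/mail` and `net/url`, illustrative rather than a complete validation policy:

```go
package main

import (
	"fmt"
	"net/mail"
	"net/url"
)

func main() {
	// RFC 5322 address parsing maintained by the Go team, instead of a
	// generated regex that fossilizes today's edge cases.
	if _, err := mail.ParseAddress("Alice <alice@example.com>"); err != nil {
		fmt.Println("bad email:", err)
	}

	// ParseRequestURI requires an absolute URI; bare url.Parse accepts
	// almost any string, a common validation mistake.
	if _, err := url.ParseRequestURI("https://example.com/path"); err != nil {
		fmt.Println("bad URL:", err)
	}
}
```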
adelie · 15m ago
i've seen this fairly often with internal libraries as well - a recent AI-assisted PR i reviewed included a complete reimplementation of our metrics collector interface.
suspect this happened because the reimplementation contained a number of standard/expected methods that we didn't have in our existing interface (because we didn't need them), so it was considered 'different' enough. but none of the code actually used those methods (because we didn't need them), so all this PR did was add a few hundred lines of cognitive overhead.
mcny · 48m ago
> Too often people pull huge libraries for 50 lines of code.
I used to be one of those people. It just made sense to me when I was (I still am to some extent) more naïve than I am today. But then I also used to think "it makes sense for everyone to eat together at a community kitchen of some sort instead of cooking at home because it saves everyone time and money" but that's another tangent for another day. The reason I bring it up is I used to think if it is shared functionality and it is a small enough domain, there is no need for everyone to spend time to implement the same idea a hundred times. It will save time and effort if we pool it together into one repository of a small library.
Except reality is never that simple. Just like that community kitchen, if everyone decided to eat the same nutritious meal together, we would definitely save time and money but people don't like living in what is basically an open air prison.
codebje · 36s ago
Also there are people occasionally poisoning the community pot, don't forget that bit.
fennecbutt · 50m ago
Granted, _discovery_ of such things is something I'm still trying to solve at my own job, and potentially LLMs can at least be leveraged to analyse and search code(bases) rather than just write code.
It's difficult because you need team members to be able to work quite independently but knowledge of internal libraries can get so siloed.
captainkrtek · 18m ago
I do think the discovery piece is hugely valuable. I’m fairly capable with grep and ag, but asking Claude where something is in my codebase is very handy.
lumost · 16m ago
The experience in greenfield development is very different. In the early days of a project, the LLM's opinion is about as good as that of the individuals starting the project. The coding standards and other conventions have not yet been established. Even buggy, half-nonsense code still leaves the project demoable. Being able to explore 5 projects to demo status instead of 1 is a major boost.
com2kid · 1h ago
Multiple things can be true at the same time:
1. LLMs do not increase general developer productivity by 10x across the board for general purpose tasks selected at random.
2. LLMs dramatically increase productivity for a limited subset of tasks.
3. LLMs can be automated to do busy work and although they may take longer in terms of clock time than a human, the work is effectively done in the background.
LLMs can get me up to speed on new APIs and libraries far faster than I can myself, a gigantic speedup. If I need to write a small bit of glue code in a language I do not know, LLMs not only save me time, but they make it so I don't have to learn something that I'll likely never use again.
Fixing up existing large code bases? Productivity is at best a wash.
Setting up a scaffolding for a new website? LLMs are amazing at it.
Writing mocks for classes? LLMs know the details of using mock libraries really well and can get it done far faster than I can, especially since writing complex mocks is something I do a couple times a year and completely forget how to do in-between the rare times I am doing it.
Navigating a new code base? LLMs are ~70% great at this. If you've ever opened up an over-engineered WTF project, just finding where HTTP routes are defined at can be a problem. "Yo, Claude, where are the route endpoints in this project defined at? Where do the dependency injected functions for auth live?"
Right tool, right job. Stop using a hammer on screws.
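To make the mocks point concrete, here's a minimal sketch using testify/mock, one common Go mocking library. The `Charger` interface is invented for illustration:

```go
package payments_test

import (
	"testing"

	"github.com/stretchr/testify/mock"
)

// Charger is a hypothetical interface standing in for whatever you mock.
type Charger interface {
	Charge(cents int) error
}

// MockCharger is exactly the boilerplate that's tedious to re-learn
// a couple of times a year.
type MockCharger struct {
	mock.Mock
}

func (m *MockCharger) Charge(cents int) error {
	args := m.Called(cents)
	return args.Error(0)
}

func TestCharge(t *testing.T) {
	c := new(MockCharger)
	c.On("Charge", 500).Return(nil) // expect a call with 500 that succeeds

	if err := c.Charge(500); err != nil {
		t.Fatal(err)
	}
	c.AssertExpectations(t)
}
```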
heavyset_go · 15m ago
> LLMs can get me up to speed on new APIs and libraries far faster than I can myself, a gigantic speedup. If I need to write a small bit of glue code in a language I do not know, LLMs not only save me time, but they make it so I don't have to learn something that I'll likely never use again.
I wax and wane on this one.
I've had the same feelings, but too often I've peeked behind the curtain, read the docs, got familiar with the external dependencies, and then realized that whatever the LLM responded with, paradoxically, either wasn't following convention, or shoehorned my problem to fit code examples found online, or used features inappropriately, or took a long roundabout path to do something that can be done simply, etc.
It can feel like magic until you look too closely at it, and I worry that it'll make me complacent with the feeling of understanding without my actually coming away with any.
jfengel · 22m ago
If it can figure out where dependencies come from I'm going to have to look more into this. I really hate the way injection makes other people's code bases impenetrable. "The framework scans billions of lines of code to find the implementation, and so can you!"
iLoveOncall · 25m ago
> Setting up a scaffolding for a new website? LLMs are amazing at it.
So amazing that every single stat shown by the author in the article has been flat at best, despite all being based on new development rather than work on existing code-bases.
daxfohl · 10m ago
Maybe the world has run out of interesting websites to create. That they are created faster doesn't necessarily imply they'll be created more frequently.
Kiro · 38s ago
> If so many developers are so extraordinarily productive using these tools, where is the flood of shovelware?
On my computer. Once I've built something I often realize the problems with the idea and abandon the project, so I'm never shipping it.
atleastoptimal · 1m ago
All these bearish claims about AI coding would hold weight if models were stuck permanently at the capabilities level they are now with no chance at improvement. This is very likely not the case given improvements over the past year, and even with diminishing returns models will be significantly more capable both independently and as a copilot in a year.
jryio · 1h ago
I completely agree with the thesis here. I also have not seen a massive productivity boost with the use of AI.
I think that there will be neurological fatigue occurring whereby if software engineers are not actively practicing problem-solving, discernment, and translation into computer code - those skills will atrophy...
Yes, AI is not the 2x or 10x technology-of-the-future™ it was promised to be. It may be the case that any productivity boost is happening within existing private code bases. Even still, there should be a modest uptick in noticeably improved software deployment in the market, which does not appear to be there.
In my consulting practice I am seeing this phenomenon regularly, whereby new founders or stir-crazy CTOs push the use of AI and ultimately find that they're spending more time wrangling a spastic code base than they are building shared understanding and working together.
I have recently taken on advisory roles and retainers just to reinstill engineering best practices.
heavyset_go · 1m ago
> I think that there will be neurological fatigue occurring whereby if software engineers are not actively practicing problem-solving, discernment, and translation into computer code - those skills will atrophy...
I've found this to be the case with most (if not all) skills, even riding a bike. Sure, you don't forget how to ride it, but your ability to expertly articulate with the bike in a synergistic and tool-like way atrophies.
If that's the case with engineering, and I believe it to be, it should serve as a real warning.
searls · 9m ago
The answer is that we're making it right now. AI didn't speed me up at all until agents got good enough, which was April/May of this year.
Just today I built a shovelware CLI that exports iMessage archives as a standalone website. Would have taken me weeks. I'll probably have it out as a homebrew formula in a day or two.
I'm working on an iOS app as well that's MUCH further along than it would be if I hand-rolled it, but I'm intentionally taking my time with it.
Anyway, the post's data mostly ends in March/April, which is when generative AI started being useful for coding at all (and I've had Copilot enabled since Nov 2022).
NathanKP · 7m ago
I think the explanation is simple: there is a direct correlation between being too lazy and demotivated to write your own code, and being too lazy and demotivated to actually finish a project and publish your work online.
The same people who are willing to go through all the steps to release an application online are also willing to go through the extra effort of writing their own code. The code is actually the easy part compared to the rest of it... always has been.
wrs · 1h ago
This makes some sense. We have CEOs saying they're not hiring developers because AI makes their existing ones 10X more productive. If that productivity enhancement was real, wouldn't they be trying to hire all the developers? If you're getting 10X the productivity for the same investment, wouldn't you pour cash into that engine like crazy?
Perhaps these graphs show that management is indeed so finely tuned that they've managed to apply the AI revolution to keep productivity exactly flat while reducing expenses.
moduspol · 25m ago
A lot of these C-suite people also expect the remaining ones to be replaced by AI. They subscribe to the hockey-stick "AGI is around the corner" narrative.
I don't, but at least it is somewhat logical. If you truly believe that, you wouldn't necessarily want to hire more developers.
quantumcotton · 38m ago
Today you will learn what diminishing returns are :)
You can only utilize so many people or so much action within a business or idea.
Essentially it's throwing more stupid at a problem.
The reason there are so many layoffs is because of AI creating efficiency. The thing that people don't realize is it's not that one AI robot or GPU is going to replace one human at a one-to-one ratio. It's going to replace the amount of workload one person can do. Which in turn gets rid of one human employee. It's not that your job isn't being taken by AI. It's started. But how much human is needed is where the new supply-demand lies, and how long the job lasts. There will always be more need for more creative minds. The issue is we are lacking them.
It's incredible how many software engineers I see walking around without jobs. Looking for a job making $100,000 to $200,000 a year. Meanwhile, they have no idea how much money they could save a business. Their creativity was killed by school.
They are relying on somebody to tell them what to do and when nobody's around to tell anybody what to do. They all get stuck. What you are seeing isn't a lack of capability. It's a lack of ability to control direction or create an idea worth following.
Nextgrid · 23m ago
I disagree that layoffs are because of AI-mediated productivity improvements.
The layoffs are primarily due to over-hiring during the pandemic and even earlier during the zero-interest-rate period.
AI is used as a convenient excuse to execute layoffs without appearing in a bad position to the eyes of investors. Whether any code is actually generated by AI or not is irrelevant (and since it’s hard to tell either way, nobody will be able to prove anything and the narrative will keep being adjusted as necessary).
mattmanser · 1m ago
The reason there were so many layoffs is because cheap money dried up.
Nothing to do with AI.
Interest rates are still relatively high.
larve · 1h ago
In case the author is reading this, I have the receipts on how there's a real step function in how much software I build, especially lately. I am not going to put any number on it because that makes no sense, but I certainly push a lot of code that reasonably seems to work.
The reason it doesn't show up online is that I mostly write software for myself and for work, with the primary goal of making things better, not faster. More tooling, better infra, better logging, more prototyping, more experimentation, more exploration.
Here's my opensource work: https://github.com/orgs/go-go-golems/repositories . These are not just one-offs (although there's plenty of those in the vibes/ and go-go-labs/ repositories), but long-lived codebases / frameworks that are building upon each other and have gone through many many iterations.
trenchpilgrim · 1h ago
Same. On many days 90% of my code output by lines is Claude generated and things that took me a day now take well under an hour.
Also, a good chunk of my personal OSS projects are AI assisted. You probably can't tell from looking at them, because I have strict style guides that suppress the "AI style", and I don't really talk about how I use AI in the READMEs. Do you also expect I mention that I used Intellisense and syntax highlighting too?
droidjj · 52m ago
The author’s main point is that there hasn’t been an uptick in total code shipped, as you would expect if people are 10x-ing their productivity. Whether folks admit to using AI in their workflow is irrelevant.
larve · 35m ago
Their main point is "AI coding claims don't add up", as shown by the amount of code shipped. I personally do think some of the more incredible claims about AI coding add up, and am happy to talk about it based on my "evidence", ie the software I am building. 99.99% of my code is ai generated at this point, with the occasional one line I fill in because it'd be stupid to wait for an LLM to do it.
For example, I've built 5-6 iphone apps, but they're kind of one-offs and I don't know why I would put them up on the app store, since they only scratch my own itches.
trenchpilgrim · 33m ago
Oh yeah, I love building one off tools with it. I am working on a game mod with a friend, we are hand writing the code that runs when you play it, but we vibe code all sorts of dev tools to help us test and iterate on it faster.
Do internal, narrow purpose dev tools count as shipped code?
Aeolun · 25m ago
I don’t think this is necessarily true. People that didn’t ship before still don’t ship. My ‘unshipped projects’ backlog is still nearly as large. It’s just got three new entries in the past two months instead of one.
trenchpilgrim · 38m ago
The bottleneck on how much I ship has never been how fast I can write and deploy code :)
warkdarrior · 45m ago
Maybe people are working less and enjoying life more, while shipping the same amount of code as before.
If someone builds a faster car tomorrow, I am not going to go to the office more often.
leoc · 38m ago
"In this economy?", as the saying goes.
throwaway13337 · 1h ago
Great angle to look at the releases of new software. I, too, thought we'd see a huge increase by now.
An alternative theory is that writing code was never the bottleneck of releasing software. The exploration of what it is you're building and getting it on a platform takes time and effort.
On the other hand, yeah, it's really easy to 'hold it wrong' with AI tools. Sometimes I have a great day and think I've figured it out. And then the next day, I realize that I'm still holding it wrong in some other way.
It is philosophically interesting that it is so hard to understand what makes building software products hard. And how to make it more productive. I can build software for 20 years and still feel like I don't really know.
balder1991 · 46m ago
Also, when you create a product, you can't speed up the iterative process of seeing how users want it, fixing edge cases that you only realize later, etc. These are the things that make a product good, and why there's that article about software taking 10 years to mature: https://www.joelonsoftware.com/2001/07/21/good-software-take...
Nextgrid · 21m ago
This is the answer. Programming was never the bottleneck in delivering software, whether free-range, organic, grass-fed human-generated code or AI-assisted.
AI is just a convenient excuse to lay off many rounds of over-hiring while also keeping the door open for potential investors to throw more money into the incinerator since the company is now “AI-first”.
whiterook6 · 24m ago
What the author is missing is the metric that matters more than shipping product: how much happier am I when my AI auto complete saves me typing and figures out what I'm trying to articulate for me. If devs using copilot are happier--and I am, at least--then that's value right there.
kenjackson · 1h ago
Shovelware may not be a good way to track additional productivity.
That said, I’m skeptical that AI is as helpful for commercial software. It’s been great at automating my workflow, because I suck at shell scripting and AI is great at it. But for most of the code I write, I honestly half don’t know what I’m going to write until I write it. The prompt itself is where my thinking goes - so the time savings would be fairly small, but I also think I’m fairly skilled (except at scripting).
Assembly programmers mocked C: “It hides what the CPU is doing, you’ll never optimize properly!”
C/C++ programmers mocked Java & Python: “Garbage collection? That’s for people who don’t understand memory management!”
Web developers mocked JavaScript frameworks: “Real engineers write everything in raw JS!”
2025 Developer: I'm furious and angry because the narrative of AI coding tools dramatically boosting developer productivity is false
giantg2 · 16m ago
Until AI can understand business requirements and how they are implemented in code (including integrating with existing systems), it will continue to be overhyped. Devs will hate it, but in 10-15 years someone will figure out that the proper paradigm is to train the AI to build based off of something similar to Cucumber TDD with comprehensive example tables.
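For flavor, the closest everyday Go idiom to building from example tables is the table-driven test, where the table effectively is the requirement. A minimal sketch around an invented `Discount` rule:

```go
package pricing_test

import "testing"

// Discount is a hypothetical business rule of the sort that would be
// specified through examples rather than prose.
func Discount(orderTotalCents, loyaltyYears int) int {
	switch {
	case orderTotalCents >= 10000 && loyaltyYears >= 2:
		return 15
	case orderTotalCents >= 10000:
		return 10
	default:
		return 0
	}
}

// The example table doubles as the requirement: each row is one
// business case the implementation must satisfy.
func TestDiscount(t *testing.T) {
	cases := []struct {
		name         string
		totalCents   int
		loyaltyYears int
		want         int
	}{
		{"small order", 5000, 0, 0},
		{"big order, new customer", 10000, 0, 10},
		{"big order, loyal customer", 10000, 2, 15},
	}
	for _, c := range cases {
		if got := Discount(c.totalCents, c.loyaltyYears); got != c.want {
			t.Errorf("%s: got %d%%, want %d%%", c.name, got, c.want)
		}
	}
}
```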
bastawhiz · 1h ago
The amount of shovelware is not a reliable signal. You know what's almost empty for the first time in almost a decade? My backlog. Where AI tools shine is taking an existing codebase and instructions, and going to town. It's not dreaming up whole games from scratch. All the engineers out there didn't quit their jobs to build new stuff, they picked up new tools to do their existing jobs better (or at least, to hate their jobs less).
The shovelware was always there. And it always will be. But that doesn't mean it's spurting out faster, because that's not what AI does. Hell, if anything I expect that there's less visible shovelware, because when it does get created, it's less obvious (and perhaps higher quality).
At some point, the quality of uninspired projects will be lifted up by the baseline of quality that mainstream AI allows. At what point is that "high enough that we can't tell what's garbage"? We've perhaps found ourselves at or around that point.
benjiro · 57m ago
I need to agree with the author, with a caveat. He is a highly experienced developer. For somebody like him, churning out good quality code is probably easy.
Where I expect a lot of those feeling-fast metrics to come from is people who may have less coding experience and, with AI, are coding way above their level.
My brother-in-law asks for a nice product website; I just feed his business plan into an LLM, do some fine-tuning on the results, and have a good-looking website in an hour's time. If I did it myself manually, just take me behind a barn, as those jobs are so boring and take forever. But I know that website design is a weakness of mine.
That is the power of LLMs. Turn out quick code, maybe offer some suggestion you did not think about, but ... it also eats time! Making your prompts so that the LLM understands, waiting for the result, ... waiting ... OK, now check the result, can you use it? Oh no, it did X, Y, Z wrong. Prompt again ... and again. And this is where your productivity goes to die.
So when you compare a pool of developer feedback, you're going to get a broad mix of "it helps a lot", "somewhat", "it's worse than my code", ... mixed in with the prompting, result delays, etc.
It gets even worse with agent/vibe coding, as you just tend to be waiting 5, 10 minutes for changes to be made. You need to review them, test them, ... oh no, the LLM screwed something up again. Oh no, it removed 50% of my code. Hey, where did my comments go? And we are back to a loss of time.
LLMs are a tool... But after a lot of working with them, my opinion is to use them when needed but not depend on them for everything. I sometimes look with cow eyes at people who say they are coding so much with LLMs and spending 200 or more bucks per month.
They can be powerful tools, but I feel that some folks become over-dependent on them. And worst is my feeling that our juniors are going to be in a world of hurt if their skills are more LLM monkey-coding (or vibe coding) than actually understanding how to code (and the knowledge behind the actual programming languages and systems).
goalieca · 55m ago
There's a relatively monotonous task in software engineering that pretty much everyone working on a legacy C/C++ code base has had to face: static analysis and compiler warnings. That seems about as boring and routine an operation as exists. As simple as can be. I've seen this task farmed out to interns paid barely anything just to get it done.
My question to HN is... can LLMs do this? Can they convert all the unsafe C-string invocations to safe ones? Can they replace system calls with POSIX calls? Can they wrap everything in a smart pointer and make sure that mutex locks are added where needed?
smjburton · 44m ago
I generally agree with the sentiment of the article, but the OP should also be looking at product launch websites like ProductHunt, where there are tens to hundreds of vibe coded SaaS apps listed daily.
From my experience, it's much easier to get an LLM to generate code for a React/Tailwind CSS web app than a mobile app, and that's why we're seeing so many of these apps showing up in the SaaS space.
timdiller · 49m ago
I haven't found ChatGPT helpful in speeding up my coding because I don't want to give up understanding the code. If I let ChatGPT do it, then there are inevitable mistakes, and it sometimes hallucinates libraries, etc. I have found it very useful in guiding me through the dev-ops of working with and configuring AWS instances for a blog server, for a git server, etc. As a small business owner, that has been a big time saver.
thewarrior · 1h ago
While I agree with the points he’s raising, let me play devil's advocate.
There’s a lot more code being written now that’s not counted in these statistics. A friend of mine vibe coded a writing tool for himself entirely using Gemini canvas.
I regularly vibe code little analyses or scripts in ChatGPT which would have required writing code earlier.
None of these are counted in these statistics.
And yes, AI isn’t quite good enough to supercharge app creation end to end. Claude has only been good for a few months. That’s hardly enough time for adoption!
This would be like analysing the impact of languages like Perl or Python on software 3 months after their release.
mysterydip · 1h ago
Good article, gave me some points I hadn't considered before. I know there are some AI generated games out there, but maybe the same people were using asset flips before?
I'd also be curious how the numbers look for AI generated videos/images, because social media and youtube seem absolutely flooded with the stuff. Maybe it's because the output doesn't have to "function" like code does?
Grammatical nit: The phrase is "neck and neck", like where two race horses are very close in progress
ge96 · 29m ago
I've already experienced being handed a vibe-coded app; so far it's been a communication/code-cleanliness problem, e.g. don't leave two versions of an app lying around without saying which one is active. And the docs, man, so many docs: redundant, conflicting.
iamkd · 48m ago
My hunch is that the amount of shovelware (or really, any software) is mostly proportional to the number of engineers wishing to work on that.
Even if AI made them more productive, it's on a person to decide what to build and how to ship, so the number (and desire) of humans is a bottleneck. Maybe at some point AI will start buying up domains and spinning up hundreds of random indiehacker micro-SaaS, but we're not there. Yet.
kmnc · 1h ago
No one wants it? If there is no demand, then no one is going to become a supplier. You don’t even want the apps you’re dreaming of building, you wouldn’t use them. If you would use them, you would already be using apps that are available. It’s why developers claim huge benefits but the output is the same, there isn’t much demand for your average software company to push more output, the bottleneck is customer demand. If anything customer demand is falling because of AI. There is no platform that is blowing up for people to shovel shit to. Everything is saturated, there is no room for shovelware.
balder1991 · 42m ago
The argument doesn’t only apply to creating new todo apps. If the speedup were real, we’d be seeing existing open source tools gain more and more features, become more polished than ever, etc.
Instead, I’m not expecting something like Linux on smartphones to arrive anytime soon.
Vanclief · 1h ago
While I like the self-reflection in this article, I don't think his methodology adds up (pun intended). First, there are two main axes where LLMs can make you more productive: speed & code quality. I think everyone is obsessed with the first one, but it's less relevant.
My personal hypothesis is that when using LLMs, you are only faster if you would otherwise be writing things like boilerplate code. For the rest, LLMs don't really make you faster, but they can make your code quality higher, which means better implementations and catching bugs earlier. I am a big fan of giving the diff of a commit to an LLM that has a file MCP, so it can search for files in the repo, and having it point out any mistakes I have made.
ksenzee · 1h ago
This doesn’t match my experience. I needed a particularly boilerplate module the other day, for a test implementation of an API, so I asked Gemini to magic one up. It was fairly solid code; I’d have been impressed if it had come from a junior engineer. Unfortunately it had a hard-to-spot defect (an indentation error in an annotation, which the IDE didn’t catch on paste), and by the time I had finished tracking down the issue, I could have written the module myself. That doesn’t seem to me like a code quality improvement.
malfist · 1h ago
I don't know what world you're living in, but quality code isn't a forte of AI.
Aeolun · 27m ago
Hmm, I definitely have more issues with AI-generated code than I would have if I did it all manually, but not having to type it all may make up for the lost time by itself.
bjackman · 1h ago
There is actually a lot of AI shovelware on Steam. Sort by newest releases and you'll see stuff like a developer releasing 10 puzzle games in one day.
I have the same experience as OP, I use AI every day including coding agents, I like it, it's useful. But it's not transformative to my core work.
I think this comes down to the type of work you're doing. I think the issue is that most software engineering isn't in fields amenable to shovelware.
Most of us work either in areas where the coding is intensely brownfield (AI is great but not doubling anyone's productivity), or in areas where the productivity bottlenecks are nowhere near the code.
NooneAtAll3 · 11m ago
I wish that first bar graph was log scale...
back2dafucha · 1h ago
I could give a rat's ass what his industry thinks about me or my skills.
I can build whole systems. They can't.
paulhodge · 49m ago
I think different things are happening...
For experienced engineers, I'm seeing (internally in our company at least) a huge amount of caution and hesitancy to go all-in with AI. No one wants to end up maintaining huge codebases of slop code. I think that will shift over time. There are use cases where having quick low-quality code is fine. We need a new intuition about when to insist on handcrafted code, and when to just vibecode.
For non-experienced engineers, they currently hit a lot of complexity limits with getting a finished product to actually work, unless they're building something extremely simple. That will also shift - the range of what you can vibecode is increasing every year. Last year there was basically nothing that you could vibecode successfully, this year you can vibecode TODO apps and stuff like that. I definitely think that the App Store will be flooded in the coming future. It's just early.
Personally I have a side project where I'm using Claude & Codex and I definitely feel a measurable difference, it's about a 3x to 5x productivity boost IMO.
The summary: just because we don't see it yet doesn't mean it's not coming.
flyinglizard · 14m ago
I get excellent productivity gains from AI. Not everywhere, and not linearly. It makes the bad stuff about the work (boilerplate, dealing with things outside my specialties) tolerable, and the good stuff a bit better. It makes me want to create more. Business guys missing some visualization? Hell, why not: a few minutes on Aider and it's there. Let's improve our test suites. And let's migrate away from that legacy framework or runtime!
But my workflow is anything but "let her rip". It's very calculated, orderly, just like mastering any other tool. I'm always in the loop. I can't imagine someone without serious experience getting good stuff, and when things go bad, oh boy you're bringing a firehose of crap into your org.
I have a junior programmer who's a bright kid but lacking a lot of depth. Got him a Cursor subscription, tracking his code closely via PRs and calling out the BS but we're getting serious work done.
I just can't see how this new situation calls for fewer programmers. It will just bring about more software, and surely more capable software, after everyone adjusts.
vFunct · 15m ago
From the post, if AI was supposed to make everyone 25% more productive, then a 4-month project becomes a roughly 3-month project (4 ÷ 1.25 ≈ 3.2). It doesn't become a 1-day project.
Was the author making games and other apps in 30 hours? Because that seems like a 4 month project?
wewewedxfgdf · 1h ago
Maybe whether the number of "Show HNs" has gone up would be a data point.
stillpointlab · 56m ago
> We all know that the industry has taken a step back in terms of code quality by at least a decade. Hardly anyone tests anymore.
I see pseudo-scientific claims from both sides of this debate but this is a bit too far for me personally. "We all know" sounds like Eternal September [1] kind of reasoning. I've been in the industry about as long as the article author and I think he might be looking with rose-tinted glasses on the past. Every aging generation looks down at the new cohort as if they didn't go through the same growing pains.
But in defense of this polemic, and laying out my cards as an AI maximalist and massive proponent of AI coding, I've been wondering the same. I see articles all the time about people writing this and that software using these new tools and it so often is the case they never actually share what they built. I mean, I can understand if someone is heads-down cranking out amazing software using 10 Claude Code instances and raking in that cash. But not even to see one open source project that embraces this and demonstrates it is a bit suspicious.
I mean, where is: "I rewrote Redis from scratch using Claude Code and here is the repo"?
1. https://en.wikipedia.org/wiki/Eternal_September
I find LLMs useful to decide what is the best option to solve a problem and see some example code.
groby_b · 1h ago
I think the author misses a few points:
* METR was at best a flawed study. Repo-familiarity and tool-unfamiliarity were the biggest points of critique, but far from the only ones.
* they assume that all code gets shipped as a product. Meanwhile, AI code has (at least in my field of view) led to a proliferation of useful-but-never-shipped one-off tools. Random dashboards to visualize complex queries, scripts to drive refactors, or just sheer joy like "I want to generate an SVG of my vacation trip and consume 15 data sources and give it a certain look".
* Their own self-experiment is not exactly statistically sound :)
That does leave the fact that we aren't seeing AI shovelware. I'm still convinced that's because commercially viable software is beyond the AI complexity horizon, not because AI isn't an extremely useful tool.
huflungdung · 52m ago
Stupid article. Can be summed up in one sentence: "AI creates fantastic code but cannot market it for you or find a novel USP."