Vibe coding is the fast fashion industry of software engineering

42 points by pdelboca on 8/1/2025, 9:27:46 AM (pdelboca.me) | 36 comments

Comments (36)

yoaviram · 9m ago
It seems to me that the ongoing “vibe coding” debate on HN, about whether AI coding agents are helpful or harmful, often overlooks one key point: the better you are as a coder, the less useful these agents tend to be.

Years ago, I was an amazing C++ dev. Later, I became a solid Python dev. These days, I run a small nonprofit in the digital rights space, where our stack is mostly JavaScript. I don’t code much anymore, and honestly, I’m mediocre at it now. For us, AI coding agents have been a revelation. We are a small team lacking resources, and agents let us move much faster, especially when it comes to cleaning up technical debt or handling simple, repetitive tasks.

That said, the main lesson I learned about vibe coding, or using AI for research and any other significant task, is that you must understand the domain better than the AI. If you don’t, you’re setting yourself up for failure.

naiv · 4m ago
I think it's the opposite: the better you are as a coder and the better you know your domain, the better you can use AI tools. Someone with no expertise is set up for failure.
mettamage · 5m ago
> That said, the main lesson I learned about vibe coding, or using AI for research and any other significant task, is that you must understand the domain better than the AI. If you don’t, you’re setting yourself up for failure.

Only if you fully trust that it works. You can also first take time to learn about the domain and use AI to assist you in learning it.

This whole thing is really about assistance. I think in that sense, OpenAI's marketing was spot on. LLMs are good at assisting. Don't expect more of them.

injidup · 14m ago
I knocked up a VSCode plugin in a few hours that extracted a JSON file from a zip generated by the clang C++ static analyzer, parsed it into the VSCode problems and diagnostics view, and provided quick fixes for simple things. All of this while hardly knowing or caring about JavaScript or how the npm toolchain works. I just kept taking screenshots of VSCode and saying what I wanted to go where and what the behaviour should be. When I was happy with certain aspects, such as code parsing and patching, I got it to write unit tests to lock in those behaviours. If anyone tells you LLMs are just garbage generators, they are not using the tools correctly.
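
For illustration, here's a minimal sketch of the diagnostics wiring a plugin like that needs (the finding shape and identifiers are assumptions for this example, not the commenter's actual code):

```typescript
import * as vscode from 'vscode';

// Shape of one finding after the JSON has been pulled out of the analyzer zip
// (hypothetical; real clang analyzer output has more fields).
interface AnalyzerFinding {
  file: string;    // absolute path reported by the analyzer
  line: number;    // 1-based
  column: number;  // 1-based
  message: string;
}

// Publish findings into VSCode's Problems view via a DiagnosticCollection.
export function publishFindings(
  collection: vscode.DiagnosticCollection,
  findings: AnalyzerFinding[]
): void {
  const byFile = new Map<string, vscode.Diagnostic[]>();
  for (const f of findings) {
    // VSCode positions are 0-based, the analyzer's are 1-based.
    const pos = new vscode.Position(f.line - 1, f.column - 1);
    const diag = new vscode.Diagnostic(
      new vscode.Range(pos, pos),
      f.message,
      vscode.DiagnosticSeverity.Warning
    );
    const list = byFile.get(f.file) ?? [];
    list.push(diag);
    byFile.set(f.file, list);
  }
  for (const [file, diags] of byFile) {
    collection.set(vscode.Uri.file(file), diags);
  }
}

// In the extension's activate():
//   const collection = vscode.languages.createDiagnosticCollection('clang-analyzer');
//   publishFindings(collection, parsedFindings);
```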
wobfan · 12m ago
I feel like no one has read the article; instead everyone jumps to the defense and says "but my AI code is good!!!!". It's not even about the quality, and no one said that AI just produces garbage, especially with newer models.
danielbln · 6m ago
From the top of the article:

> My take on AI for programming and "vibe coding" is that it will do to software engineering what fast fashion did to the clothing industry: flood the market with cheap, low-quality products and excessive waste.

wobfan · 3m ago
Where does this say that LLM code quality is bad?

> cheap, low-quality products

Product quality != code quality.

Flavius · 6m ago
I feel like I don't need to read articles with inflammatory headlines.
Flavius · 7m ago
LLM naysayers don't care about real-world use cases or logic. They care about their agenda.
koakuma-chan · 35m ago
Why do you think AI is producing low-quality code? Before I started using AI, my code was often rejected as "didn't use thing X" or "didn't follow best practice Y", but ever since I started coding with AI, those complaints are gone. Works especially well when the code is being reviewed by a person who is clueless about AI.
tgv · 6m ago
My experience is that it does produce low-quality code. Perhaps I tried some unusual stuff, but the other day I had a simple problem in JavaScript: you have an image, add a gray border around it, and keep it within given width/height limits. I figured that should be common enough for whatever OpenAI model I was using to generate usable code. It started with something straightforward built around a reasonable-looking Math.min operation, and returned a promise of a canvas. I asked "why is this returning a promise?", and of course it answered that I was right and removed the promise. Then it turned out that if the image was larger than the limits, it would simply draw over the borders. Now I had to add that it should scale the image. It made an error in the scaling. IIRC, I had to tell it that both dimensions should be scaled identically, which led to a few more trials before it looked like decent code. It's a clueless junior that has been kicked out of boot camp.
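
For comparison, a minimal hand-written sketch of that task (plain canvas code; the function name and default border width are made up for illustration):

```typescript
// Draw an image with a gray border, scaling it down uniformly (same factor on
// both axes) if it would otherwise exceed the given width/height limits.
function imageWithBorder(
  img: HTMLImageElement,
  maxWidth: number,
  maxHeight: number,
  border = 4
): HTMLCanvasElement {
  // One scale factor for both dimensions, never upscaling past 1.
  const scale = Math.min(
    1,
    (maxWidth - 2 * border) / img.width,
    (maxHeight - 2 * border) / img.height
  );
  const w = Math.round(img.width * scale);
  const h = Math.round(img.height * scale);

  const canvas = document.createElement('canvas');
  canvas.width = w + 2 * border;
  canvas.height = h + 2 * border;

  const ctx = canvas.getContext('2d');
  if (!ctx) throw new Error('2D canvas context unavailable');
  ctx.fillStyle = 'gray';                   // border color
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(img, border, border, w, h); // image inset by the border
  return canvas;                            // synchronous, no promise needed
}
```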

What it does do perfectly: convert code from one language to another. It was a fairly complex bit, and the result was flawless.

mettamage · 4m ago
> My experience is that it does produce low quality code. Perhaps I tried some unusual stuff, but the other day

I've seen both happen. Sometimes it produced fairly good quality code on small problem domains. Sometimes it produced bad code on small problem domains.

On big problem domains, the code is always somewhere between not that great and outright bad.

ben_w · 29m ago
I've seen plenty of mediocre, even bad, code from real humans who didn't realise they were bad coders. Yet while LLMs often beat those specific humans, I also see LLMs making their own mistakes.

LLMs are very useful tools, but if they were human, they'd be humans with sleep deprivation or early-stage dementia of some kind.

mrweasel · 19m ago
That's a good point. The majority of human programmers aren't exactly super talented either, and due to AI many have now lost all hope for personal development, but that's their choice.

All code needs to be carefully scrutinized, AI-generated or not. Maybe always prefix your prompt with: "Your operations team consists of a bunch of middle-aged angry Unix fans, who will call you at 3:00 AM if your service fails and belittle your abilities at the next incident review meeting."

As for the 100% vibe coders, please let them. There's plenty of good money to be made cleaning up after them and I do love refactoring, deleting code and implementing monitoring and logging.

smartmic · 8m ago
This is because AI-generated code will always be mediocre. There is so much poor-quality code in the training base that it dilutes the high-quality sources. AI is not a craftsman who builds to his own high standards, but rather a token-grinding tool calibrated to your prompts. Even if you describe your problem (the prompt) to a high standard, there is no way it can deliver a solution of the same standard. This is as true in 2025 as it was in 2023, and it probably always will be.
wobfan · 4m ago
100%. Generative AI is, and will always be, trained on more or less all the open-source code out there, and by definition it will produce a statistical mix of that training data, which will be mediocre.

Which is fine, as long as people are aware of it.

thefz · 14m ago
> Why do you think AI is producing low quality code?

I asked for a very, very simple bash script once, to test its code generation abilities. The AI got it spectacularly wrong. So wrong that it was ridiculous. Here's why I think it produces low-quality code: because it does.

svantana · 15m ago
I think the analogy still holds; fast fashion is generally of higher quality than "random-person-sewed-a-shirt-at-home". At least superficially.

What the vibe-coded software usually lacks is someone (man or machine) who thought long and hard about the purpose of the code, along with extended use and testing leading to improvements.

varjag · 28m ago
An AI agent would never diss the work of a fellow AI agent!
lemiffe · 26m ago
Except when it does... CodeRabbit will sometimes review my PRs and complain about code written by Copilot or Augment. They should just fight it out among themselves.
WesolyKubeczek · 22m ago
Guess what: your code may still be low quality, you just had the misfortune of having bad reviewers.
lemiffe · 32m ago
I've been using Augment lately in dbt, PHP, and TypeScript codebases. It has been producing production-level code, creating (and running!) tests automatically, and everything always goes through multiple levels of review before merge.

Posts like these will always be influenced by the author's experience with specific tools, by what languages they use (lesser-used languages/frameworks presumably have less training material, thus lower-quality output), and by the choice of LLM that powers the tool behind the scenes.

I think it is a 'your mileage may vary' situation.

mastazi · 22m ago
Many of the statements in the article would have been correct in 2023. OP sounds like he is judging stuff he doesn't have a lot of experience with, a bit like my grandma when she used to tell me how bad hip hop music is.
wobfan · 13m ago
This doesn't make any sense; a lot of the statements are especially true now and would've been wrong in 2023. Your comment sounds like a weak defense instead, like saying "ahh, you just don't get hip hop, you're too old" to your grandma.
kmac_ · 21m ago
Vibe coding is not a 0/1 skill. LLMs generate code as the prompt says, so when you ask for what you want, you get it. If you want a specific pattern or architecture, explicitly ask for that. It works really well when you (not the LLM) drive the development.
serial_dev · 13m ago
This has not been my experience. Many times I ask explicitly for what I want, and the tools fail to deliver. Of course, don’t throw the baby out with the bathwater: plenty of times they are helpful and can really deliver 10x improvements. The key is recognizing the patterns of when these tools can deliver and when they can’t. Don’t forget to reevaluate every couple of weeks, as the tools improve all the time.
exitb · 14m ago
Wasn't the original definition of vibe coding asking us to "forget that the code even exists"?
ApeWithCompiler · 15m ago
Prompt: "Please make the code secure and apply best practices where possible..."
dncornholio · 18m ago
This is true, but it also made me realize I usually spend more time telling the AI what to do than I would have spent figuring it out myself.
mettamage · 8m ago
> TL;DR: My take on AI for programming and "vibe coding" is that it will do to software engineering what fast fashion did to the clothing industry: flood the market with cheap, low-quality products and excessive waste.

This metaphor is too limiting though. You can do so much more with software than you can with clothes. Take a look at what injidup wrote: people are creating small home-brewed projects for personal use.

So a lot of "fast fashion software" is going to be used at home. And let's face it: for our own home-brewed projects for personal use, standards have always been lower, because we know our own requirements.

I think in this "shadow economy of personal software use", LLMs are a boon.

zhaohan_dong · 25m ago
Everybody loathes fast fashion but look at their revenue.
klabb3 · 13m ago
Clothes (today) are fungible. Some software products are too, like prototype experiences. But those are so uncommon that developers who got to do greenfield coding made their peers jealous. This is because most software lasts for a long time. So long, in fact, that prototypes "turning into" production systems for many years is a meme. If you look at the value built in large companies today, it’s not code but predominantly data. If we didn’t have to care about data from legacy systems, we’d be a lot faster too.

Is there a market for such fast-rotating code? Yes. And that market will probably grow as it gets flooded with cheap labor and attention – I’m sure people will find new use cases as well. But, crucially, this is not the market we have. You can bet all you want on AI, but in all likelihood the market's needs will largely remain the same.

FugeDaws · 22m ago
Whenever I try to use any of the LLMs to write code I need for work, they fail miserably. Or sometimes it works, and I scan over what they've done to make it work, and it looks absolutely like a madman has written it.

The only use I seem to get out of LLMs at work is writing mundane, brainless stuff like arrays for JSON responses, etc., which saves me five minutes so I can browse ycombinator and write these comments.

danielbln · 21m ago
What's "LLMs"?

Free ChatGPT?

Codex?

Jules?

Cline?

Cursor?

Claude Code with Opus?

Tight leash with conventions, implementation plan?

YOLO vibe coding?

dncornholio · 20m ago
This might be the first article in the history of AI that I can truly stand behind.
anonzzzies · 24m ago
I don't know what fast fashion is, but we definitely get production-grade, high-quality code from AI.