Vibe coding is the fast fashion industry of software engineering

50 pdelboca 62 8/1/2025, 9:27:46 AM pdelboca.me ↗

Comments (62)

yoaviram · 11h ago
It seems to me that the ongoing “vibe coding” debate on HN, about whether AI coding agents are helpful or harmful, often overlooks one key point: the better you are as a coder, the less useful these agents tend to be.

Years ago, I was an amazing C++ dev. Later, I became a solid Python dev. These days, I run a small nonprofit in the digital rights space, where our stack is mostly JavaScript. I don’t code much anymore, and honestly, I’m mediocre at it now. For us, AI coding agents have been a revelation. We are a small team lacking resources and agent let us move much faster, especially when it comes to cleaning up technical debt or handling simple, repetitive tasks.

That said, the main lesson I learned about vibe coding, or using AI for research and any other significant task, is that you must understand the domain better than the AI. If you don’t, you’re setting yourself up for failure.

naiv · 11h ago
I think it's the opposite , the better you are as a coder and know your domain, the better you can use ai tools. someone with no expertise is set up for failure
simonw · 11h ago
Totally agree. I see LLM assistance as a multiplier on top of your existing expertise. The more experience you have the more benefit you can get.
d4rkp4ttern · 11h ago
Indeed, I can predict a huge gulf between pre-vibe senior engs and post-vibe lazy learners: the seniors get massive amplification and meanwhile those on the ground floor are not learning, and even gradually loose what little they did learn
CompoundEyes · 10h ago
Domain knowledge is key I agree. I think we’re going to see waterfall development come back. Domain experts, project managers and engineers gathering requirements and planning architecture up front in order to create the ultra detailed spec needed for the agents to succeed. Between them they can write a CLAUDE.md file, way of working (“You will do TDD, update JIRA ticket like so”) and all the supporting context docs. There isn’t the penalty anymore for waterfall since course corrections aren’t as devastating or wasteful of dev hours.
yoaviram · 9h ago
TDD seems to be a good strategy if you trust the AI not to cheat by writing tests that always pass.
mettamage · 11h ago
> That said, the main lesson I learned about vibe coding, or using AI for research and any other significant task, is that you must understand the domain better than the AI. If you don’t, you’re setting yourself up for failure.

Only if you fully trust it works. You can also first take time to learn about the domain and use AI to assist you in learning it.

This whole thing is really about assistance. I think in that sense, OpenAI's marketing was spot on. LLMs are good at assisting. Don't expect more of them.

cityofplain · 6h ago
The only "overlooked" part of "vibe coding" conversations on HN appear to be providing free training for these orgs that host the models, and the environmental and social impact of doing so.
koakuma-chan · 12h ago
Why do you think AI is producing low quality code? Before I started using AI, my code was often rejected as "didn't use thing X" or "didn't follow best practice Y" but ever since I started coding with AI, that was gone. Works especially well when the code is being reviewed by a person who is clueless about AI.
ben_w · 12h ago
I've seen plenty of mediocre, even bad, code from real humans who didn't realise they were bad coders. Yet: while LLMs often beat those specific humans, I do also see LLMs doing their own mistakes.

LLMs are very useful tools, but if they were human, they'd be humans with sleep deprivation or early stage dementia of some kind.

mrweasel · 12h ago
That's a good point. The majority of human programmers aren't exactly super talented either, and now due to AI many have now lost all hope for personal development, but that's their choice.

All code needs to be carefully scrutinized, AI generated or not. Maybe always prefix your prompt with: "Your operations team consists of a bunch and middel aged angry Unix fans, who will call you at 3:00AM if your service fails and belittle your abilities at the next incidents review meeting.".

As for the 100% vibe coders, please let them. There's plenty of good money to be made cleaning up after them and I do love refactoring, deleting code and implementing monitoring and logging.

tgv · 11h ago
My experience is that it does produce low quality code. Perhaps I tried some unusual stuff, but the other day, I had a simple problem in JavaScript: you have an image, add a gray border around it, and keep it within given width/height limits. I figured that should be common enough for whatever OpenAI model I was using to generate useable code. It started with doing something straightforward with a good-looking Math.min operation, and return a promise to a canvas. I asked "why is this returning a promise?", and of course it answered that I was right and removed the promise. Then it turned out that if the image was larger than limits, it would simply draw over the borders. Now I had to add that it should scale the image. It made an error in the scaling. IIRC, I had to tell it that both dimensions should be scaled identically, which led a few more trials before it looked like decent code. It's a clueless junior that has been kicked out of boot camp.

What it does do perfectly: convert code from one language to another. It was a fairly complex bit, and the result was flawless.

mettamage · 11h ago
> My experience is that it does produce low quality code. Perhaps I tried some unusual stuff, but the other day

I've seen both happen. Sometimes it produced fairly good quality code on small problem domains. Sometimes it produced bad code on small problem domains.

The code is always not that great to bad at big problem domains.

jeltz · 10h ago
From using it, working with people who use it and from reviewing AI generated code. The AI generated code is typically on par with people like QA or sysadmins who do not code as their primary job.

If someone on my team who was a software engineer and not very junior consistently produced such low quality code I would put them on a performance improvement plan.

svantana · 11h ago
I think the analogy still holds; fast fashion is generally of higher quality than "random-person-sewed-a-shirt-at-home". At least superficially.

What the vibe-coded software usually lacks is someone (man or machine) who thought long and hard about the purpose of the code, along with extended use and testing leading to improvements.

thefz · 11h ago
> Why do you think AI is producing low quality code?

I asked for a very, very simple bash script to test code generation abilities once. The AI got it spectacularly wrong. So wrong that it was ridiculous. Here's my reason why I think it does produce low quality code; because it does.

KronisLV · 11h ago
I feel like sooner or later the standard for these types of discussions should become:

> "Here's a link to the commits in my GitHub repo, here's the exact prompts and models that were used that generated bad output. This exact example proves my point beyond a doubt."

I've used Claude Sonnet 4 and Google Gemini 2.5 Pro to pretty good results otherwise, with RooCode - telling it what to look for in a codebase, to come up with an implementation plan, chatting with it about the details until it fills out a proper plan (sometimes it catches edge cases that I haven't thought of), around 100-200k tokens in usually it can knock out a decent implementation for whatever I have in mind, throw in another 100-200k tokens and it has made the tests pass and also written new ones as needed.

Another 200k-400k for reading the codebase more in depth and doing refactoring (e.g. when writing Go it has a habit of doing a lot of stuff inline instead of looking at the utils package I have, less of an issue with Spring Boot Java apps for example cause there the service pattern is pretty common in the code it's been trained on I'd reckon) although adding something like AI.md or a gradually updated CODEBASE.md or indexing the whole codebase with an embedding model and storing it in Qdrant or something can help to save tokens there somewhat.

Sometimes a particular model does keep messing up, switching over to another and explaining what the first one was doing wrong can help get rid of that spiraling, other times I just have to write all the code myself anyways because I have something different in mind, sometimes stopping it in the middle of editing a file and providing additional instructions. On average, still faster than doing everything manually and sometimes overlooks obvious things, but other times finds edge cases or knows syntax I might not.

Obviously I use a far simpler workflow for one off data transformations or knocking out Bash scripts etc. Probably could save a bunch of tokens if not for RooCode system prompt, that thing was pretty long last I checked. Especially good as a second set of eyes without human pleasantries and quick turnaround (before actual human code review, when working in a team), not really nice for my wallet but oh well.

simonw · 11h ago
How long ago? What model? How did you prompt it?
smartmic · 11h ago
This is because AI-generated code will always be mediocre. There is so much poor-quality code in the training base that it will dilute the high-quality sources. AI is not a craftsman who builds on his own high standards, but rather a token grinder tool calibrated to your prompts. Even if you describe your problem (prompt) to a high standard, there is no way it can deliver a solution of the same standard. This will be true in 2025 as it was in 2023 and probably always will be.
simonw · 11h ago
I just don't think that's true.

The LLM vendors are all competing on how well their models can write code, and the way they're doing that is to refine their training data - they constantly find new ways to remove poor quality code from the training data and increase the volume of high quality code.

One way they do this is by using code that passes automated tests. That's a unique characteristic of code - you can't do that for regular prose, or legal analysis or whatever.

"Even if you describe your problem (prompt) to a high standard, there is no way it can deliver a solution of the same standard."

My own experience doesn't match that. I can describe my problems to a good LLM and get back code that I would have been proud to have written myself.

wobfan · 11h ago
100%. Generative AI is and will always be trained on more or less all open source code that is out there, and by definition, from the training data, it will create a mix of this, which will statistically be mediocre.

Which is fine, as long as people are aware of it.

varjag · 12h ago
An AI agent would never diss the work of a fellow AI agent!
lemiffe · 12h ago
Except when it does... CodeRabbit will review my PRs sometimes complaining about code written by Copilot or Augment. They should just fight it off between themselves.
WesolyKubeczek · 12h ago
Guess what, your code may still be low quality, you just have had a misfortune of having bad reviewers.
_seiryuu_ · 10h ago
The 'vibe coding as fast fashion' analogy is interesting, and the article makes some valid points about code quality, maintenance burden, and the 'don't build it' philosophy. As an OSS maintainer, the 'who's going to maintain it?' question hits home.

However, I find the analogy a bit off the mark. LLMs are, fundamentally, tools. Their effectiveness and the quality of output depend on the user's expertise and domain knowledge. For prototyping, exploring ideas, or debugging (as the author's Docker Compose example illustrates), they can be incredibly powerful (not to mention time-savers).

The risk of producing bloated, unmaintainable code isn't new. LLMs might accelerate the production of it, but the ultimate responsibility for the quality and maintainability still rests with the person pressing the proverbial "ship" button. A skilled developer can use LLMs to quickly iterate on well-defined problems or discard flawed approaches early.

I do agree that we need clearer definitions of 'good quality' and 'maintainable' code, regardless of AI's role. The 'YMMV' factor is key here: it feels like the tool amplifies the user's capabilities, for better or worse.

andrenotgiant · 11h ago
I think it's revealing that a group that historically values making decisions based on verifiable and accurate information is now jumping to discredit "Vibe Coding" based on rumors that are easily disproven.

1. Tea App wasn't vibe coding - It was built before vibe coding and the leak was incorrectly secured Firebase https://simonwillison.net/2025/Jul/26/official-statement-fro...

2. Replit "AI Deleted my Database" drama was caused by guy getting inaccurate AI support. All he needed to do was click a "Rollback Here" button to instantly recover all code and data. https://x.com/jasonlk/status/1946240562736365809

What does this eagerness to discredit vibe coding say about us?

wand3r · 11h ago
It's just human nature. Technology advances exponentially and huge leaps forward are increasingly being compressed into very short amounts of time. Just like how sages were worried about memory after the invention of writing or luddites were worried about job displacement with the industrial revolution; it is natural. What would the Italian Renaissance artists think of Photoshop? People whose livelihood and identity are inherently tied to this discipline can't help but be dismissive. "Vibe Coding" will be "coding" or "programming" in the near future, likely in just a few years as tools evolve. Just like we use text editors and GUIs now to do computing instead of punch cards and a single CLI.
lemiffe · 12h ago
I've been using Augment lately in dbt, PHP, and Typescript codebases, and it has been producing production-level code, it has been creating (and running!) tests automatically, and always goes through multiple levels of review before merge.

Posts like these will always be influenced by the author's experience with specific tools, in addition to what languages they use (as I can imagine lesser-used languages/frameworks will have less training material, thus lower quality output), as well as the choice of LLM that powers it behind the scenes.

I think it is a 'your mileage may vary' situation.

xnx · 5h ago
Novice programmer: AI is really useful

Middling programmer: Don't use AI. It creates bad legacy code that no one understands and is hard to debug. Machines will never write code as beautiful as true human artisans. Even if your saving time, you're actually wasting time.

Advanced programmer: AI is really useful

injidup · 11h ago
I knocked up a VSCode plugin in a few hours that extracted a JSON file from a zip file generated by the clang C++ static analyzer, parsed it into the VSCode problems and diagnostic view and provided quick fixes for simple things. All of this without hardly knowing or caring about java script or how the NPM tool chains work. Just kept taking screen shots of VSCode and saying what I wanted to go where and what the behaviour should be. When I was happy with certain aspects such as code parsing and patching I got it to write unit tests to lock in certain behaviours. If anyone tells you LLM's are just garbage generators they are not using the tools correctly.
wobfan · 11h ago
I feel like no one has read the article, instead anyone jumps on the defense and says "but my AI code is good!!!!". It's not even about the quality, and no one said that AI just produces garbage, especially with newer models.
danielbln · 11h ago
From the top of the article:

> My take on AI for programming and "vibe coding" is that it will do to software engineering what fast fashion did to the clothing industry: flood the market with cheap, low-quality products and excessive waste.

wobfan · 11h ago
Where is this saying that LLMs code quality is bad?

> cheap, low-quality products

Product quality != code quality.

danielbln · 11h ago
"Product" is referring to the fast fashion analogy. The author is clearly saying that AI for programming and "vibe coding" will lead to bad code. Otherwise the analogy doesn't make sense. Why would AI programming lead to bad products but not also bad code?
injidup · 11h ago
I don't claim the AI code is good or bad. It generated me a tool I needed in a short time in a domain where I don't have enough knowledge. I have other high value work to do. I got the tool I wanted and moved on. I didn't even bother code reviewing it. It's like you ask an intern to knock you up a tool to do a job and instead of two days waiting you get it in a few hours.
Flavius · 11h ago
I feel like I don't need to read articles with inflammatory headlines.
wobfan · 11h ago
That's your choice, but then you also shouldn't take part in the comments, IMO.
occz · 11h ago
Nor do you really need to comment on them, in that case.
Flavius · 11h ago
LLM naysayers don't care about the real world use cases or logic. They care about their agenda.
mastazi · 12h ago
Many of the statements in the article would have been correct in 2023. OP sounds like he is judging stuff he doesn't have a lot of experience with, a bit like my grandma when she used to tell me how bad hip hop music is.
wobfan · 11h ago
This doesn't make any sense, a lot of the statements are especially true now, and would've been wrong in 2023. Your comment sounds like a weak defense instead, like saying "ahh you just don't get hip hop, you're too old" to your grandma.
spectaclepiece · 11h ago
On a plane from Sydney to Tokyo. Just "vibe coded" a tool we've needed for years in a matter of hours. Web workers, OPFS file management, e2e tests via playwright, Effect service encapsulation and mocking etc.

If you know the domain it's a 3-6X efficiency improvement.

Amazing how well LLMs work on airplane wifi. Just text after all.

kmac_ · 12h ago
Vibe coding is not a 0/1 skill. LLMs generate code as the prompt says, so when you ask what you want, you get it. If you want a specific pattern or architecture, explicitly ask for that. It works really well when you (not the LLM) drive the development.
anonzzzies · 9h ago
And how many times do you have to correct that? Because I have not seen that in practice. With Opus / Claude Code it will follow instructions until it doesn't and then suddenly, in the SAME prompt/todo agent blabber, it creates a web worker instead of using Bull which it did first for exactly the same case. Sure, context overload, but this is the same prompt. And the models claiming larger context like Gemini don't fare any better (well, worse actually).

Now you might tell me; make the tasks smaller and more focused, yes, that's true, it performs better, but then i'm just faster myself coding. Most what I use CC for is exploration & frontend; generating slabs of mind numbing crap react and tailwind is fantastic. But for most other stuff, we have patterns and libs in place where i'm much faster and better than AI as it's not much code, just more logic/thinking.

serial_dev · 11h ago
This has not been my experience. Many times I ask explicitly what I want, and the tools fail to deliver. Of course, don’t throw the baby out with the bathwater, plenty of times they are helpful and can really deliver 10x improvements. The key is recognizing the patterns when these tools can deliver and when they can’t. Don’t forget to reevaluate every couple of weeks, as the tools improve all the time.
exitb · 11h ago
Wasn't the original definition of vibe coding asking us to "forget that the code even exists"?
simonw · 11h ago
It was, but annoyingly a lot of people now use "vibe coding" to mean any form of AI-assisted development.

It's got to the point where if someone talks about "vibe coding" you have to confirm with them which definition they are using, because otherwise you risk people talking right past each other because they're not actually talking about the same thing.

kmac_ · 9h ago
Yes, the term isn't universally defined yet. Also, the number of available tools doesn't help here (e.g., my workflow in Cursor and CC differ). My "vibe coding" is probably different than blog post authors' "vibe coding." At my work, we have "AI in dev" meetings to discuss the topic, the variety of opinions resembles comments here, and we try to boil down where the issue is.
ninkendo · 10h ago
> people talking right past each other because they're not actually talking about the same thing.

Just like every HN thread about vibe coding!

ApeWithCompiler · 11h ago
Prompt: "Please make the code secure and apply best practices where possible..."
dncornholio · 12h ago
This is true, but also made me realize I usually spend more time telling the AI what to do then if I would have figured it out myself.
Argonaut998 · 11h ago
I refuse to believe “__vibe__ coding” (I hate that word) is even a thing outside of startups and hobbyists. It just seems like a cute phrase coined by Karpathy now solely exists to create articles for clicks.
zhaohan_dong · 12h ago
Everybody loathes fast fashion but look at their revenue.
klabb3 · 11h ago
Clothes (today) are fungible. Some products are, like prototype experiences. Those are so uncommon that developers who got to do green field coding were making their peers jealous. This is because most software lasts for a long time. So long, in fact, that prototypes ”turning into” production systems for many years is a meme. If you look at the value built in large companies today, it’s not code, but predominantly data. If we didn’t have to care about data from legacy systems, we’d be a lot faster too.

Is there a market for such fast rotating code? Yes. And that market will probably grow, due to it getting flooded with cheap labor and attention – I’m sure people will find new use cases as well. But, crucially, this is not the market we have. You can bet all you want on AI, but in all likelihood the market needs will largely remain the same.

FugeDaws · 12h ago
Whenever i try to use any of the LLMs to do code i need for work they fail miserably or sometimes it works and i scan over what theyve done to make it work and it looks absolutely like a mad man has written it.

The only use i seem to get out of LLMs with my work is writing mundane brainless stuff like arrays for json responses etc which saves me 5 minutes so i can browse ycombinator and write these comments

danielbln · 12h ago
What's "LLMs"?

Free ChatGPT?

Codex?

Jules?

Cline?

Cursor?

Claude Code with Opus?

Tight leash with conventions, implementation plan?

YOLO vibe coding?

FugeDaws · 9h ago
All of the above really. I try multiple I have claude 4 sonnet subscription chatgpt pro subscription. Ive got a cursor subscription some of them are better than others but most of the time the code they write are so long winded i just do it myself. Its not bad for when i need a bit of php though
mettamage · 11h ago
> TL;DR: My take on AI for programming and "vibe coding" is that it will do to software engineering what fast fashion did to the clothing industry: flood the market with cheap, low-quality products and excessive waste.

This metaphor is too limiting though. You can do so much more with software than you can with clothes. Take a look at what injidup wrote. People are creating small home brewed projects for personal use.

So a lot of "fast fashion software" is going to be used at home. And let's face it, for our own home brewed projects for personal use, standards have always been lower because we know our own requirements.

I think in this "shadow economy of personal software use", LLMs are a boon.

dncornholio · 12h ago
This might be the first article in the history of AI that I can truly stand behind.
anonzzzies · 12h ago
I don't know what fast fashion is, but we definitely get production grade high quality code from AI.
Argonaut998 · 11h ago
But if it’s production code then surely you are scrutinising it heavily. That’s hardly “vibe coding” it’s just using a LLM to aid development.
anonzzzies · 9h ago
Yeah, I agree. The term vibe code seems to be heavily bastardised by now and I did it as well. I should not. For me it's english to code without ever seeing the code really. So I take back my comment in context of this article but it was more also a comment to the author who seems to just have not used the newer systems?