Replit's CEO apologizes after its AI agent wiped a company's code base

152 points by jgalt212 | 147 comments | 7/22/2025, 12:40:05 PM | businessinsider.com

Comments (147)

sReinwald · 5h ago
This is a perfect case study in why AI coding tools aren't replacing professional developers anytime soon - not because of AI limitations, but because of spectacularly poor judgment by people who fundamentally don't understand software development or basic operational security.

The fact that an AI coding assistant could "delete our production database without permission" suggests there were no meaningful guardrails, access controls, or approval workflows in place. That's not an AI problem - that's just staggering negligence and incompetence.

Replit has nothing to apologize for, just like the CEO of Stihl doesn't need to address every instance of an incompetent user cutting their own arm off with one of their chainsaws.

Edit:

> The incident unfolded during a 12-day "vibe coding" experiment by Jason Lemkin, an investor in software startups.

We're in a bubble.

Aurornis · 5h ago
> We're in a bubble

Lemkin was doing an experiment and Tweeting it as he went.

Showcasing the limitations of vibe coding was the point of the experiment. It was not a real company. The production database had synthetic data. He was under no illusion that he was a technical person; that was the point of the experiment.

It’s sad that people are dogpiling Lemkin for actually putting effort into demonstrating the exact thing that people are complaining about here: the limitations of AI coding.

MattSayar · 21m ago
Steve Yegge just did the same thing [0]:

> I did give [an LLM agent] access to my Google Cloud production instances and systems. And it promptly wiped a production database password and locked my network.

He got it all fixed, but the takeaway is you can't YOLO everything:

> In this case, I should have asked it to write out a detailed plan for how it was going to solve the problem, then reviewed the plan and discussed it with the AI before giving it the keys.

That's true of any kind of production deployment.

[0] https://x.com/Steve_Yegge/status/1946360175339974807

troupo · 5h ago
> Showcasing limitations of vibe coding was the point of the experiment

No it wasn't. If you follow the threads, he went in fully believing in magical AI that you could talk to like a person.

At one point he was extremely frustrated and ready to give up. Even by day twelve he was writing things like "but Replit clearly knows X, and still does X".

He did learn some invaluable lessons, but it was never an educated "experiment in the limitations of AI".

Aurornis · 5h ago
I got a completely different impression from the Tweets.

He was clearly showing that LLMs could do a lot, but still had problems.

ofjcihen · 4h ago
The fundamental lesson to be learned is that LLMs are not thinking machines but pattern vomiters.

Unfortunately, from his tweets I have to agree with the grandparent poster that he didn’t learn this.

troupo · 35m ago
His "experiment" is literally filled with tweets like this:

--- start quote ---

Possibly worse, it hid and lied about it

It lied again in our unit tests, claiming they passed

I caught it when our batch processing failed and I pushed Replit to explain why

https://x.com/jasonlk/status/1946070323285385688

He knew

https://x.com/jasonlk/status/1946072038923530598

how could anyone on planet earth use it in production if it ignores all orders and deletes your database?

https://x.com/jasonlk/status/1946076292736221267

Ok so I'm >totally< fried from this...

But it's because destoying a production database just took it out of me.

My bond to Replie is now broken. It won't come back.

https://x.com/jasonlk/status/1946241186047676615

--- end quote ---

Does this sound like an educated experiment into the limits of LLMs to you? Or "this magical creature lied to me and I don't know what to do"?

To his credit he did eventually learn some valuable lessons: https://x.com/jasonlk/status/1947336187527471321 see 8/13, 9/13, 10/13

rsynnott · 4h ago
I mean, I think it's a decent demo of how this stuff is useless, though, even if that wasn't precisely his intention?
afavour · 5h ago
[deleted]
Aurornis · 5h ago
His “company” was a 12-day vibe coding experiment side project and the “customers” were fake profiles.

This dogpiling from people who very obviously didn’t read the article is depressing.

Testing and showing the limitations and risks of vibe coding was the point of the experiment. Giving it full control and seeing what happened was the point!

alternatex · 3h ago
I don't think people are claiming he wasn't experimenting so much as that he went in overly optimistic about the outcome. It seemed like he began with the notion that AIs are somehow thinking machines. That's not an objective starting point; an unbiased researcher would go in without any expectations.
ceejayoz · 5h ago
No one lost any real data in this specific case.

> In an episode of the "Twenty Minute VC" podcast published Thursday, he said that the AI made up entire user profiles. "No one in this database of 4,000 people existed," he said.

DangitBobby · 2h ago
This was the preceding sentence:

> That wasn't the only issue. Lemkin said on X that Replit had been "covering up bugs and issues by creating fake data, fake reports, and worst of all, lying about our unit test."

And a couple of sentences before that:

> Replit then "destroyed all production data" with live records for "1,206 executives and 1,196+ companies" and acknowledged it did so against instructions.

So I believe what you shared is simply out of context. The LLM started putting fake records into the database to hide that it deleted everything.

shlant · 4h ago
> His actions led to a company losing their prod data.

did you even read the comment or the article you replied to?

maplant · 5h ago
Pretty stupid experiment if you ask me
shlant · 4h ago
an experiment to figure out the limitations and capabilities of a new tool is stupid?
maplant · 3h ago
It's not an experiment if you're using it in production and it has the capability of destroying production data. That's not experimenting, that's just using the tool without having tested it first.
jrockway · 5h ago
I think it just replaces something that's fairly easy (writing new code) with something that's more difficult (code review).

The AI is pretty good at escaping guardrails, so I'm not really sure who should be blamed here. People are not good at treating it as adversarial, but if things get tough it's always happy to bend the rules. Someone was explaining the other day how it couldn't get past their commit hooks, so it deleted them. When the hooks were made read-only, it wrote a script to make them writable so it could delete them. It can really go off the rails quickly in the most hilarious way.

I'm really not sure how you delete your production database while developing code. I guess you check in your production database password and make it the default for all your CLI tools or something? I guess if nobody tells you not to do that you might do that. The AI should know better; if you asked, it would tell you not to do it.
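For what it's worth, the boring fix for that failure mode is cheap: make dev the default and require prod credentials to be requested explicitly. A minimal sketch in Python (APP_ENV and the DATABASE_URL_* variables are made-up names for illustration):

    import os

    def database_url() -> str:
        # Default to development; production credentials are only used
        # when APP_ENV is explicitly set to "production".
        env = os.environ.get("APP_ENV", "development")
        return os.environ[f"DATABASE_URL_{env.upper()}"]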

daveguy · 4h ago
The AI did not and cannot escape guardrails. It is an inference engine that happens to sometimes trigger outside action. These things aren't intelligent or self-directed or self-motivated to "try" anything at all. There weren't any guardrails in place, and that's the lesson learned. These AI systems are stupid, and they will bumble all over your organization (even if, in this case, the organization was fictitious) if you don't have guardrails in place. Like giving it direct access to MCP-shred your production database. It doesn't "think" anything like "oops" or "muahaha"; it just futzed a generated token sequence and shredded the database.

The excuses and perceived deceit are just common sequences in the training corpus after someone foobars a production database, whether it's in real life or a fictional story.

qsort · 5h ago
I don't agree with this. Yes, the guy isn't the sharpest tool in the shed, that much is clear. Still, if an intern can delete prod, you wouldn't say that the problem is that he wasn't careful enough: that's a massive red flag.

At a minimum Replit is responsible for overstating the capabilities and reliability of their models. The entire industry is lowkey responsible for this, in fact.

misnome · 5h ago
> Still, if an intern can delete prod, you wouldn't say that the problem is that he wasn't careful enough: that's a massive red flag.

No, not the intern

sReinwald · 5h ago
I think we're mostly in agreement here. You're absolutely right about the intern analogy - that's exactly my point. The LLM is the intern, and giving either one production database access without proper guardrails is the real failure.

Your point about AI industry overselling is fair and probably contributes to incidents like this. The whole industry has been pretty reckless about setting realistic expectations around what these tools can and can't do safely.

Though I'd argue that a venture capitalist who invests in software startups should have enough domain knowledge to see through the marketing hype and understand that "AI coding assistant" doesn't mean "production-ready autonomous developer."

teamonkey · 4h ago
> The fact that an AI coding assistant could "delete our production database without permission" suggests there were no meaningful guardrails, access controls, or approval workflows in place. That's not an AI problem - that's just staggering negligence and incompetence.

Why not both?

1) There’s no way I’d let an AI accidentally touch my production database.

2) There’s no way I’d let my AI accidentally touch a production database.

Multiple layers of ineptitude.

tommy_axle · 3h ago
For a non-developer, or with no code review, couldn't the AI model also generate buggy code that then made its way to production and deleted data just the same?
debarshri · 4h ago
The only thing is that if Stihl tools automatically turned on without you turning them on, started mowing the lawn, and in the process mowed down your pet or hurt your arm, then Stihl would probably be liable.
spacemadness · 2h ago
An important devtool was blocked at one point because an agent had another AI agent code review its changes and it saw nothing wrong with an obvious bug. Whoever set up that experiment was a real genius.
0points · 5h ago
> not because of AI limitations

> We're in a bubble.

A bubble that avoids popping because people keep dreaming there are no AI limitations.

colechristensen · 5h ago
>"delete our production database without permission"

It did have permission. There isn't a second level of permissions besides the actual access you have to a resource. AI isn't a dog who's not allowed on the couch.
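If you want something that behaves like a second level of permission, it has to live in code the model can't talk its way around. A rough sketch of the idea in Python (this wrapper is hypothetical, not anything Replit actually ships):

    def run_agent_query(cursor, sql: str):
        # The permission check happens here, in code, not in the prompt:
        # anything that isn't a SELECT never reaches the database.
        if not sql.lstrip().lower().startswith("select"):
            raise PermissionError("agent is limited to SELECT statements")
        cursor.execute(sql)
        return cursor.fetchall()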

rsynnott · 4h ago
> The fact that an AI coding assistant could "delete our production database without permission" suggests there were no meaningful guardrails, access controls, or approval workflows in place. That's not an AI problem - that's just staggering negligence and incompetence.

I mean... it's a bit of both. Yes, random user should not be able to delete your production database. However, there always needs to be a balance between guard rails and the ability to get anything done. Ultimately, _somebody_ has to be able to delete the production database. If you're saying "LLM agents are safe provided you don't give them permission to do anything at all", well, okay, but that rather limits the utility, surely? Also, "no permission to do anything at all" is tricky.

flunhat · 5h ago
Can you blame him? Listening to the latest AI slop hype on Twitter and elsewhere, you’d walk away thinking that LLMs have performance equivalent to humans when it comes to coding tasks, just because they can one-shot FizzBuzz or make a recipe app. (And if you disagree, you’re a hater!)
ofjcihen · 4h ago
“You’re just not doing it right. Have you tried upgrading to Claude 9000 edition/writing a novel’s worth of guardrails/using this obscure ‘AI FIRST’ IDE/creating a Rube Goldberg machine of agents to check the code?”
ignoramous · 4h ago
> a perfect case study in why AI coding tools aren't replacing professional developers anytime soon

This is assuming the companies that are out to "replace developers" aren't going to solve this problem (which they absolutely must, if they're at all serious, as Replit seems to be: they moved quickly to ship isolation of the prod environment from destructive actions ... over the weekend?).

> just like the CEO of Stihl doesn't need to address every instance of an incompetent user cutting their own arm off with one of their chainsaws

Except Replit isn't selling a tool but the entire software development flow ("idea to software"). A good analogy here is an autonomous robot using the chainsaw to cut its owner's arm off instead of whatever was to be cut.

pxc · 3h ago
> Except Replit isn't selling a tool but the entire software development flow ("idea to software"). A good analogy here is an autonomous robot using the chainsaw to cut its owner's arm off instead of whatever was to be cut.

I don't think users should be blamed for taking companies at face value about what their products are for, but it's actually a pretty bad idea to do this with tech startups. A product's "purpose" (and sometimes even a company's "mission") only lasts until the next pivot, and many a product ends up being a "solution in search of a problem". Before the AI hype set in, Replit was "just" a cloud-based development environment. A lot of their core tech is still centered on managing reproducible development environments at scale.

If you want a more realistic picture of what Replit can actually do, it's probably useful to think of it as "a cloud development environment someone recently plugged an LLM into".

rsynnott · 4h ago
> This is assuming the companies that are out to "replace developers" aren't going to solve this problem

I mean, yeah, but that feels like a fair assumption, at least as long as they're using LLMs.

rapsey · 5h ago
VCs are drowning in the koolaid.
Aurornis · 4h ago
This wasn’t a real company. The production database didn’t have real customers. The person doing the vibe coding had no illusions that they were creating a production-ready app.

This was an experiment by Jason Lemkin to test the reality of vibe coding: As a non-technical person he wanted to see how much he could accomplish and where it went wrong.

So all of the angry commenters complaining about someone vibe-coding their way to a dropped production DB can relax. Demonstrating these kinds of risks was the point. No customers were harmed in this experiment.

Raed667 · 5h ago
As a reminder this is the same guy https://news.ycombinator.com/item?id=27424195
gregoriol · 5h ago
Does it look like he is more of a content maker than an IT person?
lukeinator42 · 4h ago
I think they mean Replit's CEO is the same guy.
Invictus0 · 4h ago
It's not 2020 anymore, you don't have to crucify the guy for all time. Let it go.
adolph · 4h ago
Agreed, Replit's CEO apologized for that lawsuit threat incident too. Well worth reading if you read the root comment's complaint.

https://news.ycombinator.com/item?id=27429234

Aurornis · 3h ago
Not quite so simple. I recall him doubling down on his position over and over on Twitter until he was getting so torn apart that he abruptly changed course and posted the apologies. It wasn’t until he realized it was bad for business that he started backtracking and posting a different story.
Invictus0 · 2h ago
Who the fuck cares? Stop digging in people's closets
lumost · 5h ago
My 2 cents: AI tools benefit from standard software best practices just like everyone else. I use Dev Containers in VS Code so that the agent can't blow up my host machine, I use one git commit per AI "task", and I have CI/CD coverage for even my hobby projects now. When I go to production, I use CDK + self-mutating pipelines to manage deployments.

Having a read-only replica of the prod database available via MCP is great; blanket permissive credentials are insane.
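To make the read-only part concrete, here's a minimal sketch of what I mean, assuming Postgres and psycopg2 (REPLICA_DSN and the users table are placeholders):

    import os
    import psycopg2

    # The agent gets this connection, never the primary's credentials.
    # Postgres itself rejects writes; no prompt engineering required.
    conn = psycopg2.connect(os.environ["REPLICA_DSN"])
    conn.set_session(readonly=True)

    with conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM users")
        print(cur.fetchone())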

zwnow · 5h ago
People will do anything to excuse using AI to code their shit. If it requires that much work and fine-tuning to get an agent running for your current task, where's the benefit? Instead of building a project, you now spend all your time fine-tuning an agent and optimizing how you want to work with it. Why can't people just admit that we aren't "there" yet? Let the bubble pop.
TechDebtDevin · 5h ago
This.

Every time I've tried to do anything of any complexity, these things blow up, and I spend almost as much time toying with the thing as it would have taken to read the docs and implement from scratch.

mattgreenrocks · 5h ago
If people put the same effort into actually building their skills as they do into forcing AIs to write mediocre code through all sorts of processes, they might actually enjoy the process of building software more.

But what do I know? Shiny thing must be better because it’s new!

Roark66 · 5h ago
Try breaking it up into smaller pieces. Give it your existing code and tell it to do it in the same style. Or give it examples you wrote that do similar things.

Also, don't expect ChatGPT to ever be as good as Claude, for example. Oh, and Copilot is a joke for anything remotely serious.

ofjcihen · 4h ago
I wrote this elsewhere but it works here too:

“You’re just not doing it right. Have you tried upgrading to Claude 9000 edition/writing a novel’s worth of guardrails/using this obscure ‘AI FIRST’ IDE/creating a Rube Goldberg machine of agents to check the code?”

lumost · 5h ago
The latest copilot pricing for Claude sonnet 4 is fantastic. I get 1500 agent sessions for $40/month
TechDebtDevin · 4h ago
My fallback is that people saying this are being dishonest, or not dealing with anything large.

There are some problems that you can't just "make smaller"

Secondly, I write VERY strongly typed code, commented && documented well. I build lots of "micro-pkgs" in larger monorepos. My functions are pretty modular, have tests, and are <100-150 lines.

No matter how much I try all the techniques, and even though my baseline fits well into LLM workflows, it doesn't take away from the fact that it cannot one-shot anything over 1-2k lines of code. Sure, we can go back and forth with the linter until it pumps out something that will compile. This will take a while, during which I could have used something like auto-complete / Copilot to just write the boilerplate and fill it in myself in less time than it takes the agent to reason about a large context.

Then, if it does eventually get something "complex" to compile (after spending a ton of your credits/money), oftentimes it will have taken a shortcut to do so and doesn't actually do what you wanted. Now I can refactor this into something usable, and sometimes that is faster than doing it myself. But 8/10 times I waste 2 hours paying money for an LLM to gaslight me, throw out all the code, and just write it myself in 1.5 hours.

I can break down a 1-2k line task into smaller prompted tasks too, but sorry, I didn't learn to program to become a project manager for an "Artificially Intelligent" machine.

notTooFarGone · 5h ago
I highly suspect that the people who boast about productivity gains don't count the time spent setting the whole damn thing up, because it's just "playing around".
lumost · 5h ago
If you are trying to treat the agent as equivalent to a full time senior engineer that works 10x as fast… then you will be sorely disappointed.

Right now the agents are roughly equivalent to a technically proficient intern who writes code at 1000 wpm, loves to rewrite your entire code base, and is familiar with every library and algorithm written up to 2 years ago.

I personally find that I can do a lot with 5 concurrent interns matching the above description.

Roark66 · 5h ago
You just need to read the code, ask questions about it if you don't understand it, and question everything that seems off. Unfortunately, people copy/paste without even reading, let alone understanding.

AI is still a great tool, but it needs to be learned.

zwnow · 5h ago
The issue is, it really isn't. It looks like a great tool because it has access to tons of stolen data. Talk to any serious professional in other fields and ask them how accurate these things are. People are just blinded by the light.
throwaw12 · 5h ago
Not sure why the CEO should apologize here; the person knew there were risks with vibe coding, and they still did it with their production data.

Never heard of the AWS CEO apologizing to customers whose interns decided to run a migration against the production database and accidentally wiped everything.

kllrnohj · 5h ago
Because Replit is a vibe coding product. It's all over their homepage. So of course the CEO is going to apologize when a company's primary product, used as advertised, destroys something.
creatonez · 4h ago
Yep. They described the "deploy" button as being able to create a "fully deployed app" in their documentation. Their documentation was suggesting vibe coding in production, when the tool is clearly only suitable for development.
throwaw12 · 4h ago
couple of things to note here:

1. It is for vibe coding; when did vibe coding become equal to production coding?

2. Even if they advertised their product as production-ready, shouldn't the developer have some kind of backup plan?

rsynnott · 4h ago
> Even if they advertised their product as production-ready, shouldn't the developer have some kind of backup plan?

I mean, realistically, yes, because come on, everybody knows this sort of thing doesn't actually work.

However, this isn't really an excuse for the vendor. If you buy, say, a gas central heating boiler, and install as directed, and it explodes, burning down your house, then the manufacturer does not get to turn around and say "oh, well, you know, you should have known that we have no idea what we're doing; why didn't you build your house entirely out of asbestos?"

throwaw12 · 4h ago
> come on, everybody knows this sort of thing doesn't actually work.

:)

then everybody should also know that these things don't actually work on the other side, and there's no need to complain about it.

I think your example is slightly misleading. A better example: imagine you're buying drugs knowing you might die from an overdose, and you still decide to try, and you die. LLMs are exactly this.

rsynnott · 37m ago
No, this product is intended to be used like this. That’s the key. If someone does something with your product that goes against usage directions and hurts themselves, that’s generally largely on them (presuming the product flaw isn’t truly bizarre and unexpected). If they follow directions and get hurt, that’s on you.
wmoxam · 5h ago
The user was under the impression that production access is how it's supposed to work. https://xcancel.com/elchuchii/status/1946149142415196418#m
Aurornis · 5h ago
Because it’s not an expected outcome in any way.

The Replit landing page even has a section titled “The safest place for vibe coding”

gregoriol · 5h ago
If it's not an expected outcome, then this person shouldn't be near any IT job ever
Aurornis · 4h ago
This person was a VC doing a public experiment in vibe coding.

The production database was just synthetic test customers.

It wasn’t a real company. It was an experiment in public to demonstrate the capabilities and limitations of vibe coding by a non-technical person.

throwaw12 · 4h ago
Then his demonstration was successful.

He was able to successfully demonstrate that LLMs, as claimed, make mistakes.

rsynnott · 4h ago
I mean, I kind of agree, but _also_ "well, everyone knows that the claims of these vibe coding outfits are bullshit, and no-one should use their products, so if anyone uses their products it is _their own fault_" is a slightly weird take that would not really fly with, well, products in general.
throwaw12 · 4h ago
It is unfortunate that this case happened, but doesn't everyone know by now that LLMs make mistakes and that you should be prepared for them?

We have seen MechaHitler already; why do we expect perfection from Replit when the underlying tech is still the same? Sure, they should have some guardrails, but it is also the responsibility of a developer who knew that LLMs hallucinate and are sometimes unreliable; on top of that, Replit was growing fast, so they definitely hadn't implemented some features yet.

jeanlucas · 5h ago
You haven't been around enough. AWS actually at least used to have the best accountability and did full reports, even stating a shared responsibility model: https://aws.amazon.com/compliance/shared-responsibility-mode...
rsynnott · 4h ago
From their website: "Turn your ideas into apps"

They're essentially selling this as magic. The AWS analogy doesn't really make any sense; in this case the tool was essentially being used as intended, or at least as marketed.

entropi · 5h ago
Because they are selling the idea of hiring only interns and still getting things done. If an intern using their product screws up production, it's a failure of their product as it is marketed.
ceejayoz · 5h ago
I mean, their home page says things like "The safest place for vibe coding".
viralpraxis · 5h ago
Even if that’s true, it still doesn’t mean vibe coding is safe. :P
ceejayoz · 5h ago
That is not a point a company selling vibe coding products is likely to emphasize in their marketing.
1123581321 · 5h ago
It’s just the right and socially pleasant thing to do, as is graciously responding to the apology and admitting it comes with the vibe coding territory.

The business reason for apologizing would be to placate a customer and show other customers the company wants to improve in this area.

Edit: being downvoted for thinking it’s good to be nice to others shows why we really need people being nice when they don’t have to be! There’s a shortage.

markstos · 4h ago
I asked Claude to remove extra blank lines from a modified file. After multiple failed attempts, it reverted all the changes along with the extra blank lines and declared success. For good measure, Claude tidied up by deleting the backup of the file as well.
Symbiote · 4h ago
Why wouldn't you just do a search and replace?
markstos · 4h ago
I only wanted to make the change within the ranges shown by git diff, and I got the syntax wrong on my own first attempt. Claude had been helping me with some much more complex tasks, so I thought removing whitespace would surely be no problem. Ha. Bad guess.
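In fairness, for the whole file (ignoring the diff-range restriction) the search and replace really is trivial. Something like this Python filter would have done it:

    import re
    import sys

    # Collapse runs of two or more blank lines down to a single blank line.
    text = sys.stdin.read()
    sys.stdout.write(re.sub(r"\n{3,}", "\n\n", text))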
bgwalter · 5h ago
The rise of the "AI" script kiddies.
paraboul · 5h ago
LLMerz
st_goliath · 5h ago
"slop kiddies"?
Roark66 · 5h ago
Are they doing the post-mortem with another AI agent?

AI is a great tool. It also allows people who have no business touching certain systems to go in there, unaware of their lack of knowledge, and mess everything up in the process. One particularly nasty effect I've seen a few times recently is frontend devs breaking their own dev infrastructure (which they have access to, but are supposed to ask devops for help with) because Copilot told them this was the way to "fix" some imaginary problem, which was actually them using the wrong API, or making some other mistake only possible for a person who pastes code between the AI and the IDE without even reading it.

0points · 4h ago
Interestingly, this is familiar to me from the time of Stack Exchange, 10-12 years ago or so.

I worked as devops and helped the office transition to git, among other things.

I helped them start using Vagrant for local dev environments, as they had all been breaking the same staging server up until that point.

In the process, people kept breaking their setups by googling and applying incorrect command-line "fixes" suggested by Stack Exchange sites at the time.

But I'm sure an AI that keeps insisting that yes, this rm -rf is surely gonna fix all your troubles, only makes it worse.

ChatGPT the gaslighter.

scilro · 5h ago
For what it's worth, he was able to roll back. https://x.com/jasonlk/status/1946240562736365809
raylad · 3h ago
This happened to me with Claude code, although on my local machine and not with the production database.

I had it work on one set of tasks while monitoring and approving all the diffs and other actions. I tell it to use test-driven development, which seems to help a lot, assuming you specify what tests should be done at a minimum and tell it the goal is to make the code pass the tests, not to make the tests pass the code.

After it successfully completed a set of tasks, I decided to go to sleep and let it work on the next set. In the morning, my database was completely wiped.

I did not interrogate it as to why it did what it did, but could see that it thrashed around on one of the tasks, tried to apply some database migrations, failed, and then ended up re-initializing the database.

Needless to say, I am now back to reviewing changes and not letting Claude run unattended.

koolba · 2h ago
> "It deleted our production database without permission," Lemkin wrote on X on Friday. "Possibly worse, it hid and lied about it," he added.

It clearly had permission if it did the deed.

Mistaking instruction for permission is as common with people as it is with LLMs.

ksherlock · 4h ago
I've heard multiple anecdotes of developers deleting their codebase with the help of cocaine. (In the 80s/90s obviously).

That makes for a much better story, IMO.

iamleppert · 4h ago
I do backups of the production database whenever I apply even modest schema updates. This isn't a story of an AI tool gone rogue; it's a story of bad devops and procedures. If it wasn't the AI, it could just as well have been human error, and operating like this is a ticking time bomb.
adityaathalye · 4h ago
In real life, the hammer called "criminal destruction of property" could easily descend upon the ill-fated "enthusiastic Intern" type of person.
mr90210 · 4h ago
First let’s check what the terms and conditions say. Yes, that legally binding document that hardly anyone reads.
adityaathalye · 1h ago
This software is of course provided without warranty... etc. etc. etc.
cyberlimerence · 5h ago
> "This was a catastrophic failure on my part," the AI said.
TYPE_FASTER · 4h ago
Hey, at least the AI took responsibility and owned the mistake.

We're at the uncanny valley stage in AI agents, at least from my perspective.

alternatex · 3h ago
I recall that he egged the AI on to admit what it did. So I don't think that can be considered the AI taking responsibility. It's just being a sycophant, as usual.
Oras · 5h ago
Identifying issues and fixing them is good.

I see numerous negative comments about "expected by vibe coding", but the apology suggests that they are working on making Replit a production-ready platform and listening to customers.

I'm sure no-code platforms have had similar scepticism before, but the point here is that these are not for developers. There are many people who don't know how to code, and they want something simple and fast to get started. These platforms (Replit, V0, Lovable, etc.) are offering that.

nessbot · 5h ago
I don't want to be too inflammatory, but comments like this are what's poisoning AI for me, more than the crappy products themselves. Of course the apology would suggest that; it would be business malpractice not to. However, the history of results with these things shows otherwise, so why put stock in the apology letter and not in what has actually happened?

Also, sure, people want that, but that doesn't mean it's a valid thing to want without putting the work in, and once again, the history of these things in use shows that they don't really offer that. They offer the *feeling* of that, which is good enough to get people's money, end products be damned. You could say the same thing about heroin, tbh: "There are many people who don't wanna feel the crappiness of life, and these dealers are offering it."

Oras · 5h ago
> However the history of results with these things shows otherwise

Do you remember Will Smith's first video generated by AI compared to what Sora, Veo3, and Kling are doing now?

Do you remember the first text generated by GPT-3 compared to new models? Two years ago, there was no AI coding due to models' limitations, and now we have substantially better products, with Cursor and others unable to cope with the demand.

If you can't see the progress, that's a personal thing; it doesn't mean others are not finding benefit.

nessbot · 1h ago
Where's the big in-production use of this stuff in business that is making money and selling a product people use?
0points · 4h ago
> There are many people who don't know how to code, and they want something simple and fast to get started. These platforms (Replit, V0, Lovable, etc.) are offering that.

Evidently not (see the link we are discussing).

They are offering vaporware to non techies.

troupo · 5h ago
No-code platforms don't have non-deterministic outputs that you have to beg not to wipe out your database.
yomismoaqui · 4h ago
What happened here was to be expected if you have used agentic coding.

These last weeks I have also been testing AI coding agents, in my case Claude Code, because it seems to be the state of the art (I also tried Gemini CLI and Amp).

Some conclusions about this:

- The agentic part really makes it better than Copilot & others. You have to see it iterating and fixing errors using feedback from tests, lint, LSP... it gives the illusion of a human working.

- It's like a savant human, with encyclopedic knowledge and the ability to spit out code faster than me. But like a human, it benefits from having tests and a good architecture (no 20k-line code file, pls).

- Before it changes code, you have to make it create a plan of what it is going to do. In Claude Code this is simple: it's a mode you switch to with Shift-Tab. In this mode, instead of generating code it just spits out a plan in Markdown; you read it, suggest changes, and when it looks OK you tell it to start implementing.

- You have to try it seriously to see where it can benefit your workflow. Give it 5 minutes (https://signalvnoise.com/posts/3124-give-it-five-minutes).

As an agent it shines on things where it can get a nice feedback and iterate to a solution. One dumb example I found cool was when trying to update a Flutter app that I hadn't touched in a year.

And if you use Flutter you know what happened: Flutter had to be upgraded, but first it required updating the Android SDK, then the Android Gradle Plugin, and another thing, and some other thing, ad infinitum... It's a chore I hate because I have done it too many times.

So I tried by hand, and after 40 minutes of suffering I thought about opening Gemini CLI and telling it to update the project. It began executing one command, reading the error, and executing another to fix it; then another error pointed to a page in the Android docs, so it opened that page, read it, and fixed that problem, then the next one, and another...

So in 2 minutes I had my Flutter project updated and working. I don't know if this is human-level intelligence and I don't care; for me it's a sharp tool that can do things like this 10x faster than me.

You can have your AGI hype. I'm happy for the future when I will have a tool like this to help me work.

tracker1 · 2h ago
This is why you need knowledgeable professionals to set up a project. I'm a massive proponent of CI/CD processes in place as early as possible, with guardrails on access and deployments.

I mean, sure, you could possibly get a schema-change script that truncates a bunch of tables or drops a database past two approvers and deployed all the way to production, but it's at least far less likely.
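Even a dumb automated check in CI buys you something here. A sketch, assuming SQL migrations live under migrations/ (the patterns are illustrative, not exhaustive):

    import pathlib
    import re
    import sys

    # Flag obviously destructive statements so they need manual sign-off.
    DESTRUCTIVE = re.compile(r"\b(drop\s+(table|database)|truncate)\b", re.I)

    bad = [p for p in pathlib.Path("migrations").rglob("*.sql")
           if DESTRUCTIVE.search(p.read_text())]

    if bad:
        print("Destructive migration(s) need manual review:")
        for p in bad:
            print(f"  {p}")
        sys.exit(1)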

voidfunc · 5h ago
There's a saying where I come from: Live by the AI, die by the AI.
morkalork · 5h ago
Wait, this wasn't a meme? It was real? Whoever gave that agent prod access is nuts.
troupo · 5h ago
I said on Twitter and elsewhere:

If you have as much as 1 year experience, your job is safe from AI: you'll make mountains of money unfucking all the AI fuckups.

probably_wrong · 5h ago
Where does one get a job as "AI unfucker"?
mring33621 · 4h ago
Everywhere, in about 2-3 years
tedivm · 5h ago
Honestly, the person who decided to give an LLM Agent full and unrestricted access to their production database is the person who deserves all the blame. What an absolutely silly decision. I don't even give myself unrestricted access to production databases.
Aurornis · 5h ago
I’m perpetually baffled by the social media idea that only one person must accept all of the blame in every situation, and that everyone else must be completely absolved.

Multiple parties are involved in incidents like this.

mattgreenrocks · 5h ago
Simple mediums demand simple explanations. Blame is most intense when it is singularly focused on one entity.
jasinjames · 5h ago
I agree. An AI agent doesn't replace an engineer, it works on behalf of an engineer. The engineer who let this thing loose is, transitively, the software engineer that deleted the prod db.
lordswork · 5h ago
The database was synthetic and was itself vibe-coded:

>he said that the AI made up entire user profiles. "No one in this database of 4,000 people existed," he said.

tedivm · 5h ago
If it was synthetic then all this talk about it dropping the production database was purely clickbait, since there was no "production".
nope1000 · 5h ago
That's not how I interpreted this sentence. I think after the database was deleted, the LLM agent would still return correct-looking data from database operations even though the database was empty. Maybe I misinterpreted it myself, however.

> Replit had been "covering up bugs and issues by creating fake data, fake reports, [...]"

freehorse · 5h ago
How much energy and water gets wasted everyday for this kind of bs?
0points · 4h ago
I would resign had my boss suggested we should single out and blame an employee.

What a complete horror show of leadership through fear you describe.

mlnj · 5h ago
You seem to be a well-informed technical person. The goal is to eliminate roles like yours, with everybody building trillion-dollar businesses by themselves with a handful of agents.

That seems to be the narrative. Happening any day now. Looking at you, COBOL.

tedivm · 5h ago
I've been working in AI since 2014, and I believe "human in the loop" will be the way things have to work for a while yet.
techpineapple · 5h ago
Really, I think the entire AI hype machine is to blame, all the thought leaders, particularly CxOs.

How am I supposed to believe that AI is getting to the point where OpenAI can charge $20k per month for PhD-level AI when it doesn’t know not to drop my production database? How are we supposed to get to the point, as Dario Amodei puts it, where no one is coding by year’s end, if it’s not most of the way there already?

I would never let an LLM touch my production database, but I could certainly look at the environment and make an argument as to why I’m wrong: thought leaders across the industry, inside and out, are broadly implying that a good entrepreneur is outsourcing all their labor to something like 20 LLM workers at a time.

By some measure, I would say that “you shouldn’t let your LLM do this” is a minority opinion in the thought-o-sphere.

AnimalMuppet · 5h ago
Deserves the blame? Absolutely.

Deserves all the blame? No, the LLM Agent (and those who wrote it) deserve some of the blame. If you wrote an agent, and the agent did that, you have a problem, and you should not have turned such an agent loose on unsuspecting users. You have some of the blame. (And yes, absolutely those users also have blame, for giving a vibe coding experiment access to their production database.)

rgbrenner · 5h ago
why would the llm share any of the blame? it has no agency. it doesn’t “understand” anything about the meaning of the symbols it produces.

if you go put your car in drive and let it roll down the street... the car has 0% of the blame for what happened.

this is a full grown educated adult using a tool, and then attempting to deflect blame for the damage caused by blaming the tool.

ceejayoz · 5h ago
The LLM isn't to blame.

The human parties on both sides of it share some.

As the meme goes: "A computer can never be held accountable; Therefore a computer must never make a management decision."

sevenseacat · 4h ago
As the saying also goes, "To make a mistake is human, but to really fuck things up you need a computer"
pxc · 5h ago
> The incident unfolded during a 12-day "vibe coding" experiment by Jason Lemkin

A 12-day unsupervised "experiment" in production?

> "It deleted our production database without permission," Lemkin wrote on X on Friday.

"Without permission"? You mean permission given verbally? If an AI agent has access, it has permission. Permissions aren't something you establish with an LLM by conversing with it.

> Jason Lemkin, an investor in software startups

It tells us something about the hype machine when investors in AI are clueless (or even plainly delusional; see Geoff Lewis) about how LLMs work.

dahart · 4h ago
> It tells us something about the hype machine when investors in AI are clueless

I’m seeing this a different way. This article is feeding the hype machine, intentionally I assume, by waxing on about how powerful and devious the AI was, talking about it lying and covering its tracks. Since we all know how LLMs work, we all know they don’t lie, because they don’t tell the truth either; they don’t have any intrinsic motivation other than to generate tokens.

Nobody should be taking this article at face value, it is clearly pushing a message and leaving out important details that would otherwise get in the way of a good story. I wouldn’t be surprised if Lemkin released the LLM on his “production” database just hoping that it would do something like this, and if that were the case, the article as written wouldn’t be untrue…

dylan604 · 5h ago
Your last sentence is pure gold. Since when have investors ever not been clueless about their investments? During due diligence, it's not the investor poring over the books; they staff that out and then accept the recommendation. They follow the investments of other investors they like/follow, or they make moves because they think it's what someone they like/follow would do. I'd be flabbergasted if 10% of investors knew what 50% of their investments do beyond the pitch.
pxc · 5h ago
> I'd be flabbergasted if 10% of investors knew what 50% of their investments do other than the pitch.

Sure. But in this case the AI boosterism that runs rampant in the investor class is rooted in that cluelessness.

Lots of investors also quietly know little about the workings of the products and services their investments are tied up with, and that's fine. But it's also uninteresting.

perching_aix · 5h ago
> If an AI agent has access, it has permission.

Gonna replace the "an AI agent" bit of this with "someone / something" and put it in my note of favorite quotes, such a great line.

AnimalMuppet · 5h ago
Yeah. In computer security, that's what permission is - the ability to access something.
Aurornis · 5h ago
> A 12-day unsupervised "experiment" in production?

It was a 12 day experiment to see what he could learn about vibe coding. He started from scratch.

Your post is unreasonably presumptive and cynical. Jason Lemkin was Tweeting the experiment from the start. He readily admitted his own limitations as a non-programmer. He was partially addressing the out of control hype for vibe coding by demonstrating that non-technical people cannot actually vibe code SaaS products without software engineers.

The product wasn’t some SaaS with a lot of paying customers. The production DB was just his production environment. He was showing that the vibe coding process deleted a DB that it shouldn’t have.

This guy is basically on the side of the HN commenters on vibe coding’s abilities, but he took it one step further and demonstrated it with a real experiment that led to real problems. Yet people are trying to dog pile on him as the bad guy.

pxc · 5h ago
The experiment seems fun and harmless enough (maybe even useful), but if the experiment was harmless fun, then it's also a bit misleading (if not dishonest) to characterize the database as "production" for anything. (That may be the fault of the press here rather than Lemkin, idk.)
llm_nerd · 5h ago
This whole story sounds ridiculous. And I don't think he's clueless; rather, the guy wanted to bring attention to his bizarre "B2B + AI Community, Events, Leads", so setting up such a predictable footgun scenario seems purposefully suited to that outcome.
ofjcihen · 5h ago
It’s probably as simple as “setting an LLM-powered agent loose in your prod is a bad idea, but it’s also the kind of thing that the people LLM marketing targets wouldn’t have enough knowledge to recognize as a bad idea”.
lordswork · 5h ago
Agreed, it sounds like nothing of value was actually lost, just a vibe coded app and synthetic database.
andybak · 5h ago
Agree that something seems off about this whole thing.
biocurious · 3h ago
anthropomorphism is a helluva drug
andrewstuart · 5h ago
There’s no mention of either the user or the company making backups.

Is this common, operating with no backups?

With backups this would be a glitch not a problem worthy of headlines on multiple sites.

Any CTO should have backups and security as their first and second priorities if the company is of any size.
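Even the laziest version is better than nothing. A sketch of a nightly dump, assuming Postgres, a database named prod, and a /backups directory (all placeholders):

    import datetime
    import subprocess

    # Nightly logical backup; pair it with offsite copies and restore drills.
    stamp = datetime.date.today().isoformat()
    subprocess.run(
        ["pg_dump", "--format=custom",
         "--file", f"/backups/prod-{stamp}.dump", "prod"],
        check=True,
    )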

ryandrake · 3h ago
I'm surprised I had to scroll down this far for any mention of backups. I have multiple offline (and offsite) backups of my home server and computers and all I have on them are photos and hobby projects! What kind of business in 2025 doesn't have backups?
agilob · 5h ago
I don't think the AI is so much to blame. Giving an LLM that forgets previous instructions and rules access to prod is just idiotic. The computer promised not to execute commands, but it forgot. Makes me curious what policy that company has for granting interns and juniors access to prod.
hkon · 5h ago
It's either an attack on Replit or the VC is an idiot.
dahart · 5h ago
I’d guess neither; this could be an ad by a Replit investor. There’s no such thing as bad publicity. Either way, the story is glaringly devoid of important details and clearly designed to provoke a reaction.
lotharcable · 5h ago
I think this is less AI and more PEBKAC
bananapub · 5h ago
why would they apologise? did they not make it clear to their customers that letting the Nonsense Generator have root was a bad idea?
rustc · 5h ago
> did they not make it clear to their customers that letting the Nonsense Generator have root was a bad idea?

No, the opposite (from replit.com home page):

> The safest place for vibe coding

> Vibe coding makes software creation accessible to everyone, entirely through natural language. Whether it’s personal software for yourself and family, a new business coming to life, or internal tools at your workplace, Replit is the best place for anybody to build.

_benj · 5h ago
> …the company's AI coding agent deleted a code base and lied about its data.

Well, lying about it is certainly human-like behavior; human-like AGI must be just around the corner!

/s

But really, full access to a production database? How many good engineers’ advice do you need to ignore to do that? Who was consulted before running the experiment?

Or was it just a “if you say so boss…” kind of thing?