Friends don't let friends run random untrusted code from the Internet. All code is presumed hostile until proven otherwise, even generated code. Giving an LLM write access to a production database is malpractice. On a long enough timeline, the likelihood of the LLM blowing up production approaches 1. This is the result you should expect.
Ecstatify · 51m ago
These AI-focused Twitter threads feel like they’re just recycling the same talking points for likes and retweets. When AI systems make mistakes, it doesn’t make sense to assign blame the way we would with human errors - they’re tools operating within their programming constraints, not autonomous agents making conscious choices.
mjr00 · 22m ago
> When AI systems make mistakes, it doesn’t make sense to assign blame the way we would with human errors - they’re tools operating within their programming constraints, not autonomous agents making conscious choices.
It's not really "assigning blame", it's more like "acknowledging limitations of the tools."
Giving an LLM or "agent" access to your production servers or database is unwise, to say the least.
ayhanfuat · 48m ago
I think at this point it's just rage-baiting. “AI wiped out my database”, “AI leaked my credentials”, “AI spent 2 million dollars on AWS”, etc. all generate engagement for these people.
phkahler · 10m ago
The message reads like "AI did this bad thing," but we should all see it as "another stupid person believed the AI hype and discovered it isn't trustworthy," or whatever. You usually don't see them admit "gee, that was dumb. What was I thinking?"
consumer451 · 55m ago
I use LLM dev tools, and even have Supabase MCP running. I love these tools. They let me create a SaaS product on my own that I had no chance of building otherwise as a long-out-of-practice dev.
However, these tools are nowhere near reliable enough for us to safely:
1. Connect an MCP to a production database
2. Use database MCPs without a --read-only flag set, even on non-prod DBs
3. Do any LLM-based dev on prod/main. This obviously also applies to humans.
It's crazy to me that basic workflows like this are not enforced by all these LLM tools, as they would save our mutual bacon. Are there any tools that do enforce these practices?
It feels like decision makers at these orgs are high on their own marketing, and are not putting necessary guardrails on their own tools.
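A minimal sketch of one such guardrail, enforced at the database rather than inside the LLM tooling (this assumes PostgreSQL and psycopg2; the host, database, role, and table names are placeholders, not anything from the thread): hand the agent a connection that defaults every transaction to read-only and is backed by a SELECT-only role, so generated SQL can't write regardless of what the model decides to do.

    # Sketch: a read-only connection for anything an LLM agent touches.
    # Assumes PostgreSQL + psycopg2; names and credentials are placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="db.internal.example",   # placeholder, never the prod primary
        dbname="app",
        user="llm_readonly",          # role granted SELECT only, no DML/DDL
        password="...",               # pulled from a secrets manager in practice
    )

    # Default every transaction on this connection to read-only.
    conn.set_session(readonly=True)

    with conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM users;")   # reads still work
        print(cur.fetchone())
        # Any INSERT/UPDATE/DELETE/DROP on this connection now raises
        # psycopg2.errors.ReadOnlySqlTransaction instead of touching data.

    # Note: a determined client can still SET transaction_read_only = off,
    # so the SELECT-only role is the part of the guardrail that actually holds.

The session flag is convenience; the least-privilege role is the real enforcement, which is the point the thread keeps circling back to.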
> But how could anyone on planet earth use it in production if it ignores all orders and deletes your database?
Someday we'll figure out how to program computers deterministically. But, alas.