LLMs are not a security barrier. LLMs cannot be a security barrier. They cannot form part of a security barrier. You must place the security barrier between the LLM and the backend systems, the same as you would place it between your web or mobile app and your backend systems. Assume that if the LLM agent can use a service, the human interacting with the agent can also call that service with arbitrary parameters.
The tools you're providing to your LLM agent must never have privileges greater than those you intend to afford to the user who is prompting / interacting with the agent.
You want to use an LLM to make a customer service bot? Sure, you can do that. But that bot MUST NOT UNDER ANY CIRCUMSTANCES be allowed to perform any action you wouldn't let the customer do himself. If it can read your CRM, you need to scope that access to exactly the same access you'd be willing to give the customer directly. Can it cancel orders? That tool must not be able to cancel any order you wouldn't let the customer cancel himself through your app or website.
Don't treat an LLM as if it could replace a human customer service agent, or a human researcher, or a human underwriter, or a human manager. Never make the mistake of believing that the LLM, with any level of clever prompt engineering or attempts at input sanitization, will be "good enough" at not getting fooled. If you trust it with the keys to the kingdom, in the same way that you'd trust a human with those keys, it's a matter of when—not if—you're going to get pwn3d.
Of course, by the same principle, if your autonomous agent can access the web, you must assume that literally anyone on the internet can call any of that agent's tools with arbitrary parameters.
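As a concrete illustration, here's roughly what that scoping can look like in a tool definition. This is a minimal Python sketch, not a prescription; the endpoint, names, and token handling are all hypothetical:

    import requests

    ORDERS_API = "https://api.example.com/orders"  # hypothetical endpoint

    def cancel_order(order_id: str, user_token: str) -> dict:
        """Tool exposed to the agent. It acts as the user, never as a
        privileged service account, so the backend's normal authorization
        rules remain the real security barrier."""
        resp = requests.post(
            f"{ORDERS_API}/{order_id}/cancel",
            headers={"Authorization": f"Bearer {user_token}"},
            timeout=10,
        )
        resp.raise_for_status()  # backend refuses anything this user can't do
        return resp.json()

The point is that the agent never holds a credential more powerful than the user it serves; everything it attempts is checked by the backend exactly as if the user had clicked the button themselves.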
brabel · 2h ago
This should be obvious to anyone who has ever developed an AI application. How are these companies deploying LLMs that have access to their full CRM database and can just email it to anyone who asks nicely?! It truly is the '90s again.
nicce · 1h ago
Companies should think of the LLM as just a user interface operating against the backend; the same principles apply. But even today, with traditional user interfaces, some companies forget that the intended user interface is not the only part that needs to be secured.
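A minimal sketch of what I mean, from the backend's side (all names here are hypothetical):

    from dataclasses import dataclass, field

    @dataclass
    class Caller:
        user_id: str
        accessible_records: set[str] = field(default_factory=set)

    CRM_DB = {"rec-1": {"customer": "Alice"}}  # stand-in for the real store

    def get_crm_record(record_id: str, caller: Caller) -> dict:
        # The authorization check lives in the backend, not in whichever
        # UI made the call -- web app, mobile app, or LLM agent.
        if record_id not in caller.accessible_records:
            raise PermissionError(f"{caller.user_id} may not read {record_id}")
        return CRM_DB[record_id]

The same check fires no matter which "user interface" sent the request, which is the whole point.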
sublimefire · 3h ago
A lot of folks understood that LLMs are just systems that operate on fuzzy input, convert it, or derive possible actions from it, and that you need data to ground the results. This is what led to "actions" and "MCP" and the rest. But security, as always, was left standing at the bus stop as the bus pulled away. The complete absence of any security in these systems shows how much stupid money is in this business. Ignorance is bliss. Until someone finds a way to distribute worms through training data and breaks the world, not much will change.
cobbal · 3h ago
One of the downsides of the SQL injection analogy is that it suggests it's a fixable problem. To avoid SQL injection, you just need to be careful and quote your strings correctly.
AI is like SQL without quotation marks, though. You just have to assume the attacker gets complete DB access.
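To make the contrast concrete, a small sketch using the standard library's sqlite3 (the inputs are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    user_input = "x'; DROP TABLE users; --"

    # SQL injection is fixable: placeholders keep data and code separate.
    conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))

    # A prompt has no equivalent placeholder; instructions and data travel
    # in the same channel, so no amount of quoting separates them.
    prompt = f"Summarize this customer message: {user_input}"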
nicce · 2h ago
> AI is like SQL without quotation marks, though.
Or, with a broader definition: AI is like a Schrödinger's protocol, with rules and no rules at the same time. If you can't find the edges, you can't define the security. Nobody understands it completely, yet they try to constrain it.
bgwalter · 3h ago
And the perfectly interesting and on-topic article has been flagged off the front page.
brabel · 2h ago
What the hell? It was a very interesting post. I hope it comes back; it should be up there.
Mountain_Skies · 2h ago
After two decades of software development, I switched over to cybersecurity and did that for about five years. My general impression is that while most companies would be happy to prevent data breaches, what they're really after is giving themselves a shield against legal liability. Because of this, they're far more concerned with having the appearance of security than having actual security. Being able to dodge charges of negligence appears to be what they're paying for. If they can reduce headcount by using AI tools, even if the tools aren't all that effective, that will be a business win if it still provides the same legal liability shield.
Things might be different on the business secrets side of things but where I was on the PII/PCI data protection side of things, cybersecurity = legal risk management.
bgwalter · 3h ago
Fun article:
"Known vulnerabilities are showing up at alarming rates because of these tools. It's just getting worse due to vibe coding and AI coding assistants," he said. "If you wanted to know what it was like to hack in the '90s, now's your chance."
I agree with everything except I assume hacking was more fun in the 1990s.
"It's the '90s all over again," said Bargury with a smile. "So many opportunities."
Hacking has been democratized!
benmmurphy · 3h ago
The difference is that in the '90s everyone was ignorant. Beyond some small communities, I don't think many people had the knowledge to exploit a lot of these issues. But now the knowledge is very accessible, and there are a lot more people who are already capable of exploiting these issues without doing any further research.
neuroelectron · 4h ago
AI is ideal for cybersecurity, if it's implemented correctly. Unknown zero-days could be detected and prevented. Now, if you're plugging in an LLM with MCP, just give up. In fact, AI has been in security for a long time: ML classifiers for malware detection (static + behavioral), anomaly detection for network traffic and endpoint behavior, predictive models for fraud prevention in e-retail, automated correlation of SIEM events for professional review.
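For instance, a toy version of that anomaly-detection idea, using scikit-learn's IsolationForest over made-up network-flow features (illustrative only, not a real detector):

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # made-up flow features: [bytes transferred, duration in seconds]
    normal = rng.normal(loc=[500.0, 60.0], scale=[50.0, 5.0], size=(1000, 2))
    flows = np.vstack([normal, [[50000.0, 2.0]]])  # one exfiltration-like outlier

    model = IsolationForest(contamination=0.001, random_state=0).fit(flows)
    labels = model.predict(flows)  # -1 = anomaly, 1 = normal
    print(np.where(labels == -1)[0])  # should flag the outlier flow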
Then came LLMs, and now you have VCs trying to plug logs and Jira into their product... researchers say.
jsnider3 · 3h ago
> AI is ideal for cybersecurity, if it's implemented correctly.
I can tell you right now, that it's not going to be implemented correctly.
nicce · 1h ago
My code always has perfect security if I just write it correctly.
zwnow · 4h ago
Because LLMs work correctly 100% of the time. They would never lie to you or make stuff up (Replit).
I'm so tired of LLM enthusiasts not seeing the issues with them and trying to make them solve everything. We need another todo app, this time with LLMs though!!! Fuck the energy consumption of this technology. Private companies want to build nuclear power plants? What could go wrong? Y'all need a reality check.
Mountain_Skies · 2h ago
False positives undermine support for tools, LLM or not. I suspect that's what's going to kill off lots of these products, as humans endlessly chase after LLM hallucinations, like in the very early days of virus scanners that flagged every disk write as something that needed human approval. While it's possible AI security tools can discover patterns of infiltration behavior that a human would overlook, it does little good if the reports are buried in mountains of intrusion fantasies. But lots of money will be made before the decision makers realize that the vendor promises don't come to fruition and headcount has to increase to manage use of the tools.
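The base-rate arithmetic makes the point; these numbers are illustrative assumptions, not measurements:

    events_per_day = 1_000_000
    true_intrusions = 10        # assumed base rate
    detection_rate = 0.99       # assumed true-positive rate
    false_positive_rate = 0.01  # assumed

    true_alerts = true_intrusions * detection_rate
    false_alerts = (events_per_day - true_intrusions) * false_positive_rate
    precision = true_alerts / (true_alerts + false_alerts)
    print(f"{false_alerts:,.0f} false alerts/day, precision = {precision:.4%}")
    # ~10,000 false alerts for ~10 real intrusions: precision under 0.1%

Even a detector that's right 99% of the time drowns analysts when real intrusions are rare.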