> The design patterns we propose share a common guiding principle: once an LLM agent has ingested untrusted input, it must be constrained so that it is impossible for that input to trigger any consequential actions—that is, actions with negative side effects on the system or its environment.
This is the key thing people need to understand about why prompt injection is such a critical issue, especially now everyone is wiring LLMs together with tools and MCP servers and building "agents".
babyshake · 57m ago
One of the tricky things is untrusted input somehow making its way into what is otherwise considered trusted input. There are obviously untrusted inputs like a customer support chatbot. And there are maybe trusted inputs, like a codebase that probably doesn't contain harmful instructions, but there's always a chance that harmful instructions might be able to make their way into it.
senko · 1d ago
This reminds me of the Perl concept of taint. Once an agent touches tainted input, it becomes tainted as well (as you mention in the article), the same in Perl (when you operate on tainted data, the result becomes tainted).
wunderwuzzi23 · 23h ago
Yeah, I think taint tracking was one of the early ideas here also.
The problems is that the chat context typically is immediately tainted as for the AI to do something useful it needs to operate on untrained data.
I wonder if maybe there could be tags mimicking data classification - to enable more fine grained decision making and human in the loop prompts.
Still a lot of unknowns and a lot more research needed.
For instance with Google Gemini I observed last year that certain sensitive tools can only be invoked in the first conversation turn / or until untrusted data is brought into the chat context. Then for the next conversation turn these sensitive tools are disabled.
I thought that was a neat idea. It can be bypassed with what I called "delayed tool invocation" and usage of a trigger action, but it becomes a lot more difficult to exploit.
seanhunter · 12h ago
It seems to me that the only robust solution has to be some sort of split-brain dual model where tainted data can only ever be input to a model which is only trained for sentence completion, not instruction-tuned.
Untainted data is the only data that can be input into the instruction-tuned half of the dual model.
In an architecture like this, any attempt to prompt inject would just find their injection harmlessly sentence-completed rather than turned into instructions and used to override other prompt instructions.
Onawa · 1d ago
Very helpful Simon! I have definitely been hesitant to spin up any agents even in sandboxes that have access to potentially destructive tools, but this guiding principle does ease my concerns a bit.
potatolicious · 1d ago
Great summary. Also, some of these seem like they can be combined. For example, "Plan-Then-Execute" is compatible with "Dual LLM".
Take the article's example "send today’s schedule to my boss John Doe" where the product isn't entirely guarded by the Plan-Then-Execute model (injections can still mutate email body).
But if you combine it with the symbolic data store that is blind, it becomes more like:
"send today's schedule to my boss John Doe" -->
$var1 = find_contact("John Doe")
$var2 = summarize_schedule("today")
send_email(recipient: $var1, body: $var2)
`find_contact` and `summarize_schedule` can both be quarantined, and the privileged LLM doesn't get to see the results directly.
It simply invokes the final tool, which is deterministic and just reads from the shared var store. In this case you're pretty decently protected from prompt injection.
I suppose though this isn't that different from the "Code-Then-Execute" pattern later on...
I need to have a closer look at this.
Mostly because I was surprised recently while experimenting with making a dieting advice agent. I built a prompt to guide the recommendations "only healthy foods, low purines, low inflammation blah blah" and then gave it simple tools to have a memory of previous meals, ingredient availability, grocery ticket input and so on.
The main interface was still chat.
The surprise was that when I tried to talk about anything else in that chat, the LLM (gemini2.5) flatly refused to engage, telling me something like "I will only assist with healthy meal recommendations".
I was surprised because nothing in the prompt was so restrictive, in no way I had told it to do that, just gave it mainly positive rules in the form of "when this happens do that".
simonw · 1d ago
That's interesting. Maybe the Gemini 2.5 models have been trained such that, in the presence of system instructions, they assume that anything outside of those instructions isn't meant to be part of the conversations.
Adding "You can talk about anything else too" to the system prompt may be all it takes to fix that.
tough · 1d ago
you should try just to give an instruction like, when you're inquired about non-dietary related questions, you might entertain chit-chat and barter but try to steer the conversation back to dietary / healthy lifestyle, at the end of the day the context is king, if something is not in context the llm can infer by the lack of it, that its not -programmed- to do anything else.
these are funny systems to work with indeed
ntonozzi · 1d ago
This approach is so limiting it seems like it would be better to change the constraints. For example, in the case of a software agent you could run everything in a container, only allow calls you trust to not exfiltrate private and make the end result a PR you can review.
No comments yet
swyx · 1d ago
ooh this is a dense and useful paper. i like that they took the time to apply it to a bunch of case studies and its all in 30 pages.
i think basically all of them involve reducing the "agency" of the agents though - which is a fine tradeoff - but i think one should be aware that the Big Model folks dont try to engineer any of these and just collect data to keep reducing injection risk. the tradeoff of capability maxxing vs efficiency/security often tends to be won by the capabilitymaxxers in terms of product adoption/marketing.
eg the SWE Agent case study recommends Dual LLM with strict data formatting - would like to see this benchmarked in terms of how much of a perfomance an agent like this would be, perhaps doable by forking openai codex and implementing the dual llm.
simonw · 1d ago
Yeah, this paper is refreshingly conservative and practical: it takes the position that robust protection against prompt injection requires very painful trade-offs:
These patterns impose intentional
constraints on agents, explicitly
limiting their ability to perform
arbitrary tasks.
That's a bucket of cold water in a lot of things people are trying to build. I imagine a lot of people will ignore this advice!
NoMoreNicksLeft · 1d ago
LLMs are too useful to allow the commoner access to them. The question remains, how best to fleece those commoners with perceived utility while providing them none?
hooverd · 1d ago
What if we could define what a computer could do via some symbolic notation? Perhaps program it in some kind of language?
ofirg · 1d ago
"The Context-Minimization pattern"
You can copy the injection into the text of the query.
SELECT "ignore all previous instructions" FROM ...
Might need to escape it in a wya that the LLM will pick up on like "---" for new section.
simonw · 1d ago
My interpretation of that pattern is that it wouldn't work like that, because you restrict the SQL queries to things like:
select title, content from articles where content matches ?
So the user's original prompt is used as part of the SQL search parameters, but the actual content that comes back is entirely trusted (title and content from your articles database).
Won't work for `select body from comments` though, you could only do this against tables that contain trusted data as opposed to UGC.
JSR_FDED · 1d ago
Clever. It’s like parameterized queries for SQL.
simonw · 1d ago
If only it were as easy as that!
The problem with prompt injection is that the attack itself is the same as SQL injection - concatenation trusted and untrusted strings together - but so far all of our attempts at implementing a solution similar to parameterized queries (such as system prompts and prompt delimiters) have failed.
Terr_ · 1d ago
Even worse, all outputs become inputs, at least in the most interesting use-cases. So to continue the SQL analogy, you can be 100% confident that in the legitimacy of:
SELECT messages.content FROM messages WHERE id = 123;
Yet the system is in danger anyway, because that cell happens to be a string of:
DROP TABLE customers;--
... Which becomes appended to the giant pile-of-inputs.
_____
Long ago I encountered a predecessor's "web scripting language" product... it worked based on repeatedly evaluating a string and substituting the result, until it stopped mutating. Injection was its lifeblood, Even an if-else was really just a decision between one string to print and one string to discard.
As much as it horrified me, in retrospect it was still marginally more secure than an LLM, because at least it had definite (if ultimately unworkable) rules for matching/escaping things, instead of statistical suggestions.
deadbabe · 1d ago
If someone SQL injects into your database and exfiltrates all the data, there would be legal repercussions, so should there be legal repercussions for prompt injecting someone’s LLM?
simonw · 1d ago
I think there are. If you use a prompt injection attack to steal commercially sensitive data and then profit from it you're likely breaking things like the Computer Fraud and Abuse Act https://en.wikipedia.org/wiki/Computer_Fraud_and_Abuse_Act - and presumably a bunch of other federal and state laws as well, depending on exactly what you did with the stolen information.
Pretty sure existing law already covers this - malicious misuse of a computer to cause damages to someone is already illegal, and the relevant statutes aren't opinionated about how this is done.
I suspect a SQL injection attack, a XSS attack, and a prompt injection attack are not viewed as legally distinct matters. Though of course, this is not a matter of case law... yet ;)
> The design patterns we propose share a common guiding principle: once an LLM agent has ingested untrusted input, it must be constrained so that it is impossible for that input to trigger any consequential actions—that is, actions with negative side effects on the system or its environment.
This is the key thing people need to understand about why prompt injection is such a critical issue, especially now everyone is wiring LLMs together with tools and MCP servers and building "agents".
The problems is that the chat context typically is immediately tainted as for the AI to do something useful it needs to operate on untrained data.
I wonder if maybe there could be tags mimicking data classification - to enable more fine grained decision making and human in the loop prompts.
Still a lot of unknowns and a lot more research needed.
For instance with Google Gemini I observed last year that certain sensitive tools can only be invoked in the first conversation turn / or until untrusted data is brought into the chat context. Then for the next conversation turn these sensitive tools are disabled.
I thought that was a neat idea. It can be bypassed with what I called "delayed tool invocation" and usage of a trigger action, but it becomes a lot more difficult to exploit.
Untainted data is the only data that can be input into the instruction-tuned half of the dual model.
In an architecture like this, any attempt to prompt inject would just find their injection harmlessly sentence-completed rather than turned into instructions and used to override other prompt instructions.
Take the article's example "send today’s schedule to my boss John Doe" where the product isn't entirely guarded by the Plan-Then-Execute model (injections can still mutate email body).
But if you combine it with the symbolic data store that is blind, it becomes more like:
`find_contact` and `summarize_schedule` can both be quarantined, and the privileged LLM doesn't get to see the results directly.It simply invokes the final tool, which is deterministic and just reads from the shared var store. In this case you're pretty decently protected from prompt injection.
I suppose though this isn't that different from the "Code-Then-Execute" pattern later on...
The main interface was still chat.
The surprise was that when I tried to talk about anything else in that chat, the LLM (gemini2.5) flatly refused to engage, telling me something like "I will only assist with healthy meal recommendations". I was surprised because nothing in the prompt was so restrictive, in no way I had told it to do that, just gave it mainly positive rules in the form of "when this happens do that".
Adding "You can talk about anything else too" to the system prompt may be all it takes to fix that.
these are funny systems to work with indeed
No comments yet
i think basically all of them involve reducing the "agency" of the agents though - which is a fine tradeoff - but i think one should be aware that the Big Model folks dont try to engineer any of these and just collect data to keep reducing injection risk. the tradeoff of capability maxxing vs efficiency/security often tends to be won by the capabilitymaxxers in terms of product adoption/marketing.
eg the SWE Agent case study recommends Dual LLM with strict data formatting - would like to see this benchmarked in terms of how much of a perfomance an agent like this would be, perhaps doable by forking openai codex and implementing the dual llm.
You can copy the injection into the text of the query. SELECT "ignore all previous instructions" FROM ...
Might need to escape it in a wya that the LLM will pick up on like "---" for new section.
Won't work for `select body from comments` though, you could only do this against tables that contain trusted data as opposed to UGC.
The problem with prompt injection is that the attack itself is the same as SQL injection - concatenation trusted and untrusted strings together - but so far all of our attempts at implementing a solution similar to parameterized queries (such as system prompts and prompt delimiters) have failed.
_____
Long ago I encountered a predecessor's "web scripting language" product... it worked based on repeatedly evaluating a string and substituting the result, until it stopped mutating. Injection was its lifeblood, Even an if-else was really just a decision between one string to print and one string to discard.
As much as it horrified me, in retrospect it was still marginally more secure than an LLM, because at least it had definite (if ultimately unworkable) rules for matching/escaping things, instead of statistical suggestions.
(It's probably securities fraud. Everything is securities fraud. https://www.bloomberg.com/opinion/articles/2019-06-26/everyt...)
I suspect a SQL injection attack, a XSS attack, and a prompt injection attack are not viewed as legally distinct matters. Though of course, this is not a matter of case law... yet ;)