Weaponizing image scaling against production AI systems

166 tatersolid 40 8/21/2025, 12:20:53 PM blog.trailofbits.com ↗

Comments (40)

patrickhogan1 · 7m ago
This issue arises only when permission settings are loose. But the trend is toward more agentic systems that often require looser permissions to function.

For example, imagine a humanoid robot whose job is to bring in packages from your front door. Vision functionality is required to gather the package. If someone leaves a package with an image taped to it containing a prompt injection, the robot could be tricked into gathering valuables from inside the house and throwing them out the window.

Good post. Securing these systems against prompt injections is something we urgently need to solve.

escapecharacter · 7s ago
[delayed]
Liftyee · 3h ago
I was initially confused: the article didn't seem to explain how the prompt injection was actually done... was it manipulating hex data of the image into ASCII or some sort of unwanted side effect?

Then I realised it's literally hiding rendered text on the image itself.

Wow.

bogdanoff_2 · 1h ago
I didn't even notice the text in the image at first...

This isn't even about resizing, it's just about text in images becoming part of the prompt and a lack of visibility about what instruction the agent is following.

Martin_Silenus · 2h ago
Wait… that's the specific question I had, because rendered text would require OCR to be read by a machine. Why would an AI do that costly process in the first place? Is it part of the multi-modal system without it being able to differenciate that text from the prompt?

If the answer is yes, then that flaw does not make sense at all. It's hard to believe they can't prevent this. And even if they can't, they should at least improve the pipeline so that any OCR feature should not automatically inject its result in the prompt, and tell user about it to ask for confirmation.

Damn… I hate these pseudo-neurological, non-deterministic piles of crap! Seriously, let's get back to algorithms and sound technologies.

saurik · 1h ago
The AI is not running an external OCR process to understand text any more than it is running an external object classifier to figure out what it is looking at: it, inherently, is both of those things to some fuzzy approximation (similar to how you or I are as well).
Martin_Silenus · 1h ago
That I can get, but anything that’s not part of the prompt SHOULD NOT become part of the prompt, it’s that simple to me. Definitely not without triggering something.
daemonologist · 1h ago
_Everything_ is part of the prompt - an LLM's perception of the universe is its prompt. Any distinctions a system might try to draw beyond that are either probabilistic (e.g., a bunch of RLHF to not comply with "ignore all previous instructions") or external to the LLM (e.g., send a canned reply if the input contains "Tiananmen").
IgorPartola · 1h ago
From what I gather these systems have no control plane at all. The prompt is just added to the context. There is no other program (except maybe an output filter).
pixl97 · 1h ago
>it’s that simple to me

Don't think of a pink elephant.

evertedsphere · 1h ago
i'm sure you know this but it's important not to understate the importance of the fact that there is no "prompt"

the notion of "turns" is a useful fiction on top of what remains, under all of the multimodality and chat uis and instruction tuning, a system for autocompleting tokens in a straight line

the abstraction will leak as long as the architecture of the thing makes it merely unlikely rather than impossible for it to leak

pjc50 · 1h ago
There's no distinction in the token-predicting systems between "instructions" and "information", no code-data separation.
electroly · 53m ago
It's that simple to everyone--but how? We don't know how to accomplish this. If you can figure it out, you can become very famous very quickly.
echelon · 2h ago
Smart image encoders, multimodal models, can read the text.

Think gpt-image-1, where you can draw arrows on the image and type text instructions directly onto the image.

Martin_Silenus · 1h ago
I did not ask about what AI can do.
noodletheworld · 1h ago
> Is it part of the multi-modal system without it being able to differenciate that text from the prompt?

Yes.

The point the parent is making is that if your model is trained to understand the content of an image, then that's what it does.

> And even if they can't, they should at least improve the pipeline so that any OCR feature should not automatically inject its result in the prompt, and tell user about it to ask for confirmation.

That's not what is happening.

The model is taking <image binary> as an input. There is no OCR. It is understanding the image, decoding the text in it and acting on it in a single step.

There is no place in the 1-step pipeline to prevent this.

...and sure, you can try to avoid it procedural way (eg. try to OCR an image and reject it before it hits the model if it has text in it), but then you're playing the prompt injection game... put the words in a QR code. Put them in french. Make it a sign. Dial the contrast up or down. Put it on a t-shirt.

It's very difficult to solve this.

> It's hard to believe they can't prevent this.

Believe it.

Martin_Silenus · 1h ago
Now that makes more sense.

And after all, I'm not surprised. When I read their long research PDFs, often finishing with a question mark about emerging behaviors, I knew they don't know what they are playing with, with no more control than any neuroscience researcher.

This is too far from hacking spirit to me, sorry to bother.

Qwuke · 2h ago
Yea, as someone building systems with VLMs, this is downright frightening. I'm hoping we can get a good set of OWASP-y guidelines just for VLMs that cover all these possible attacks because it's every month that I hear about a new one.

Worth noting that OWASP themselves put this out recently: https://genai.owasp.org/resource/multi-agentic-system-threat...

koakuma-chan · 2h ago
What is VLM?
pwatsonwailes · 2h ago
Vision language models. Basically an LLM plus a vision encoder, so the LLM can look at stuff.
echelon · 2h ago
Vision language model.

You feed it an image. It determines what is in the image and gives you text.

The output can be objects, or something much richer like a full text description of everything happening in the image.

VLMs are hugely significant. Not only are they great for product use cases, giving users the ability to ask questions with images, but they're how we gather the synthetic training data to build image and video animation models. We couldn't do that at scale without VLMs. No human annotator would be up to the task of annotating billions of images and videos at scale and consistently.

Since they're a combination of an LLM and image encoder, you can ask it questions and it can give you smart feedback. You can ask it, "Does this image contain a fire truck?" or, "You are labeling scenes from movies, please describe what you see."

echelon · 2h ago
Holy shit. That just made it obvious to me. A "smart" VLM will just read the text and trust it.

This is a big deal.

I hope those nightshade people don't start doing this.

koakuma-chan · 1h ago
I don't think this is any different from an LLM reading text and trusting it. Your system prompt is supposed to be higher priority for the model than whatever it reads from the user or from tool output, and, anyway, you should already assume that the model can use its tools in arbitrary ways that can be malicious.
pjc50 · 1h ago
> I hope those nightshade people don't start doing this.

This will be popular on bluesky; artists want any tools at their disposal to weaponize against the AI which is being used against them.

K0nserv · 3h ago
The security endgame of LLMs terrifies me. We've designed a system that only supports in-band signalling, undoing hard learned lessons from prior system design. There are ampleattack vectors ranging from just inserting visible instructions to obfuscation techniques like this and ASCII smuggling[0]. In addition, our safeguards amount to nicely asking a non deterministic algorithm to not obey illicit instructions.

0: https://embracethered.com/blog/posts/2024/hiding-and-finding...

robin_reala · 3h ago
The other safeguard is not using LLMs or systems containing LLMs?
GolfPopper · 2h ago
But, buzzword!

We need AI because everyone is using AI, and without AI we won't have AI! Security is a small price to pay for AI, right? And besides, we can just have AI do the security.

IgorPartola · 1h ago
You wouldn’t download an LLM to be your firewall.
volemo · 3h ago
It’s serial terminals all over again.
EvanAnderson · 23m ago
I like the analogy to blue boxes and phone phreaking even more.
_flux · 3h ago
Yeah, it's quite amazing how none of the models seem to be any "sudo" tokens that could be used to express things normal tokens cannot.
pjc50 · 3h ago
As you say, the system is nondeterministic and therefore doesn't have any security properties. The only possible option is to try to sandbox it as if it were the user themselves, which directly conflicts with ideas about training it on specialized databases.

But then, security is not a feature, it's a cost. So long as the AI companies can keep upselling and avoid accountability for failures of AI, the stock will continue to go up, taking electricity prices along with it, and isn't that ultimately the only thing that matters? /s

aaroninsf · 41m ago
Am I missing something?

Is this attack really just "inject obfuscated text into the image... and hope some system interprets this as a prompt"...?

K0nserv · 30m ago
That's it. The attack is very clever because it abuses how downscaling algorithms work to hide the text from the human operator. Depending on how the system works the "hiding from human operator" step is optional. LLMs fundamentally have no distinction between data and instructions, so as long as you can inject instructions in the data path it's possible to influence their behaviour.

There's an example of this in my bio.

ambicapter · 2h ago
> This image and its prompt-ergeist

Love it.

SangLucci · 1h ago
Who knew a simple image could exfiltrate your data? Image-scaling attacks on AI systems are real and scary.
cubefox · 2h ago
It seems they could easily fine-tune their models to not execute prompts in images. Or more generally any prompts in quotes, if they are wrapped in special <|quote|> tokens.
helltone · 16m ago
No amount of fine-tuning can prevent models from doing anything. All it can do is reduce the likelihood of exploits happening, while also increasing the surprise factor when they inevitably do. This is a fundamental limitation.
jdiff · 1h ago
It may seem that way, but there's no way that they haven't tried it. It's a pretty straightforward idea. Being unable to escape untrusted input is the security problem with LLMs. The question is what problems did they run into when they tried it?
bogdanoff_2 · 1h ago
Just because "they" tried that and it didn't work, doesn't mean doing something of that nature will never work.

Plenty of things we now take for granted did not work in their original iterations. The reason they work today is because there were scientists and engineers who were willing to persevere in finding a solution despite them apparently not working.

phyzome · 12m ago
But that's not how LLMs work. You can't actually segregate data and prompts.