We Are Still Unable to Secure LLMs from Malicious Inputs

3 points by danaris | 8/27/2025, 1:05:45 PM | schneier.com ↗

Comments (1)

m-hodges · 7h ago
I think¹ we will always be unable to secure LLMs against malicious inputs unless we drastically limit the types of inputs LLMs can work with.

¹ https://matthodges.com/posts/2025-08-26-music-to-break-model...