We Are Still Unable to Secure LLMs from Malicious Inputs
3 points by zdw on 8/27/2025, 9:49:03 PM (schneier.com ↗)
Comments (1)
simonw · 3h ago
Bruce Schneier:"We simply don’t know to defend against these attacks. We have zero agentic AI systems that are secure against these attacks. Any AI that is working in an adversarial environment—and by this I mean that it may encounter untrusted training data or input—is vulnerable to prompt injection. It’s an existential problem that, near as I can tell, most people developing these technologies are just pretending isn’t there."