We published "Prompt Injection 2.0: Hybrid AI Threats" on arXiv and released our Prompt Injector tool as open source.
Key findings from the research:
- Modern prompt injection attacks now combine with traditional web vulnerabilities (XSS, CSRF) to create hybrid threats
- Traditional security controls (WAFs, input sanitization) fail against AI-enhanced attacks
- Agentic AI systems create new attack surfaces
- New taxonomy for prompt injections
Open Source Tool:
- Prompt Injector v1 now available under Apache 2.0 license
- Desktop app for testing AI systems against prompt injection attacks
- Supports OpenAI, Anthropic, Google, Grok, and Ollama models
- 150+ payloads
- GitHub: https://github.com/preambleai/prompt-injector
Background: We first discovered prompt injection vulnerabilities in GPT-3 back in May 2022 and responsibly disclosed to OpenAI. This new research shows how the threat landscape has evolved with agentic AI systems.
Key findings from the research: - Modern prompt injection attacks now combine with traditional web vulnerabilities (XSS, CSRF) to create hybrid threats - Traditional security controls (WAFs, input sanitization) fail against AI-enhanced attacks - Agentic AI systems create new attack surfaces - New taxonomy for prompt injections
Open Source Tool: - Prompt Injector v1 now available under Apache 2.0 license - Desktop app for testing AI systems against prompt injection attacks - Supports OpenAI, Anthropic, Google, Grok, and Ollama models - 150+ payloads - GitHub: https://github.com/preambleai/prompt-injector
Background: We first discovered prompt injection vulnerabilities in GPT-3 back in May 2022 and responsibly disclosed to OpenAI. This new research shows how the threat landscape has evolved with agentic AI systems.