Show HN: DeepTeam – Open-Source Red-Teaming Framework for LLM Security

Hi HN, I’m part of the Confident AI team and we’re excited to share DeepTeam, an open-source framework that makes it trivial to penetration-test your LLM applications for 40+ security and safety risks. It has gained 400 stars on GitHub over the last month, and we’d love your feedback!

Quick Introduction

- Detect vulnerabilities such as bias, misinformation, PII leakage, over-reliance on context, harmful content, and more
- Simulate adversarial attacks with 10+ methods (jailbreaks, prompt injection, ROT13, automated evasion, data extraction, etc.)
- Customize assessments to OWASP Top 10 for LLMs, NIST AI Risk Management, or your own security guidelines
- Leverage DeepEval under the hood for robust metric evaluation, so you can run both regular and adversarial tests
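
To make the list above concrete, here is a minimal sketch of what a run might look like in Python. It follows the quickstart pattern in the repo's README; the names red_team, Bias, and PromptInjection are taken from that pattern and may differ between versions, and model_callback is a stand-in for your own LLM application.

    # Sketch based on the README quickstart; verify the names against the
    # version of deepteam you install.
    from deepteam import red_team
    from deepteam.vulnerabilities import Bias
    from deepteam.attacks.single_turn import PromptInjection

    async def model_callback(input: str) -> str:
        # Replace this stub with a call to your actual LLM application.
        return f"I'm sorry, but I can't answer this: {input}"

    risk_assessment = red_team(
        model_callback=model_callback,
        vulnerabilities=[Bias(types=["race"])],
        attacks=[PromptInjection()],
    )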

Getting Started

    # Create and activate a virtual environment
    python3 -m venv venv && source venv/bin/activate

    # Install DeepTeam
    pip install -U deepteam

    # Clone the repo and run the example
    git clone https://github.com/confident-ai/deepteam.git
    cd deepteam
    python examples/red_teaming_example.py

In seconds you’ll see a pass/fail breakdown for each vulnerability along with detailed test-case output. You can convert the results to a pandas DataFrame or save them for downstream analysis.
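
The exact shape of the returned risk assessment can change between releases, so treat the following as a sketch only: it assumes the object exposes a list of test cases with per-case fields such as the vulnerability name, attack method, and a score (illustrative attribute names, not a documented API) and flattens them into a pandas DataFrame.

    import pandas as pd

    def results_to_df(risk_assessment) -> pd.DataFrame:
        # Attribute names below are assumptions for illustration; check the
        # printed test-case output for the fields your version actually exposes.
        rows = []
        for tc in risk_assessment.test_cases:
            rows.append({
                "vulnerability": tc.vulnerability,
                "attack_method": tc.attack_method,
                "input": tc.input,
                "output": tc.actual_output,
                "passed": tc.score >= 0.5,  # assumed pass threshold
            })
        return pd.DataFrame(rows)

    # df = results_to_df(risk_assessment)
    # df.to_csv("red_team_results.csv", index=False)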

Why DeepTeam?

Most LLM safety tooling focuses on known benchmarks. DeepTeam dynamically simulates attacks at runtime, so you catch novel, real-world threats and can track improvements over time via reusable attack suites.
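
If you wire this into CI, one simple pattern (not a built-in DeepTeam feature, just a sketch building on the hypothetical results_to_df helper above) is to fail the build when the overall pass rate drops below a threshold and archive the per-run results as a build artifact for tracking over time.

    import sys
    import pandas as pd

    def ci_gate(df: pd.DataFrame, min_pass_rate: float = 0.90) -> None:
        # df is the DataFrame from results_to_df above; "passed" is the
        # illustrative boolean column defined there.
        pass_rate = df["passed"].mean()
        print(f"Red-team pass rate: {pass_rate:.1%} (threshold {min_pass_rate:.0%})")
        if pass_rate < min_pass_rate:
            sys.exit(1)  # a non-zero exit marks the CI job as failed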

We’d love to hear:

- Which vulnerabilities you worry about most
- How you integrate red-teaming into your CI/CD pipelines
- Feature requests, contributions, and your toughest jailbreak stories

<https://github.com/confident-ai/deepteam>
