Show HN: Compliant-LLM: Audit AI Agents for Compliance with NIST AI RMF

11 kaushik92 4 5/29/2025, 2:52:29 PM github.com ↗
We're excited to launch compliant-llm: an open-source toolkit that helps infosec and compliance teams audit AI agents against regulatory frameworks like NIST AI RMF, ISO 42001, and OWASP Top 10.

Infosec and compliance teams are now responsible for tracking security and compliance risks of a growing number of AI agents across external and internal apps and third-party vendors.

compliant-llm gives you a way to:

- Define and run comprehensive red-teaming tests for AI agents - Maps test outcomes to compliance frameworks like NIST AI RMF - Generate detailed audit logs and documentation - Integrate with Azure, OpenAI, Anthropic, or wherever you host your models - With an open-source, self-hosted solution

Install and launch the red-teaming dashboard locally:

  pip install compliant-llm
  compliant-llm dashboard
This opens an interactive UI for running AI compliance checks and analyzing results.

We’re at v0.1, and would love your feedback. Tell us about the compliance or AI risk issues you’re facing, and we’ll prioritize what matters most.

Comments (4)

aavci · 16h ago
This is pretty neat. All serious products get a lot of questions about prompt vulnerabilities that this would address.
praveenkumarnew · 16h ago
Sounds interesting will try it out for my subsea work
nikhil896 · 12h ago
This is super useful!
andrewski77 · 14h ago
this is super cool!