Problem: Our security team was drowning in thousands of cloud alerts but still guessing what to fix first.
What we built: AdversAI connects to AWS / Azure / GCP with read-only APIs (no agents). It builds a graph that links code-scan issues, posture gaps, runtime events, and SIEM logs (rough sketch below), then runs a multi-LLM pipeline to:
• answer plain-English questions (“Why is GuardDuty screaming?”)
• return the root cause + a one-liner CLI remediation
• optionally spin up a sandboxed lab so you can replay the exploit and patch.
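To make the graph part concrete, here's a minimal sketch (plain Python; node kinds, IDs, and attributes are invented for the example, not our real schema) of how a GuardDuty alert, a posture gap, and the IaC finding behind it can hang off a single resource node and get pulled back together as context for the LLM pipeline:

    # Illustrative only -- the node/edge model here is a toy, not AdversAI's schema.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        id: str
        kind: str      # e.g. "resource", "siem_alert", "posture_gap", "code_finding"
        attrs: dict = field(default_factory=dict)

    class Graph:
        def __init__(self):
            self.nodes: dict[str, Node] = {}
            self.edges: dict[str, set[str]] = {}

        def add(self, node: Node) -> None:
            self.nodes[node.id] = node
            self.edges.setdefault(node.id, set())

        def link(self, a: str, b: str) -> None:
            self.edges[a].add(b)
            self.edges[b].add(a)

        def context_for(self, alert_id: str) -> list[Node]:
            """Collect every node reachable from an alert -- roughly the
            context assembled before any model sees the question."""
            seen, stack = set(), [alert_id]
            while stack:
                nid = stack.pop()
                if nid in seen:
                    continue
                seen.add(nid)
                stack.extend(self.edges.get(nid, ()))
            return [self.nodes[n] for n in seen if n != alert_id]

    # "Why is GuardDuty screaming?" -- the alert, the misconfig, and the Terraform behind it
    g = Graph()
    g.add(Node("bucket-1", "resource", {"arn": "arn:aws:s3:::acme-logs"}))
    g.add(Node("gd-123", "siem_alert", {"source": "GuardDuty", "type": "Exfiltration:S3"}))
    g.add(Node("cfg-42", "posture_gap", {"rule": "s3-bucket-public-read-prohibited"}))
    g.add(Node("iac-7", "code_finding", {"file": "s3.tf", "issue": 'acl = "public-read"'}))
    for finding in ("gd-123", "cfg-42", "iac-7"):
        g.link(finding, "bucket-1")

    for node in g.context_for("gd-123"):
        print(node.kind, node.attrs)

    # A remediation one-liner for this chain could look like:
    #   aws s3api put-public-access-block --bucket acme-logs \
    #     --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

The actual graph schema, and how it feeds the multi-LLM orchestration, is what the white paper covers.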
New today: We’ve published a 5-page technical white paper describing the graph model and LLM orchestration, and put up a minimal landing page plus a live demo dataset.
• White paper (PDF): https://adversai.com/files/whitepaper_security.pdf
• Live demo / docs: https://adversai.com
Looking for feedback on the architecture and edge cases. Happy to answer anything here.
– Alex