Over the past ~3 months we’ve been building PrivGuard, an AI prompt & data leakage scanner. The goal is simple: catch sensitive data leaks, injection attempts, and bad patterns before they hit production.
Stack is Prisma + Supabase, quad-LLM orchestration, Vercel Pro hosting — so it’s already closer to production-grade than MVP scaffolding.
To keep the demo useful but not abusable, we limited it to 5 prompts unless you “reserve your place” (early access list).
I’d love feedback from this community on:
• Whether you see this type of tool becoming part of the LLM security stack.
• What features/reporting you’d expect beyond just flagging prompts.
• And of course — if you find ways to break/bypass it, that’s even better.
Stack is Prisma + Supabase, quad-LLM orchestration, Vercel Pro hosting — so it’s already closer to production-grade than MVP scaffolding.
To keep the demo useful but not abusable, we limited it to 5 prompts unless you “reserve your place” (early access list).
https://privguard.io
I’d love feedback from this community on: • Whether you see this type of tool becoming part of the LLM security stack. • What features/reporting you’d expect beyond just flagging prompts. • And of course — if you find ways to break/bypass it, that’s even better.