Show HN: Promptproof – GitHub Action to test LLM prompts, catch bad JSON schemas
We kept breaking production with small prompt edits — suddenly outputs weren’t valid JSON, fields disappeared, or formats changed silently.
So we built Promptproof, a GitHub Action that runs in CI and blocks PRs when prompts produce invalid outputs.
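For context, here's roughly how it drops into a workflow. This is a minimal sketch: the action reference (geminimir/promptproof@v1), the config input, and the secret name are illustrative placeholders, not the exact published interface.

    name: promptproof
    on: [pull_request]
    jobs:
      promptproof:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Hypothetical action reference and inputs, for illustration only
          - uses: geminimir/promptproof@v1
            with:
              config: promptproof.yaml
            env:
              OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

If a check fails, the job fails and the PR is blocked until the prompt (or the expectation) is fixed.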
Features:
- Validates JSON output
- Enforces required keys & schemas (see the example config after this list)
- Runs fast in CI (no external infra)
- Works with OpenAI, Anthropic, and local models
- Adds PR comments so reviewers see failures immediately
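To make the rules concrete, a config might look something like this. The field names (checks, expect, required_keys, and so on) are assumptions for the sake of the example; the real schema keys may differ.

    # promptproof.yaml -- illustrative sketch, not the actual config format
    checks:
      - name: extract-invoice
        prompt: prompts/extract_invoice.txt
        provider: openai              # or anthropic, or a local model
        expect:
          format: json                # fail the PR if output isn't valid JSON
          required_keys: [invoice_id, total, currency]
          schema:
            total: number             # fail if a field's type drifts
            currency: string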
We’d love feedback: which rules or integrations would make this most useful for you?