Show HN: Arc OS – spec-first audit layer for GPT prompts (MD and self-check)

1 axezing121321 1 7/22/2025, 10:51:59 AM github.com ↗

Comments (1)

axezing121321 · 5h ago
I kept running into "prompt spaghetti": great model outputs but zero traceability. So I wrote a tiny spec that forces any LLM call to show its reasoning first.

What it looks like:

```
GOAL / CONTEXT / CONSTRAINTS
------------------------------
Premise 1
Premise 2
Rule applied
Intermediate deduction
Conclusion
------------------------------
SELF-CHECK → bias / loop / conflict flags
```
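For illustration, here is a minimal sketch of wrapping a task in that layout. The function name `build_audit_prompt` and the exact wording are my own; the real `yaml_template.yaml` in the repo may structure the sections differently.

```python
# Hypothetical sketch of the spec-first layout above; section names follow
# the post, everything else (function name, phrasing) is assumed.
def build_audit_prompt(goal: str, context: str, constraints: list[str]) -> str:
    lines = [
        f"GOAL: {goal}",
        f"CONTEXT: {context}",
        "CONSTRAINTS:",
        *[f"  - {c}" for c in constraints],
        "",
        "Show your reasoning as numbered premises, the rule applied,",
        "intermediate deductions, and a final conclusion.",
        "Then append a SELF-CHECK section flagging bias / loop / conflict.",
    ]
    return "\n".join(lines)

prompt = build_audit_prompt(
    goal="Summarize the incident report",
    context="Postmortem for a 2h outage",
    constraints=["cite only facts from the report", "max 5 bullet points"],
)
print(prompt)
```

The point is that the audit structure lives in the prompt itself, so any model's reply can be checked against the same section headings.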

How to try:
1. Download the release ZIP (link in post).
2. Copy `yaml_template.yaml`.
3. Paste it into ChatGPT (or any model) → you get an auditable logic tree.

Ask:
• Which failure modes am I missing?
• Would you integrate something like this into CI / prod pipelines?
• PRs with better examples or edge cases are very welcome.
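On the CI question: a minimal sketch of what a pipeline gate could look like, assuming the response is plain text with the section headings from the template. `audit_response` and `REQUIRED_SECTIONS` are hypothetical names, not part of the repo.

```python
import re

# Assumed section headings, taken from the template layout in the post.
REQUIRED_SECTIONS = ["GOAL", "CONTEXT", "CONSTRAINTS", "SELF-CHECK"]

def audit_response(text: str) -> list[str]:
    """Return the required sections missing from a model response.

    A section counts as present if its heading starts a line.
    """
    return [
        s for s in REQUIRED_SECTIONS
        if not re.search(rf"^{s}\b", text, re.MULTILINE)
    ]

sample = "GOAL: ...\nCONTEXT: ...\nCONSTRAINTS:\n- ...\nConclusion\nSELF-CHECK: none"
missing = audit_response(sample)
if missing:
    raise SystemExit(f"audit failed, missing sections: {missing}")
```

In CI you would run this over the model's output and fail the build when any heading is absent, which is the cheapest version of "the call must show its reasoning first".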

Thanks for looking!