Show HN: Provability Fabric – Proof-carrying behavior for AI agents
https://mateopetel.substack.com/p/provability-fabric-the-saf...
Most “AI safety” today is vibes. Prompts, heuristics, and dashboards don’t scale when agents can call tools, stream outputs, or mutate state.
Provability Fabric introduces a different contract:
Proof-carrying bundles (PAB-1.0): specs, Lean proofs, SBOM, and provenance in one signed package
ActionDSL → Lean 4: policies compile into proof obligations and runtime monitors (Lean sketch after this list)
Complete mediation: every effect passes through a Rust sidecar enforcing the same semantics the proofs assume
Egress certificates: every emission leaves with a signed verdict of pass, fail, or inapplicable (Rust sketch below)
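To make the ActionDSL → Lean step concrete, here is a minimal sketch of what a "stay under a spend budget" policy could compile down to. Everything below (Action, cost, WithinBudget) is an illustrative reconstruction, not the actual generated code:

    -- Hypothetical sketch: a spend-budget policy as it might look
    -- after compilation to a Lean 4 proof obligation.
    inductive Action where
      | call (tool : String) (cost : Nat)  -- a tool invocation with a cost
      | emit (bytes : Nat)                 -- an output emission

    -- Cost model: only tool calls spend budget.
    def cost : Action → Nat
      | .call _ c => c
      | .emit _   => 0

    -- Total cost of a trace of effects.
    def totalCost (trace : List Action) : Nat :=
      trace.foldl (fun acc a => acc + cost a) 0

    -- The obligation a bundle must discharge for its agent.
    def WithinBudget (budget : Nat) (trace : List Action) : Prop :=
      totalCost trace ≤ budget

    -- Example discharge: the empty trace satisfies any budget.
    theorem nil_within (budget : Nat) : WithinBudget budget [] :=
      Nat.zero_le budget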
The principle is simple:
Safety = correspondence between proofs and runtime enforcement.
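On the enforcement side, a minimal Rust sketch of the sidecar idea: every effect is mediated against the same budget semantics the obligation above assumes, and each one gets a verdict. Verdict, Policy, and Sidecar are hypothetical names, and certificate signing is elided:

    // Minimal sketch, not Provability Fabric's actual API.
    #[derive(Debug)]
    #[allow(dead_code)]
    enum Verdict {
        Pass,
        Fail(String),
        Inapplicable, // policy says nothing about this effect
    }

    struct Policy {
        budget: u64,
    }

    struct Sidecar {
        policy: Policy,
        spent: u64,
    }

    impl Sidecar {
        // Complete mediation: every effect is checked here, against the
        // same predicate (totalCost <= budget) the Lean proof discharges.
        fn mediate(&mut self, cost: u64) -> Verdict {
            match self.spent.checked_add(cost) {
                Some(total) if total <= self.policy.budget => {
                    self.spent = total;
                    Verdict::Pass
                }
                _ => Verdict::Fail("budget exceeded".to_string()),
            }
        }
    }

    fn main() {
        let mut sidecar = Sidecar { policy: Policy { budget: 100 }, spent: 0 };
        println!("{:?}", sidecar.mediate(60)); // Pass
        println!("{:?}", sidecar.mediate(60)); // Fail("budget exceeded")
    }

If the runtime check and the proved predicate ever diverge, that divergence is exactly the falsifiable gap the correspondence principle targets.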
I'd be interested in HN's view:
Should deployment of AI agents require proof-carrying bundles?
Where should falsifiability end — policy layer, runtime, or both?
What would make you trust an agent in production?