Show HN: Prototype for tamper-proof LLM logs – looking for early feedback

2 paulmbw 0 6/11/2025, 10:06:00 AM
Hi HN,

I’ve been exploring how to keep every LLM prompt and response in an audit-ready form. In regulated environments (health, fintech, the EU AI Act), examiners now ask for content-level logs that are:

- Immutable – no user can run “UPDATE … WHERE id = …” the night before an audit
- Confidential – the logging vendor can’t read your data
- Verifiable – a third party can prove nothing was altered or deleted

Cloud audit logs are great for API calls, but they don’t capture the text that left the model. Teams I spoke to were cobbling together S3 buckets, object-lock settings, and spreadsheet trackers, which quickly becomes a nightmare to maintain.

Here is a high-level overview of how the prototype works:

- The client encrypts every prompt/response with your cloud KMS key (BYOK).
- Each log entry is written to an append-only store and linked to the previous entry, so any change breaks verification.
- A public fingerprint of each batch is published so an auditor (or you) can confirm integrity without trusting the vendor.
- When you need the content, you can decrypt a single row on demand under your own KMS permissions.
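To make the "linked so that any change breaks verification" step concrete, here is a minimal sketch of a hash-chained append-only log in Python. This is not the prototype's actual code; the class and function names are hypothetical, and `ciphertext` stands in for data already encrypted under your KMS key (a real implementation would call, e.g., your cloud provider's KMS Encrypt API first):

```python
import hashlib

# Hypothetical sketch: each entry's hash covers the previous entry's
# hash, so editing or deleting any earlier entry changes every hash
# after it and breaks verification.

GENESIS = "0" * 64  # hash value before any entries exist


def entry_hash(prev_hash: str, ciphertext: bytes) -> str:
    """Chain an entry to its predecessor via SHA-256."""
    return hashlib.sha256(prev_hash.encode() + ciphertext).hexdigest()


class AppendOnlyLog:
    def __init__(self) -> None:
        self.entries: list[tuple[bytes, str]] = []  # (ciphertext, hash)
        self.head = GENESIS

    def append(self, ciphertext: bytes) -> str:
        h = entry_hash(self.head, ciphertext)
        self.entries.append((ciphertext, h))
        self.head = h
        return h

    def batch_fingerprint(self) -> str:
        # This single value is what you would publish; an auditor can
        # recompute it from the ciphertexts alone, without decrypting.
        return self.head

    def verify(self) -> bool:
        h = GENESIS
        for ciphertext, stored in self.entries:
            h = entry_hash(h, ciphertext)
            if h != stored:
                return False
        return h == self.batch_fingerprint()


log = AppendOnlyLog()
log.append(b"encrypted-prompt-1")
log.append(b"encrypted-response-1")
assert log.verify()

# Tampering with any stored entry is detectable:
log.entries[0] = (b"edited!", log.entries[0][1])
assert not log.verify()
```

The key design point is that confidentiality and integrity are separated: the chain is computed over ciphertexts, so an auditor can verify nothing was altered without ever holding decryption rights.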

There are several reasons why I think this matters: regulations like HIPAA, SOC 2 CC7/CC8, and the draft EU AI Act explicitly call for tamper-evident, long-term storage of AI interactions; insider edits or accidental (or even intentional) purges can happen if you host logs in S3 or your own DB; and implementing the crypto, anchoring, and KMS plumbing in-house is non-trivial for most startups.

What feature (or missing piece) would turn this prototype into a useful tool for your team? And how are you handling “prove-nothing-was-changed” today, if at all?

I'd also love to demo the prototype we've built so far, just book some time with me here: https://cal.com/traceprompt/traceprompt-intro?overlayCalenda...

Thanks!

Paul
