Show HN: I made PromptMask, a local LLM-based privacy filter for cloud LLMs
It uses a trusted local LLM (via Ollama, llama.cpp, etc.) to intercept your prompt before it goes to a cloud service like OpenAI. It finds sensitive data and replaces it with semantic placeholders. Instead of a generic [REDACTED] that breaks context, it creates a map like {"John Doe": "${PERSON_1_NAME}"}.
Only the anonymized prompt is sent to the cloud. When the response returns, PromptMask uses the map to restore your original data, so that your secrets never leave your machine.
This is practical on consumer hardware because the local model's job is small: it only outputs a JSON map of secrets, not a full text rewrite. My benchmarks show even sub-1B parameter models are effective at this.
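The mask-then-restore round trip described above can be sketched in a few lines. This is an illustrative sketch, not PromptMask's actual API — the helper names are made up, and only the placeholder map format (`{"John Doe": "${PERSON_1_NAME}"}`) comes from the post:

```python
# Sketch of the mask -> send -> unmask round trip PromptMask describes.
# Helper names are illustrative, not the library's real API.

def mask(prompt: str, secret_map: dict[str, str]) -> str:
    """Replace each secret with its semantic placeholder before upload."""
    for secret, placeholder in secret_map.items():
        prompt = prompt.replace(secret, placeholder)
    return prompt

def unmask(response: str, secret_map: dict[str, str]) -> str:
    """Restore the original values in the cloud model's reply."""
    for secret, placeholder in secret_map.items():
        response = response.replace(placeholder, secret)
    return response

# The local model's only job is to emit a map like this:
secret_map = {"John Doe": "${PERSON_1_NAME}"}

masked = mask("Write an email to John Doe.", secret_map)
print(masked)  # Write an email to ${PERSON_1_NAME}.

# The masked prompt goes to the cloud; suppose the reply echoes the placeholder:
reply = "Dear ${PERSON_1_NAME}, thanks for reaching out."
print(unmask(reply, secret_map))  # Dear John Doe, thanks for reaching out.
```

Because the local model only has to produce that small JSON map rather than rewrite the whole prompt, the heavy lifting stays cheap.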
Two ways to integrate it into your workflow:
1. Python lib for developers: A drop-in replacement for the OpenAI SDK. `from promptmask import OpenAIMasked as OpenAI`
2. API Gateway for client-side apps: Run `promptmask-web` to get a local reverse-proxy endpoint (localhost:8000/v1/chat/completions) that secures requests from any OpenAI-compatible app.
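For the gateway mode, any OpenAI-compatible client can just be pointed at the local endpoint instead of api.openai.com. A minimal smoke test with curl might look like this (the model name and payload fields are illustrative, assuming the gateway forwards the standard chat-completions schema):

```shell
# Send a request through the local PromptMask gateway started by `promptmask-web`.
# Secrets in "content" are masked locally before the request leaves the machine.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Write an email to John Doe."}]
      }'
```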
I'd love to hear your feedback.
GitHub Repo: https://github.com/cxumol/promptmask (MIT Licensed)
Blog post "How Not to Give AI Companies Your Secrets", a deep dive into the hows and whys: https://xirtam.cxumol.com/promptmask-how-not-give-ai-secrets...