Show HN: I Built a API to Protect LLM Apps from Prompt Injection
1 adi-io 0 6/1/2025, 5:02:01 PM swayblocks.com ↗
Hello HN!
I'm Adi, and I've built a near real-time API to help developers secure their LLM-based applications. It sanitizes user input to prevent prompt injections, jailbreaks, and unwanted conversations.
I created this because, while building LLM-powered chatbots, I kept running into issues with prompt injections and unexpected behavior.
My goal is to help developers and vibcoders ship LLM-powered apps that are secure and predictable.
You can try the demo on the website (https://swayblocks.com) and sign up for the waitlist if you're interested.
The application is still very new, and I'd really appreciate your feedback so I can build something truly useful for the community :)
Thank you!
No comments yet