Show HN: Moss – AI-Powered Semantic Search Running In-Browser (No Cloud)

srimalireddi · 5/5/2025, 11:10:52 PM · twitter.com ↗
Hey HN,

We’ve been heads-down building MOSS, a semantic memory layer that brings AI-powered search and personalization fully on-device: no cloud, no network latency, and no data leaving the user’s device.

We just launched a live demo showing MOSS running entirely in-browser, performing fast semantic search over a local, in-browser vector DB. This unlocks a new class of privacy-first, hybrid AI experiences that work even without a server connection.
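To make the idea concrete, here's an illustrative TypeScript sketch (not MOSS's actual code; all names here are hypothetical) of the core operation an in-browser vector DB performs: brute-force cosine-similarity search over an in-memory store. A real engine replaces the linear scan with an ANN index (e.g. HNSW) to stay fast at scale:

```typescript
// Illustrative only — a minimal in-memory vector store of the kind
// an in-browser semantic search layer needs. Not MOSS's API.
type Doc = { id: string; text: string; vector: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

class LocalVectorStore {
  private docs: Doc[] = [];

  add(doc: Doc): void {
    this.docs.push(doc);
  }

  // Brute-force top-k search: score every doc, sort, slice.
  // Production engines use approximate indexes instead of this O(n log n) scan.
  search(queryVector: number[], k = 3): Doc[] {
    return [...this.docs]
      .sort((x, y) => cosine(queryVector, y.vector) - cosine(queryVector, x.vector))
      .slice(0, k);
  }
}
```

Since everything lives in a JS/WASM heap, the interesting engineering is keeping this fast and memory-bounded once you have tens of thousands of embeddings in a browser tab.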

If you’re curious about:

- how to run AI search right inside the browser

- the technical challenges behind on-device vector search

- why we believe the future is hybrid (edge + cloud)

… we’d love for you to check it out, break it down, and share your thoughts.

Early access sign-up (for devs + partners): https://form.typeform.com/to/hZKVLFKW

Let’s push on-device AI forward together!

Comments (4)

skilbjo · 4h ago
how are you thinking this will be deployed, and how will developers initially integrate it? i.e. is this a language-level library, or an internal service deployed in a docker container?

one idea is vitepress (vitepress.dev): it has a local search engine, and it would be cool to have moss integrated in that project, https://github.com/vuejs/vitepress

srimalireddi · 1h ago
We provide MOSS as a lightweight TypeScript/WASM library that developers can drop directly into their applications. It's designed for easy frontend integration - no backend services or Docker setups required. With just a few lines of setup, you can instantly index your multi-modal data and start running real-time semantic search entirely on-device.
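As a rough illustration of that "few lines of setup" pattern (the `SemanticIndex` API below is invented for this sketch, not MOSS's real interface, and the toy bigram-hash embedder stands in for a real on-device WASM embedding model):

```typescript
// Hypothetical API sketch — not MOSS's real interface.
type Embedder = (text: string) => number[];

// Toy embedder: hashes character bigrams into a fixed-size vector.
// A real library would run an actual embedding model via WASM here.
const embed: Embedder = (text) => {
  const v = new Array(64).fill(0);
  for (let i = 0; i < text.length - 1; i++) {
    const h = (text.charCodeAt(i) * 31 + text.charCodeAt(i + 1)) % 64;
    v[h] += 1;
  }
  return v;
};

class SemanticIndex {
  private entries: { id: string; vector: number[] }[] = [];
  constructor(private embedder: Embedder) {}

  // Embed and store a document — no backend round trip.
  index(id: string, text: string): void {
    this.entries.push({ id, vector: this.embedder(text) });
  }

  // Return the id of the best-matching document by dot-product score.
  query(text: string): string | undefined {
    const q = this.embedder(text);
    let best: string | undefined;
    let bestScore = -Infinity;
    for (const e of this.entries) {
      const score = e.vector.reduce((s, x, i) => s + x * q[i], 0);
      if (score > bestScore) { bestScore = score; best = e.id; }
    }
    return best;
  }
}
```

The appeal of this shape is that indexing and querying are both synchronous, local calls: no API keys, no network errors, and the data never leaves the page.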

We love the idea of integrating MOSS with VitePress - it's exactly the kind of high-performance, client-side experience where on-device semantic search could shine. If anyone here is connected with the VitePress maintainers or community, we'd appreciate an introduction! We'd be happy to collaborate or contribute if there’s interest in exploring an integration together.

user0x1d · 9h ago
who benefits from this?
srimalireddi · 1h ago
MOSS benefits any team or developer who wants to add "ask anything" semantic search and personalization without standing up extra backend services. That includes indie devs, small SaaS teams, mobile apps, and privacy-conscious or regulated products where sending data to the cloud is problematic. By bundling the entire stack - embeddings, vector store, and retrieval - into a lightweight on-device module, you avoid months of backend integration, reduce ongoing costs, and unlock private, instant UX even offline. Beyond search, we see this enabling per-user personalization without the privacy trade-offs of cloud-based systems.