Show HN: I'm trying to make it easier to run local LLMs directly in the browser
To simplify this, I've built two new model providers for the Vercel AI SDK that make it easy to use local and in-browser AI models through a unified API, so you get the full power of the AI SDK with models running entirely on the user's device.
It currently supports:
- Chrome/Edge Built-in AI: Leverages the experimental Prompt API in Chrome (Gemini Nano) and Edge (Phi-4-mini) for native performance, including support for multimodal inputs (images and audio), text embeddings, and structured output generation.
- WebLLM Integration: Runs popular open-source models like Llama 3 and Qwen directly in the browser via WebGPU (see the usage sketch after this list).
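Here's a rough sketch of what usage looks like. The import paths and provider factory names below are placeholders rather than the actual published package names, but the streamText call itself is the standard AI SDK API:

    // Sketch only: "@example/built-in-ai" and "@example/web-llm" are
    // placeholder package names, not the real ones.
    import { streamText } from "ai";
    import { builtInAI } from "@example/built-in-ai"; // hypothetical provider
    import { webLLM } from "@example/web-llm";        // hypothetical provider

    // Chrome/Edge built-in model via the Prompt API (Gemini Nano / Phi-4-mini)
    const result = streamText({
      model: builtInAI(),
      prompt: "Summarize this article in two sentences.",
    });

    for await (const chunk of result.textStream) {
      console.log(chunk);
    }

    // Same call, but with an open-source model running in-browser via WebLLM
    const webllmResult = streamText({
      model: webLLM("Llama-3.2-3B-Instruct"), // model id illustrative
      prompt: "Summarize this article in two sentences.",
    });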
The core idea is a seamless developer experience: you can use the same streamText, generateText, streamObject, and generateObject calls and the useChat hook from the Vercel AI SDK, and fall back to server-side models when the browser doesn't support on-device inference (a sketch of that fallback follows below).
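A minimal sketch of the fallback idea. The availability check mirrors Chrome's Prompt API (LanguageModel.availability()), but treat the exact detection logic and the builtInAI() provider as assumptions, not the library's documented API:

    // Sketch: prefer the on-device model when the browser supports it,
    // otherwise fall back to a hosted provider.
    import { generateText } from "ai";
    import { openai } from "@ai-sdk/openai";
    import { builtInAI } from "@example/built-in-ai"; // hypothetical provider

    async function pickModel() {
      const lm = (globalThis as any).LanguageModel; // Chrome's Prompt API global
      const supported = lm && (await lm.availability()) === "available";
      return supported ? builtInAI() : openai("gpt-4o-mini");
    }

    const { text } = await generateText({
      model: await pickModel(),
      prompt: "Explain WebGPU in one sentence.",
    });
    console.log(text);

Because both providers speak the AI SDK's model interface, the rest of the app doesn't need to know where inference actually happens.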
This is still in its early stages, and I would love your feedback, suggestions, and help improving it.