I built VT Chat because I wanted AI conversations that stay completely private while having access to the latest models like Claude 4, O3, Gemini 2.5 Pro, and DeepSeek R1.
The key difference is true local-first architecture. Everything lives in your browser - chats stored in IndexedDB, zero server storage, and your API keys never leave your device. I literally can't see your data even as the developer.
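The browser-only persistence can be sketched with plain IndexedDB. This is a minimal illustration of the local-first pattern, not VT Chat's actual code: the database name, store name, and record shapes here are hypothetical.

```typescript
// Sketch of a local-first chat store. All names (ChatThread, "vt-chat-demo",
// etc.) are hypothetical — VT Chat's real schema may differ. The point is
// that threads are written only to the browser's IndexedDB, never a server.

interface ChatMessage {
    role: "user" | "assistant";
    content: string;
}

interface ChatThread {
    id: string;
    title: string;
    messages: ChatMessage[];
    updatedAt: number;
}

const DB_NAME = "vt-chat-demo"; // hypothetical database name
const STORE = "threads";

function openChatDb(): Promise<IDBDatabase> {
    return new Promise((resolve, reject) => {
        const req = indexedDB.open(DB_NAME, 1);
        req.onupgradeneeded = () => {
            // First run: create the object store, keyed by thread id.
            req.result.createObjectStore(STORE, { keyPath: "id" });
        };
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
    });
}

async function saveThread(thread: ChatThread): Promise<void> {
    const db = await openChatDb();
    await new Promise<void>((resolve, reject) => {
        const tx = db.transaction(STORE, "readwrite");
        tx.objectStore(STORE).put(thread);
        tx.oncomplete = () => resolve();
        tx.onerror = () => reject(tx.error);
    });
}

// Pure helper: append a message immutably and bump the timestamp.
function appendMessage(thread: ChatThread, msg: ChatMessage): ChatThread {
    return {
        ...thread,
        messages: [...thread.messages, msg],
        updatedAt: Date.now(),
    };
}
```

Because the database lives in the browser's origin-scoped storage, a different user profile (or a self-hosted instance) sees none of your threads.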
Switch between 15+ AI providers (OpenAI, Anthropic, Google, etc.) and compare responses side-by-side. Safe for shared machines, with complete data isolation between users. Local models via LM Studio, llama.cpp, and Ollama are on the roadmap.

Research features that I actually use daily: Deep Research does multi-step research with source verification, Pro Search integrates real-time web search, and AI Memory builds a personal knowledge base from your conversations. Also includes PDF processing, "thinking mode" to watch AI reasoning unfold, structured data extraction, and semantic routing that automatically activates the right tools.
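To make the "semantic routing" idea concrete, here is a toy keyword-based router. This is a stand-in for illustration only; the real router presumably classifies queries semantically (e.g. with embeddings or an LLM), and every name below is hypothetical.

```typescript
// Toy stand-in for semantic routing: pick a tool from the query text.
// VT Chat's actual router is assumed to be semantic, not keyword-based.

type Tool = "web-search" | "pdf-extract" | "chat";

function routeQuery(query: string): Tool {
    const q = query.toLowerCase();
    // Freshness cues suggest the real-time web search tool.
    if (/\b(latest|news|today|current)\b/.test(q)) return "web-search";
    // Mentions of a PDF or attached document suggest PDF processing.
    if (/\bpdf\b|attached document/.test(q)) return "pdf-extract";
    // Otherwise fall through to plain chat.
    return "chat";
}
```

The appeal of routing like this is that the user never picks a tool by hand; the query itself activates the right capability.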
Built with Next.js 14, TypeScript, and Turborepo. Fully open source for self-hosting. The hosted version is mostly free, with optional VT+ for premium models and advanced research capabilities.
Would love feedback on the local-first approach or any questions about the implementation.
ndgold · 7h ago
I’ve tried it and there’s a lot to like here. I can’t get it to tell me about a PDF yet though, just seeing it hang.
vinhnx · 7h ago
Thank you for trying it out. I will look into it asap. Would you mind sharing which model you chose? Or you can send me an email at hello@vtchat.io.vn and I will look into it. Thanks!
If that still doesn't work, please choose a Gemini model (Gemini 2.5/2.0 Flash) and check that you have set a Gemini API key under the API Keys section of the Settings page. PDF extraction requires a Gemini API key.
For now, PDF support only works with Gemini and Anthropic models. I will try to support more models soon.
App: https://vtchat.io.vn
Open Source: https://github.com/vinhnx/vtchat