I've been using JetBrains AI Assistant and liked the built-in integration.
But I found it frustrating that it doesn't use much of my project context — it often gives answers that don't "know" the codebase.
So I built RAGmate, a small open-source server that adds local RAG support to JetBrains AI Assistant.
What it does:
* Scans your project and builds a local vector index of your code
* When you ask questions in JetBrains, it injects relevant context into the LLM prompt
* Local-first — the index stays on your machine; the LLM backend is pluggable: OpenAI's API, LM Studio, Ollama, or any OpenAI-compatible endpoint
* Requires no plugins or cloud syncing
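To make the idea concrete, here's a toy sketch of the index-then-retrieve-then-prompt loop described above. It uses a simple bag-of-words scorer purely for illustration — RAGmate itself presumably uses real embeddings and a vector store, and all names here (`build_index`, `retrieve`, `build_prompt`) are hypothetical, not RAGmate's actual API:

```python
import math
import re
from collections import Counter


def tokenize(text):
    # crude tokenizer: lowercase identifiers/words of 2+ chars
    return re.findall(r"[a-zA-Z_]\w+", text.lower())


def build_index(files):
    # files: {path: source text} -> list of (path, token counts)
    # stands in for "scan the project and build a local index"
    return [(path, Counter(tokenize(src))) for path, src in files.items()]


def score(query_tokens, doc_counts):
    # overlap between query terms and a file, length-normalized
    overlap = sum(doc_counts[t] for t in query_tokens)
    norm = math.sqrt(sum(c * c for c in doc_counts.values())) or 1.0
    return overlap / norm


def retrieve(index, query, k=2):
    # rank files by relevance to the question, keep the top k
    q = tokenize(query)
    ranked = sorted(index, key=lambda item: score(q, item[1]), reverse=True)
    return [path for path, _ in ranked[:k]]


def build_prompt(files, paths, question):
    # inject the retrieved snippets into the LLM prompt
    context = "\n\n".join(f"# {p}\n{files[p]}" for p in paths)
    return f"Context from the project:\n{context}\n\nQuestion: {question}"


files = {
    "auth.py": "def login(user, password): ...",
    "db.py": "def connect(dsn): ...",
}
index = build_index(files)
top = retrieve(index, "how does login work?", k=1)
print(build_prompt(files, top, "how does login work?"))
```

A real implementation would swap the scorer for an embedding model and a vector index, but the shape of the pipeline is the same.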
It’s essentially a way to turn JetBrains into a smarter AI assistant, aware of your actual codebase.
I’m not sure who else runs into this issue — if you've tried AI features in IDEs and found them context-blind, I'd love your feedback or ideas.