Show HN: Nia – MCP server that gives more docs and repos to coding agents
Coding agents generate code well but lose accuracy when the answer lives outside the repo in front of them. Developers end up pasting GitHub links, docs, and blog posts by hand and hoping the agent scrolls far enough. Long context windows help, but recent “context rot” measurements show quality still drops as prompts grow. For example, in LongMemEval, all models scored much higher on focused (short, relevant) prompts (~300 tokens) than on full (irrelevant, 113k tokens) prompts, with performance gaps persisting even in the latest models (https://research.trychroma.com/context-rot).
Nia is an MCP server that gives more context to any coding agent or IDE. It indexes multiple repos and docs sites and exposes them via MCP to your coding agent, so it has much more context to work with and gives you more specific, accurate answers.
Nia uses a hybrid code search architecture that combines graph-based structural reasoning with vector-based understanding. When a repo or documentation site is ingested, Tree-sitter parses it into ASTs across 50+ programming and natural languages, and the code is chunked by function/class boundaries into stable, content-addressable units. These chunks are stored both in a graph database, to model relationships like function calls and class inheritance, and in a vector store. At query time, a lightweight agent with a give_weight tool dynamically assigns weights between graph and vector search based on intent (e.g., "who calls X" vs. "how does auth work"), and both paths are searched in parallel. Results are fused, enriched with full code context, and passed through multi-stage rerankers: a semantic reranker, cross-encoders, and LLM-based validators.
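To make the query-time flow concrete, here is a minimal sketch of that weighting-and-fusion step. Everything below is illustrative: the function names, the structural-keyword heuristic, and the hardcoded search results are assumptions, not Nia's actual implementation.

```python
# Hypothetical sketch of intent-weighted hybrid retrieval: a router
# assigns weights to the graph vs. vector paths, both run in parallel,
# and results are fused by weighted score before reranking.
from concurrent.futures import ThreadPoolExecutor

def give_weight(query: str) -> tuple[float, float]:
    """Toy intent router: structural queries favor the graph path."""
    structural = ["calls", "callers", "inherits", "imports", "depends"]
    if any(word in query.lower() for word in structural):
        return 0.8, 0.2   # (graph_weight, vector_weight)
    return 0.3, 0.7

def graph_search(query: str) -> dict[str, float]:
    # Stand-in for a graph-database traversal over call/inheritance edges.
    return {"auth.login": 0.9, "auth.check_token": 0.7}

def vector_search(query: str) -> dict[str, float]:
    # Stand-in for a similarity lookup in a vector store.
    return {"auth.login": 0.6, "docs/auth.md#flow": 0.8}

def hybrid_search(query: str) -> list[tuple[str, float]]:
    gw, vw = give_weight(query)
    # Run both retrieval paths concurrently.
    with ThreadPoolExecutor() as pool:
        g = pool.submit(graph_search, query)
        v = pool.submit(vector_search, query)
        graph_hits, vector_hits = g.result(), v.result()
    # Fuse: weighted sum of per-path scores per chunk.
    fused: dict[str, float] = {}
    for chunk, score in graph_hits.items():
        fused[chunk] = fused.get(chunk, 0.0) + gw * score
    for chunk, score in vector_hits.items():
        fused[chunk] = fused.get(chunk, 0.0) + vw * score
    # A real system would now enrich with full code context and rerank
    # (semantic reranker, cross-encoders, LLM validators).
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```

For a structural query like "who calls auth.login", the router shifts weight to the graph path, so graph hits dominate the fused ranking.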
Early signal: in internal evals we improved Cursor's performance by 27% once Nia had indexed external docs the models couldn't get from their training data or from searching the web.
Quickstart: <https://www.youtube.com/watch?v=5019k3Bi8Wo> Demo: <https://www.youtube.com/watch?v=Y-cLJ4N-GDQ>
To try it out: grab an API key at https://app.trynia.ai/ and follow instructions at https://docs.trynia.ai/integrations/nia-mcp.
Try it and break it! I’d love to know which contexts your agent still misses. Corner cases, latency issues, scaling bugs. I’m here 24/7.
Thanks!
I’ve had generally good results with this approach (I’m on project #3 using this method).
Give Nia a try on any docs, very curious to hear your feedback
What external docs do you have access to that aren't found on the web?
LLMs and coding agents have general knowledge but they mostly give outdated info, even when asked to search on the web.
I believe right now you're requiring us to do the scraping/adding?
Nia already supports that. Just take the link, e.g. https://mintlify.com/docs, and ask to index it (it will crawl every subpage reachable from the root link you specify).
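The "crawl every subpage under a root link" behavior roughly means following only links that stay under the root URL. A minimal sketch of that filtering step, using only the standard library; Nia's actual crawler isn't public, so all names here are hypothetical:

```python
# Given the HTML of one page, resolve its links and keep only those
# that stay under the root URL (so the crawl never leaves the docs site).
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def subpage_links(root: str, page_url: str, html: str) -> list[str]:
    parser = LinkExtractor()
    parser.feed(html)
    keep = []
    for href in parser.links:
        # Resolve relative links against the current page, drop fragments.
        absolute = urljoin(page_url, href).split("#")[0]
        if absolute.startswith(root):
            keep.append(absolute)
    return keep
```

A crawler would then BFS: fetch the root, extract subpage links, enqueue any it hasn't seen, and repeat.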
- Nia can do deep research across any docs / codebase and then find any relevant links or repos to index.
- It also supports both private and public repos :)
Let me know about your experience with Context7 (if you used it) and which docs you used?
At this time I can't even think about using the tool until I know what you are doing with my information and who owns or has access to it.
edit: it is on the website now. Forgot to add it, my bad.
btw, I am working on allowing users to index their local files and fully store it locally! will update you on that
Especially please don't do this in Show HN threads, which have extra rules to forbid this kind of thing: https://news.ycombinator.com/showhn.html.