Show HN: Nia – MCP server that gives more docs and repos to coding agents

17 points by jellyotsiro | 7/24/2025, 3:05:12 PM | trynia.ai | 24 comments
Hi HN, I’m Arlan, and I built Nia (https://www.trynia.ai), an open MCP server that integrates with coding agents like Cursor, Continue, and Cline so they can retrieve external knowledge better than they do with current approaches.

Coding agents generate code well but lose accuracy when the answer lives outside the repo in front of them. Developers end up pasting GitHub links, docs, and blog posts by hand and hoping the agent scrolls far enough. Long context windows help, but recent “context rot” measurements show quality still drops as prompts grow. For example, in LongMemEval, all models scored much higher on focused prompts (~300 tokens of relevant context) than on full prompts (~113k tokens, largely irrelevant), with performance gaps persisting even in the latest models (https://research.trychroma.com/context-rot).

Nia is an MCP server that gives more context to any coding agent or IDE. It indexes multiple repos and docs sites and makes them available over MCP to your coding agent, so it has much more context to work with and gives you more specific and accurate answers.
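
To make that concrete, here is a minimal sketch (not Nia's actual code) of how an indexed knowledge source could be exposed to an agent as an MCP tool, using the FastMCP helper from the official Python MCP SDK. The server name, tool name, and in-memory index below are placeholders I'm assuming for illustration.

```python
# Illustrative sketch only: exposing indexed docs to a coding agent over MCP.
# Server name, tool name, and the toy index are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("nia-docs-sketch")

# Stand-in for the real index (graph database + vector store).
TOY_INDEX = {
    "skyvern": "Skyvern docs snippet: how to configure a browser automation task ...",
}

@mcp.tool()
def search_indexed_docs(query: str, source: str) -> str:
    """Return the most relevant snippet from an indexed docs source."""
    # A real implementation would run hybrid graph + vector retrieval here.
    return TOY_INDEX.get(source.lower(), "No indexed source found for that name.")

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an IDE agent can call it
```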

Nia uses a hybrid code search architecture that combines graph-based structural reasoning with vector-based understanding. When a repo or documentation site is ingested, Tree-sitter parses it into ASTs across 50+ languages (plus natural-language content), and the code is chunked along function/class boundaries into stable, content-addressable units. These chunks are stored both in a graph database, to model relationships like function calls and class inheritance, and in a vector store. At query time, a lightweight agent with a give_weight tool dynamically assigns weights between graph and vector search based on intent (e.g., "who calls X" vs. "how does auth work"), and both paths are searched in parallel. Results are fused, enriched with full code context, and passed through multi-stage rerankers: a semantic reranker, cross-encoders, and LLM-based validators.
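
To illustrate the query-time flow described above, here is a minimal, self-contained sketch of the general shape: an intent-based give_weight step, graph and vector search run in parallel, weighted score fusion, and a reranking hook. Every function here is a toy stand-in of my own, not Nia's implementation.

```python
# Sketch of hybrid graph + vector retrieval with intent-based weighting.
# All search functions and results are toy stand-ins for illustration.
import asyncio

async def graph_search(query: str) -> list[tuple[str, float]]:
    # Stand-in for traversing call/inheritance edges in the graph database.
    return [("auth.middleware.check_token", 0.9), ("auth.routes.login", 0.6)]

async def vector_search(query: str) -> list[tuple[str, float]]:
    # Stand-in for similarity search over embedded code/doc chunks.
    return [("docs/authentication.md#overview", 0.8), ("auth.routes.login", 0.7)]

def give_weight(query: str) -> tuple[float, float]:
    # Toy version of the weighting agent: structural questions lean on the
    # graph, conceptual questions lean on the vector store.
    structural = any(k in query.lower() for k in ("who calls", "callers of", "inherits from"))
    return (0.7, 0.3) if structural else (0.3, 0.7)

def rerank(query: str, chunk_ids: list[str]) -> list[str]:
    # Placeholder for the semantic / cross-encoder / LLM validation stages.
    return chunk_ids

async def hybrid_search(query: str, top_k: int = 5) -> list[str]:
    w_graph, w_vec = give_weight(query)
    graph_hits, vec_hits = await asyncio.gather(graph_search(query), vector_search(query))
    # Fuse the two result lists with the intent-derived weights.
    fused: dict[str, float] = {}
    for weight, hits in ((w_graph, graph_hits), (w_vec, vec_hits)):
        for chunk_id, score in hits:
            fused[chunk_id] = fused.get(chunk_id, 0.0) + weight * score
    candidates = sorted(fused, key=fused.get, reverse=True)
    return rerank(query, candidates)[:top_k]

if __name__ == "__main__":
    print(asyncio.run(hybrid_search("how does auth work")))
```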

Early signal: in internal evals we improved Cursor's performance by 27% once Nia had indexed external docs that the models couldn't get from their training data or from searching the web.

Quickstart: <https://www.youtube.com/watch?v=5019k3Bi8Wo> Demo: <https://www.youtube.com/watch?v=Y-cLJ4N-GDQ>

To try it out: grab an API key at https://app.trynia.ai/ and follow instructions at https://docs.trynia.ai/integrations/nia-mcp.

Try it and break it! I'd love to know which contexts your agent still misses: corner cases, latency issues, scaling bugs. I'm here 24/7.

Thanks!

Comments (24)

efitz · 51m ago
When I start a nontrivial coding task with AI, I add a "context" directory, put instructions in the tool prompts on how to use the files in that directory, and then spend a couple of hours using a thinking chat AI to generate the documentation I want (like "build me an API document for this library; the source code is at this URL and here are some URLs with good example code").

I’ve had generally good results with this approach (I’m on project #3 using this method).

jellyotsiro · 34m ago
Yep, I used a similar approach a couple of months ago but found it really inefficient because it took me so much time.

Give Nia a try and use it on any docs; very curious to hear your feedback.

NiloCK · 2h ago
This looks interesting, and congrats on the launch. An immediate piece of critical feedback is that you should try to be a little more specific in the tagline. All MCP servers give context to coding agents - that's what an MCP server is (at least the resources/prompts channels of an MCP server).
jellyotsiro · 1h ago
Thanks for the feedback! In this case it's developer focused, so primarily docs / external repos.
dang · 19m ago
Ok, I've put that in the title above. Is it correct?
jellyotsiro · 16m ago
thanks!
afro88 · 1h ago
> In internal evals we improved Cursor's performance by 27% once Nia had indexed external docs that the models couldn't get from their training data or from searching the web.

What external docs do you have access to that aren't found on the web?

jellyotsiro · 1h ago
For evals, I used Skyvern -> https://github.com/Skyvern-AI/skyvern and their https://docs.skyvern.com/introduction

LLMs and coding agents have general knowledge, but they mostly give outdated info, even when asked to search the web.

EcommerceFlow · 43m ago
A killer app would be an up-to-date database of documentation from a large number of sources. For example, fully up-to-date Shopify API documentation that could be included within Cursor at the click of a button.

I believe right now you're requiring us to do the scraping/adding?

jellyotsiro · 37m ago
Great question :)

Nia already supports that. Just take the link, e.g. https://mintlify.com/docs, and ask it to index it (it will crawl every subpage available from the root link you specify).
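
For readers curious what "crawl every subpage from the root link" typically involves, here is a rough breadth-first-crawl sketch under my own assumptions (not Nia's crawler): it stays on the same host and under the root path, and collects page text that could later be chunked and embedded. Requires requests and beautifulsoup4.

```python
# Illustrative breadth-first docs crawl starting from a root link.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl_docs(root: str, max_pages: int = 200) -> dict[str, str]:
    """Collect text from pages reachable under the root URL's host and path."""
    base = urlparse(root)
    seen, pages = {root}, {}
    queue = deque([root])
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        soup = BeautifulSoup(resp.text, "html.parser")
        pages[url] = soup.get_text(" ", strip=True)  # text to chunk/embed later
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            parsed = urlparse(link)
            # Only follow links on the same host and under the root path.
            if parsed.netloc == base.netloc and parsed.path.startswith(base.path) and link not in seen:
                seen.add(link)
                queue.append(link)
    return pages

if __name__ == "__main__":
    docs = crawl_docs("https://mintlify.com/docs", max_pages=20)
    print(f"crawled {len(docs)} pages")
```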

Fraaaank · 18m ago
Context7 does this: https://context7.com/
hrpnk · 3m ago
Last time I checked, Context7 depended on an opt-in from library authors, expressed through a marker file in the repository, which negatively affects adoption and docs coverage.
jellyotsiro · 13m ago
In my experience, it lacks a lot of things that Nia can do:

- Nia can do deep research across any docs / codebase and then find any relevant links or repos to index.

- It also supports both private and public repos :)

Let me know about your experience with Context7 (if you've used it) and which docs you used.

kokanator · 1h ago
I was unable to locate details regarding how the code/data is used/owned by the service. Clicking on the Legal link simply sends you to the top of the Home Page.

At this time I can't even think about using the tool until I know what you are doing with my information and who owns or has access to it.

jellyotsiro · 1h ago
Thank you so much, working on it.

Edit: it's on the website now. Forgot to add it, my bad.

taherchhabra · 1h ago
Sometimes Claude Code defaults to using older versions of certain libraries; I have to explicitly tell Claude to use the specific version. Even then it goes back to the older version, so I downloaded the entire repo of that library and put it in my project folder. Does your product solve that?
jellyotsiro · 1h ago
My product isn't for local projects or your own workspace; rather, if you index another codebase, it will process it and make it callable via MCP (not on your machine, as files get deleted at runtime to prevent privacy issues).
tevon · 1h ago
Can you explain what you mean by "not on your machine, as files get deleted at runtime to prevent privacy issues"? I may be misunderstanding, but I'd personally want the files to be on my machine and served to my agent locally, instead of through a remote MCP server that I don't have control of.
jellyotsiro · 1h ago
Currently, the open source repositories and documentation you index are stored in a graph database and as embeddings in a vector store (similar to Cursor.com). Indexing itself happens by creating a temporary file, which is deleted afterward.

BTW, I'm working on allowing users to index their local files and store them fully locally! Will update you on that.

nisegami · 52m ago
How does this compare to Context7?
jellyotsiro · 35m ago
- Nia can do deep research across any docs / codebase and then find any relevant links or repos to index.

- It also supports both private and public repos :)

kissgyorgy · 1h ago
[flagged]
dang · 17m ago
Please don't be a jerk on HN. We're trying for the opposite here.

Especially please don't do this in Show HN threads, which have extra rules to forbid this kind of thing: https://news.ycombinator.com/showhn.html.

jellyotsiro · 1h ago
Hm, what model are you using? I just ran it on my Mac and got 100 for performance and 96 for best practices.