When I start a nontrivial coding task with AI, I add a “context” directory, add instructions to the tool prompts about how to use the files in that directory, and then spend a couple of hours using a thinking chat AI to generate the documentation I want (like “build me an API document for this library; the source code is at this URL and here are some URLs with good example code”).
I’ve had generally good results with this approach (I’m on project #3 using this method).
stillpointlab · 22h ago
I can confirm this is effective - I have done the same.
I haven't done extensive experiments, but I have noticed anecdotal benefits to asking the LLM how they want things structured as well.
For example, for complex multi-stage tasks I asked Claude Code how best to communicate the objective, and it recommended a markdown file with the following sections: "High-level goal", "User stories", "Technical requirements", "Non-goals". I then created such a doc for a pretty complex task and asked Claude to review the doc and ask any clarifying questions. I answer the questions (usually 5-7) and put the answers into a "Clarification" section.
I have also added a "Completion checklist" section that I use to ensure that Claude follows all of the rules in my subdirectory "README.md" files (I have one for each major sub-section of code, like my service layer, my router layer, my database, etc.). I usually do 2-3 rounds of Claude asking questions and me adding to the "Clarification" section, and then Claude is satisfied and ready to implement.
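A minimal sketch of what such a doc can look like (the section names are the ones suggested above; the task and all details below are just a hypothetical example):

```markdown
# Task: add rate limiting to the public API   <!-- hypothetical example task -->

## High-level goal
Protect public endpoints from abuse without affecting authenticated internal traffic.

## User stories
- As an API consumer, I get a clear 429 response with a Retry-After header when I exceed my quota.

## Technical requirements
- Limit: 100 requests/minute per API key (configurable).
- Implement in the router layer; follow the rules in router/README.md.

## Non-goals
- No per-endpoint custom limits in this iteration.

## Clarification
- Q: Should limits apply to webhook callbacks? A: No, they are exempt.

## Completion checklist
- [ ] Rules in service/README.md and router/README.md followed
- [ ] Tests added for the 429 path
```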
The bonus of this approach is that I now have a growing list of task specifications checked into a "tasks" directory, showing the history of how the code base came to be.
asteroidburger · 13h ago
This sounds a lot like how Kiro works. Your requirements and design are in a .kiro directory inside the project, allowing you to commit them. The process is structured within Kiro to walk you through generating docs for each phase before beginning to write code. Ultimately, it generates a list of tasks, and you can run them one at a time and review/update between each.
pglevy · 9h ago
My use case is a little different (mostly prototyping and building design ops tools) but +1 to this flow.
At this point, I typically do an LLM readme at the branch level to document both planning and progress. At the project level I've started having it dump (and organize) everything in a work-focused Obsidian vault. This way I end up with cross-project resources in one place, my repos don't get bloated, and other agents can use it from where it is.
jellyotsiro · 22h ago
oh damn interesting
theshrike79 · 13h ago
I have a "llm-shared" git submodule I add to all my projects.
In there I have generic advice on project management (use `gh` and GitHub issues for todo lists) and language-specific guidance in separate files, like which libraries to use, etc.
Then I have a common prompt template for different agents that tells them to look there for specific technology choices and create/update their own WHATEVER.md file in the repo.
Gemini-cli is pretty efficient for creating specs and doesn't run out of context. With Context7 it can pull API specs into the documentation it creates, and with the Brave API it can search for other things.
After it's done, I can just tell Claude to make a step-by-step plan based on the specs and create GitHub issues for them with the appropriate labels.
Clear context, and get Claude working on the issues one by one.
luckystarr · 1d ago
For thorny problems I have the agent give me a simplified flow-chart in mermaid syntax. The LLM's brain-farts are easily visible then. I correct the flow-chart ("Ah, you're right!") and then let it translate it into code. Works wonders.
mrinterweb · 23h ago
I often provide mermaid diagrams in my prompt. Mermaid seems to be a good common markup for communicating relationships between humans and LLMs.
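For instance, a tiny diagram like the following (a made-up request flow, just to show the shape of the markup) is usually enough to surface a wrong branch or a missing step:

```mermaid
flowchart TD
    A[Receive request] --> B{API key valid?}
    B -- no --> C[Return 401]
    B -- yes --> D[Load user record]
    D --> E{Quota exceeded?}
    E -- yes --> F[Return 429]
    E -- no --> G[Handle request]
```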
jellyotsiro · 23h ago
that's smart! I've actually been thinking about integrating something like Mermaid flowcharts directly into Nia's output, since visual context can help Cursor and similar tools understand the problem much better. Have you found any particular types of problems where the flowchart approach really shines (or falls short)? Would love to hear more
jerpint · 13h ago
I built a library to manage this exact workflow: https://github.com/jerpint/context-llemur. I actually used the library to build the library.
It's MCP/CLI friendly, and wraps git around a context folder, so you can super easily load context anywhere using "ctx load" and ask LLMs to update and save context as things move along.
jellyotsiro · 1d ago
yep, I used a similar approach a couple of months ago but found it really inefficient because it took me some time.
give Nia a try and use it on any docs, very curious to hear your feedback
NiloCK · 1d ago
This looks interesting, and congrats on the launch. An immediate piece of critical feedback is that you should try to be a little more specific in the tagline. All MCP servers give context to coding agents - that's what an MCP server is (at least the resources/prompts channels of an MCP server).
jellyotsiro · 1d ago
thanks for the feedback! in this case, it is developer-focused, so primarily docs / external repos
dang · 1d ago
Ok, I've put that in the title above. Is it correct?
jellyotsiro · 1d ago
thanks!
kokanator · 1d ago
I was unable to locate details regarding how the code/data is used/owned by the service. Clicking on the Legal link simply sends you to the top of the Home Page.
At this time I can't even think about using the tool until I know what you are doing with my information and who owns or has access to it.
jellyotsiro · 1d ago
thank you so much, working on it
edit: it is on the website now. forgot to add it, mb
mtrovo · 9h ago
The idea is nice but the pricing might be hard to land. Your proposal is that I pay $20/mo for a Cursor licence, which includes an IDE, a coding agent and all the shenanigans involved to make it work, and then on top of that spare an additional $15/mo for access to up-to-date documentation. That might be a hard sell side by side.
At some point we will need an aggregator of MCPs delivered along with the agents; shopping for them individually isn't worth the perceived cost from the consumer's perspective.
afro88 · 1d ago
> In internal evals we improved Cursor’s performance by 27 % once Nia had indexed external docs models couldn’t get from their training data or searching the web.
What external docs do you have access to that aren't found on the web?
LLMs and coding agents have general knowledge but they mostly give outdated info, even when asked to search on the web.
mvATM99 · 13h ago
I've been using GitMCP.io + GitHub Copilot for this problem specifically (AI assistant + accurate docs). The downside is that you need to add a separate MCP server for each repository, but the qualitative difference in agent mode is incomparable.
I used it recently to do a major refactor and upgrade to MLflow 3.0. Their documentation is a horrid mess right now, but the MCP server made it a breeze because I could just have the assistant browse their codebase. It would have taken me hours extra otherwise.
dcreater · 13h ago
How does GitMCP compare to Context7?
mvATM99 · 8h ago
Not sure, I can't run it since I can't install Node.js in my work environment. What is your experience with Context7 like?
As for GitMCP: I think its URL-fetching tool for docs is not great, but the code-searching tool is quite good. Regardless, I remain open to alternatives; I'm not stuck on this yet.
jellyotsiro · 3h ago
you should def give it a try, happy to hear what u think
creepy · 22h ago
What's the competitive advantage of Nia? What stops CursorAI or other companies from implementing this feature themselves later?
whoodle · 1d ago
I’ve been following you on X since your account was randomly shown to me. Your story is really cool, and so is your product. I’m working on my own LLM context problem as a side project (in a completely different space), but this could really fill voids I’m finding when doing it myself.
I’m going to try this today. Best of luck with this!
Are you still building this yourself (and Claude)?
jellyotsiro · 1d ago
that means a lot, thanks! and your feedback would be golden, lmk how it goes.
I suggest watching this quickstart: https://youtu.be/5019k3Bi8Wo?si=3mMcp1Zd5C3Z0Rso
Yes, I am building it solo + Claude Code haha
Sometimes Claude Code defaults to using older versions of certain libraries, and I have to explicitly tell Claude to use the specific version. Even then it goes back to the older version, so I downloaded the entire repo of that library and put it in my project folder. Does your product solve that?
my product isn't for local projects and your own workspace; rather, if you index another codebase, it will process it and make it callable via MCP (not on your machine, as files get deleted at runtime to prevent privacy issues)
tevon · 1d ago
Can you explain what you mean by "not on ur machine as files get deleted on runtime to prevent privacy issues"? I may be misunderstanding, but I'd personally want the files to be on my machine and served to my agent locally, instead of via a remote MCP server that I don't have control of.
jellyotsiro · 1d ago
Currently, the open source repositories and documentation you index are stored as embeddings in a graph database (similar to Cursor.com). Indexing itself happens by creating a temporary file, which is deleted afterward.
btw, I am working on allowing users to index their local files and store everything fully locally! will update you on that
smcleod · 1d ago
Do you plan on releasing the self hosted version free (or better yet - open source) for non-commercial, personal use?
jellyotsiro · 1d ago
Was thinking a lot about open sourcing the MCP specifically.
Will keep you in the loop
smcleod · 23h ago
My thinking is that across software development as an industry we have significant subscription fatigue. Obviously for hosted services there are ongoing costs involved, and regardless, businesses should be paying people for their work. However, individuals working on their own hobby projects and learning, who often can't afford or prioritise the spend, are the very people you want advocating for your products within the commercial space. I think the middle ground for a lot of this subscription fatigue is to offer self-hosted versions at no cost for non-commercial, personal use.
jellyotsiro · 23h ago
any companies that instantly come to mind with a similar structure to the one you described? just curious (pretty sure Browser Use is one of them)
dcreater · 13h ago
I'm using Context7 and generally happy. Any advantages to using Nia?
jellyotsiro · 3h ago
- deep research agent to enrich and give more context
- support for both documentation and entire codebases (both private and public)
stingraycharles · 10h ago
Context7 injects a huge number of tokens into your context, which leads to a very low signal-to-noise ratio. I’m using https://ref.tools myself; it delivers much more targeted docs.
eikaramba · 8h ago
ref.tools did really badly in my tests; it hallucinated quite a bit of wrong documentation.
jellyotsiro · 3h ago
same here, tried both context7 and ref tools.
fudged71 · 1d ago
Is it 27% better or 10x better?
jellyotsiro · 1d ago
you can 10x your productivity haha :)
nisegami · 1d ago
How does this compare to Context7?
jellyotsiro · 1d ago
- nia can do deep research across any docs / codebase and then find any relevant links or repos to index.
- it also supports both private and public repos :)
EcommerceFlow · 1d ago
A killer app would be an up-to-date database of documentation from X number of sources. For example, fully up-to-date Shopify API documentation that could be included within Cursor at the click of a button.
I believe right now you're requiring us to do the scraping/adding?
jellyotsiro · 1d ago
great question:)
Nia already supports that. Just take a link, e.g. https://mintlify.com/docs, and ask it to index it (it will crawl every subpage available from the root link you specify).
In my experience, Context7 lacks a lot of things that Nia can do:
- nia can do deep research across any docs / codebase and then find any relevant links or repos to index.
- it also supports both private and public repos :)
lmk about your experience with Context7 (if you used it) and what docs did you use?
hrpnk · 1d ago
last time I checked, Context7 depended on an opt-in from library authors, expressed through a marker file in the repository, which negatively affects adoption and docs coverage.
bdangubic · 1d ago
or you can, you know, use tools that are unethical but have better adoption :)
chromehearts · 14h ago
side note: horrible website, incredibly laggy and takes a long time to load
jellyotsiro · 3h ago
what laptop are you using? it shows 96/100 performance for most people :)
gulercan · 1d ago
we have a trademark for "Nia" in the UK. Keep that in mind when you need to scale.
franky47 · 22h ago
Having young kids, the Thomas the Tank Engine character came to mind.
smrtinsert · 1d ago
Admittedly I haven't tried documentation MCPs at all, but can anyone quantify how much better it is than simply linking docs in the <LLM>.md file?
jellyotsiro · 1d ago
to just give a quick comparison:
one of my recent customers (YC S25) needed to migrate to Stripe ASAP and Cursor etc. gave them deprecated docs. they used my tool to index the entire Stripe docs and then used it to migrate in a couple of hours :)
lmk if u have more questions and happy to help
nikolayasdf123 · 14h ago
not sure what I am paying for here. isn't the idea of MCP self-deployed MCP servers? I have my data, my hardware, I pay for the LLM, and then also pay for this?... cmon.
kissgyorgy · 1d ago
[flagged]
dang · 1d ago
Please don't be a jerk on HN. We're trying for the opposite here.
Especially please don't do this in Show HN threads, which have extra rules to forbid this kind of thing: https://news.ycombinator.com/showhn.html.
same here on Firefox. the static content is not the problem; all the animations and WebGL stuff are making the website crash.
jellyotsiro · 1d ago
hm, what model are you using? just ran it on my mac and got 100 for performance and 96 for best practices.
elllipses · 17h ago
I'm experiencing something similar with Firefox. I have a canvas fingerprint-blocker extension that's going haywire and preventing the full page from rendering.
Are you using some form of canvas fingerprinting, either intentionally or unintentionally (through third-party scripts)?