We’ve been working on this problem off and on for over a year now. Models bake knowledge of some particular tools/libraries/patterns into their weights very well and others quite poorly. In my experience Claude is quite good at integrating the dog.ceo API, noticeably ignorant when it comes to Postgres features, and knows gcloud commands just well enough to very confidently and consistently hallucinate arguments.
We’ve baked a solution to this into our product, so if anybody is working on an API/SDK/etc feel free to contact me if your users are running into problems using LLMs to integrate them.
One thing we’ve noticed is that subtle changes to library/api integration prompts’ context can be surprisingly impactful. LLMs do very well with example commands and explicit instructions to consider X, Y, and Z. If you just dump an API reference and information that implicitly suggests that X, Y, and Z might be beneficial, they won’t reliably make the logical leaps you want them to unless you let them iterate or “think” (spend more tokens) more. But you can’t as easily provide an example for everything, and the ones you do will bias the models towards them, so you may need a bit of both.
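As a contrived sketch of the difference (Python, with made-up endpoints and rules; the point is the shape of the context, not the specifics):

    # Contrived sketch: two ways to assemble an integration prompt's context.
    # The endpoints and rules below are made up for illustration.
    api_reference = "GET /v1/items ...\nPOST /v1/items ...\n"  # stand-in for a docs dump

    # Style 1: reference only -- the model must infer pagination, retries, etc. itself.
    context_reference_only = api_reference

    # Style 2: reference plus explicit instructions and an example command.
    context_with_guidance = (
        api_reference
        + "\nWhen integrating this API, always:\n"
        + "- paginate list endpoints until the next-page token comes back empty\n"
        + "- retry 429/503 responses with exponential backoff\n"
        + "- pin the API version header instead of relying on the default\n"
        + '\nExample:\n  curl -H "X-Api-Version: 2024-01" '
        + '"https://api.example.com/v1/items?page_size=100"\n'
    )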
EGreg · 1h ago
I filed a provisional patent this year about exactly how I would solve this problem. Imagine hiring a "team of developers" who can learn your library and iterate 24/7, improving things, doing support, even letting the pointy-haired boss turn his ideas into reality in a forked sandbox on the weekend.
For the last 15 years I've been writing against software patents, and producing open source software that cost me about $1M to develop, but in the case of AI, I have started to make an exception. I have also rethought how I am going to do open source vs closed source in my AI business. A few weeks ago I posted on HN asking whether it's a good idea, and no one responded: https://news.ycombinator.com/item?id=44425545
(If anyone wants to work with me on this, hit me up, email is in my profile)
tantalor · 23m ago
Boooo software patents.
weitendorf · 1h ago
I hope we don’t have to challenge it!
We’re trying to build a similar kind of experience, but for both “sides” of the problem: software providers and software users/integrators.
EGreg · 20m ago
I guess that's why patents are annoying. I have been Mr. Open Source and against intellectual property for most of the past 15 years. But with AI companies rampantly taking everyone's work and repurposing it, and with VC companies not being very eager to invest in open source, I'm taking a different tack with my AI ventures.
My first two companies are radically open source, and no one cared:
https://github.com/Qbix
https://github.com/Intercoin
And this is what we're doing now with AI, but it's not going to be as open: https://engageusers.ai/deck.pdf
Don't worry, we're not looking to get into it with some random other projects. It's mostly to protect our business model against the Big Tech and enterprises.
dingnuts · 13m ago
don't worry everybody, this guy's profile shows he was a blockchain booster five minutes ago, just another grifter, nothin to see here
richardblythman · 3h ago
If coding agents are the new entry point to your library, how sure are you that they’re using it well?
I asked this question to about 50 library maintainers and dev tool builders, and the majority didn't really know.
Existing code generation benchmarks focus mainly on self-contained code snippets and compare models, not agents. Almost none focus on library-specific generation.
So we built a simple app to test how well coding agents interact with libraries:
• Takes your library’s docs
• Automatically extracts usage examples
• Tasks AI agents (like Claude Code) with generating those examples from scratch
• Logs mistakes and analyzes performance
We’re testing libraries now, but it’s early days. If you're interested: Input your library, see what breaks, spot patterns, and share the results below.
We plan to expand to more coding agents, more library-specific tasks, and new metrics. Let us know what we should prioritize next.
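For the curious, the loop is roughly this shape (a Python sketch with stub functions; our real pipeline is more involved):

    # Rough sketch of the evaluation loop; extract_examples, run_agent, and
    # score_against are illustrative stubs, not our production code.
    from dataclasses import dataclass

    @dataclass
    class Example:
        name: str
        description: str
        code: str

    def extract_examples(docs: str) -> list[Example]:
        # Stand-in: the real version parses usage examples out of the docs.
        return [Example("quickstart", "initialize the client and list items",
                        "client = Lib()\nclient.list_items()")]

    def run_agent(agent: str, task: str) -> str:
        # Stand-in: the real version drives a coding agent like Claude Code.
        return "client = Lib()\nclient.list_items()"

    def score_against(reference: str, attempt: str) -> list[str]:
        # Stand-in: the real version runs/lints the attempt and diffs API usage.
        return [] if reference.strip() == attempt.strip() else ["API call mismatch"]

    def evaluate_library(docs: str, agent: str = "claude-code") -> list[dict]:
        results = []
        for ex in extract_examples(docs):
            attempt = run_agent(agent, f"Write code that: {ex.description}")
            results.append({"example": ex.name,
                            "mistakes": score_against(ex.code, attempt)})
        return results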
justonceokay · 2h ago
If making dev tooling is selling shovels to the miners, then this is like selling sheet metal to the shovel makers.
grim_io · 28m ago
Yeah. Feels like a data mining operation for training data.
I could be wrong.
bdhcuidbebe · 2h ago
> If coding agents are the new entry point to your library, how sure are you that they’re using it well?
> I asked this question to about 50 library maintainers and dev tool builders, and the majority didn't really know.
Why should they even bother to answer such a loaded and hypothetical question?
weitendorf · 1h ago
Let’s meet and see if it might make sense for us to team up. We’re working on this from the agent/library-specific-task side, and we might be better than chatgpt at marketing your product :)
dotancohen · 2h ago
Note that this comment is not hijacking. The author of this comment is also the author of the post.
add-sub-mul-div · 1h ago
That's the more likely assumption. Accounts with only self-promotion spam activity have become more of a rule here than an exception.
spankalee · 2h ago
Why do we need to log in?
richardblythman · 52m ago
We send out an email when the tests are finished (takes about 30 mins).
grim_io · 31m ago
That makes you sound like you are dodging the question.
metadat · 2h ago
The skip-to-the-end answer: Context7 MCP is so good it seems like magic, even to many well-informed, highly capable hackers. Simply wildly good for libraries and SDKs. All it takes to start using it is to add the MCP provider to your agent config and say, "Use Context7 for this".
https://context7.com/
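The config addition is small. As of this writing the server is published as @upstash/context7-mcp (check the Context7 README in case that has changed):

    {
      "mcpServers": {
        "context7": {
          "command": "npx",
          "args": ["-y", "@upstash/context7-mcp"]
        }
      }
    }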
I'm confused a bit by this. For instance, Gemini was struggling to write proper Java code for using Firebase Admin SDK. It would write Java code using methods that only exist in the JavaScript SDK. And when I would correct it, it would give other options that also were only in the JavaScript SDK or were invalid.
So I thought this is where context7 would be useful, but I'm confused what I'm looking at in the detail page: https://context7.com/firebase/firebase-admin-java
I was expecting some sort of dump of all the admin methods, but it gives a single example of one library function and info on how to build javadoc.
0x457 · 56m ago
You're looking at a summary of chunks of code that are relevant to the given library. If you type what specifically you need documentation for and adjust the output token count, it will give the LLM relevant fragments. It lets you emulate RAG.
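If I remember the URL scheme right (treat this as an assumption rather than gospel), each library page also exposes an llms.txt endpoint that takes topic and token-budget parameters, something like:

    https://context7.com/firebase/firebase-admin-java/llms.txt?topic=messaging&tokens=3000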
What is the best approach to have something like context7 for internal tools and libraries?
It looks like https://firebase.google.com/docs/ is being refreshed as I type this, I imagine that using that as a source and including "Java" in the topic filter might give more results (or maybe the https://github.com/firebase/firebase-docs has the same content).
rikroots · 2h ago
I've done a lot of work recently to make my library more "LLM Friendly", but I'm not willing at this time to sign up to a service which I don't know I'd ever use again just to run a test on your behalf. If you want to run the test on my library then its GitHub can be found here: https://github.com/KaliedaRik/Scrawl-canvas
prophesi · 59m ago
In Elixir land, the Ash Framework created a package called usage_rules[0] as an experimental attempt to solve this problem a few months ago. The latest version of the Phoenix Framework (1.8) includes it in their `mix phx.new` generator and in their own hex packages[1]. Library owners would need to add their own usage rules, but it seems to help even for just the core packages Phoenix includes.
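The workflow, as I understand it from the readme (exact task names may have drifted, so check [0] below): each library ships a usage-rules.md, and you sync them all into your agent's rules file with something like:

    mix usage_rules.sync CLAUDE.md --all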
It's a neat idea. But if, as we're told, LLMs will get better and better, something like this, in theory, will be increasingly unnecessary.
[0] https://hexdocs.pm/usage_rules/readme.html
[1] https://github.com/phoenixframework/phoenix/tree/main/usage-...
I feel like most of the problems with AI using a library come down to how we mix specification and implementation. C and C++ got it right (even if by accident) with separating specification (headers) from implementation.
Instead of lamenting the design trend of not maintaining this split, for my own code I wrote a utility to extract specifications from my existing code.
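For Python, a minimal version of the idea (not my actual utility, just the gist) is short:

    import ast
    import sys

    def extract_spec(path: str) -> str:
        """Emit class/function signatures plus first docstring lines, dropping bodies."""
        with open(path) as f:
            tree = ast.parse(f.read())
        out = []
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef):
                out.append(f"class {node.name}:")
            elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                args = ", ".join(a.arg for a in node.args.args)
                out.append(f"def {node.name}({args}):")
            else:
                continue
            doc = ast.get_docstring(node)
            if doc:
                out.append(f'    """{doc.splitlines()[0]}"""')
        return "\n".join(out)

    if __name__ == "__main__":
        print(extract_spec(sys.argv[1]))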
burnte · 1h ago
> It's a neat idea. But if, as we're told, LLMs will get better and better, something like this, in theory, will be increasingly unnecessary.
I don't think so. I think understanding the context of a project will always produce superior results. I think instead we'll just make it a lot easier to add to the training corpus the LLM pulls from.
weitendorf · 1h ago
Respectfully, I disagree. It is much faster and cheaper to direct an LLM to add a call to a battle-tested library that encapsulates complex logic than it is to design and implement that logic from scratch, even if it’s capable of that.
We’re betting on almost the exact opposite idea: we can make agentic software engineering cheaper and more reliable by making it easy for LLMs to write, find, and integrate libraries and other third party software.
9rx · 1h ago
What do coding agents need my library for?
Don't they know how to write their own code? Isn't that a coding agent's entire purpose in life?
There must be conflicting definitions out there. What does "coding agent" mean in this context?
weitendorf · 1h ago
Why did my engineering team handle payments through Stripe instead of building a custom payment processor? Aren’t they supposed to be engineering things?
9rx · 1h ago
Coding agents presumably don't know how to deal with non-coding things. Stripe's real value isn't in its technology, but it sorting out the complex human problems associated with payment processing. Sending a number over a network is not any great feat. Getting humans to agree that number has meaning is another matter.
imtringued · 5m ago
This is an extremely roundabout way of saying that you need a payment processor license.
suyash · 2h ago
I'd use this if it were an open source tool.
paularmstrong · 2h ago
Needing to sign up before I can see or do anything made me close the tab immediately.
richardblythman · 43m ago
Good to know. I think we're likely to move towards running coding agents locally in the next iteration.