If AI Can't Code It, It's Already Dead
8 points by kayabuilds | 18 comments | 5/26/2025, 10:04:20 PM
The clock is ticking for a lot of frameworks and libraries.
Not because they're bad. Not because the community gave up.
But because AI can't - or won't - code in them.
If GPT or Claude struggles with your framework/library, it might already be irrelevant... even if it's technically brilliant.
That means: (1) popular and well-documented frameworks like React or Next.js will thrive; (2) niche or overly complex tools without wide training data probably won't stick around.
I've got mixed feelings about this. On one hand it's efficient and great for productivity. On the other, it feels like innovation might get filtered out before it even starts.
What do you think?
I know that I've had Copilot make up non-existent methods in the AWS Golang v2 SDK. It also routinely fabricates IAM Actions, AWS managed policies, and Terraform attributes for AWS resources. In a similar vein, Claude made up non-existent kwargs for LlamaIndex methods when I was building a toy RAG implementation last weekend. LLMs are force multipliers but still require supervision because they hallucinate; they clearly aren't perfect at leveraging even the knowledge already baked into their weights from the training corpus. So I see no reason why they couldn't be told about new frameworks on the fly and perform at a similarly "good enough" level.
I suspect that as LLM coding tools mature, it'll get easier to incorporate framework documentation into queries and mitigate these issues. The last time I used Continue, earlier this year, it let you add React docs to chat queries, so I don't think we're far off.
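For what it's worth, the "docs in the query" approach is simple enough to sketch. Here's a minimal example assuming the OpenAI Python SDK; ask_with_docs and the doc_snippets list are my own hypothetical framing, not how Continue actually implements it:

    # Minimal sketch of "docs in the prompt": prepend framework docs to
    # the question before calling a chat model. Assumes the OpenAI
    # Python SDK; function and variable names here are hypothetical.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_with_docs(question: str, doc_snippets: list[str]) -> str:
        context = "\n\n".join(doc_snippets)
        messages = [
            {"role": "system",
             "content": "Answer using the framework documentation below.\n\n" + context},
            {"role": "user", "content": question},
        ]
        resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        return resp.choices[0].message.content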
What is asserted without evidence can be dismissed without evidence.
I didn't mean "we'll never have new libraries again".
My point is that if a dev can't use a tool with the help of GPT or Claude, that tool starts off at a disadvantage.
Innovation can still happen. It just has to fight harder for attention now.
> We will never have a substantial API change to an existing library either, since there will be no training data for it.
If an update breaks the LLM's ability to assist, devs might avoid upgrading until the models catch up. It creates a weird lag where the old API is more AI-friendly than the new one, even if the new one is technically better.
Just look at React Router (Remix). It's a pain having to constantly tell the AI which version you're using. Sometimes you spend more time correcting the AI than writing actual code. (https://x.com/rafalwilinski/status/1924155117172838838)
So yeah, changes can happen. But now they need to account for how LLMs will interpret and support them, not just how humans will.
The AI we've got (LLMs) are going to homogenize everything. It may be that new libraries never get written, or at least don't become popular, but that's kind of true today. I do think that LLMs will keep newbies from making both horrible and interesting mistakes, and will keep the experienced from making interesting judgements. Everything will look the same. We'll finally get the "consistency" in interfaces we've always said we wanted.
Personally, I do not hesitate to use a library if it has decent documentation or even well-structured source code. And I say this as an AI autocomplete user.
Additionally, if code is well structured, humans and AI alike can usually learn it from a very small context. For example, many AI models can write decent code if you just provide them with a list of the library's functions and classes, along with their argument names and types.
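To make that concrete, here's a small sketch that produces such a list using only Python's standard library; summarize_api is my own hypothetical helper, run against the stdlib json module as a stand-in for "the library":

    # Sketch: build a compact API summary (public functions/classes plus
    # their signatures) that fits in a small LLM context. Standard
    # library only; summarize_api is a hypothetical helper.
    import inspect
    import json

    def summarize_api(module) -> str:
        entries = []
        for name, obj in inspect.getmembers(module):
            if name.startswith("_"):
                continue
            if inspect.isfunction(obj) or inspect.isclass(obj):
                try:
                    sig = str(inspect.signature(obj))
                except (ValueError, TypeError):  # some builtins lack signatures
                    sig = "(...)"
                entries.append(name + sig)
        return "\n".join(entries)

    print(summarize_api(json))  # e.g. dump(obj, fp, *, skipkeys=False, ...)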
I generate plenty of solid.js without any issues.
I'm also sure that in a couple of iterations, coding tools will have better RAG (or fine-tuning?) support for existing documentation and types.
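A bare-bones version of that retrieval step isn't far off even today. A sketch, assuming the OpenAI embeddings API and numpy; the chunking and the k=3 cutoff are arbitrary illustrations, not any particular tool's behavior:

    # Sketch of doc retrieval for a coding assistant: embed documentation
    # chunks, then pull the best matches into the prompt. Assumes the
    # OpenAI Python SDK and numpy; details are illustrative only.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def embed(texts: list[str]) -> np.ndarray:
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    def top_k_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
        doc_vecs = embed(chunks)      # one vector per doc chunk
        q_vec = embed([question])[0]  # vector for the query
        # cosine similarity between the query and every chunk
        sims = doc_vecs @ q_vec / (
            np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
        return [chunks[i] for i in np.argsort(sims)[::-1][:k]]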