If AI Can't Code It, It's Already Dead

kayabuilds · 5/26/2025, 10:04:20 PM · 8 points · 18 comments
The clock is ticking for a lot of frameworks and libraries.

Not because they're bad. Not because the community gave up.

But because AI can't - or won't - code in them.

If GPT or Claude struggles with your framework/library, it might already be irrelevant... even if it's technically brilliant.

That means:
(1) Popular and well-documented frameworks like React or Next.js will thrive.
(2) Niche or overly complex tools without wide training data probably won't stick around.

I've got mixed feelings about this. On one hand it's efficient and great for productivity. On the other, it feels like innovation might get filtered out before it even starts.

What do you think?

Comments (18)

alexjplant · 1d ago
> What do you think?

I know that I've had Copilot make up non-existent methods in the AWS Golang v2 SDK. It also routinely fabricates IAM Actions, AWS managed policies, and Terraform attributes for AWS resources. In a similar vein, Claude made up non-existent kwargs for LlamaIndex methods when I was building a toy RAG implementation last weekend. LLMs are force multipliers but still require supervision because they hallucinate; they clearly aren't perfect at leveraging even the knowledge already contained in their weights from the training corpus. So I see no reason why they couldn't be told about new frameworks on the fly and perform at a similar "good enough" level.

I suspect that as LLM coding tools mature, it'll get easier to incorporate framework documentation into queries and mitigate these issues. The last time I used Continue earlier this year, it let you add React docs to chat queries, so I don't think we're far off.
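
Until then, the workaround is pretty mundane: paste the relevant doc excerpts into the query yourself. A rough sketch of the idea in Python (ask_llm at the end is a placeholder, not any real API; the doc strings are just illustrative):

    def build_prompt(question: str, doc_snippets: list[str]) -> str:
        """Prepend framework documentation to the user's question."""
        context = "\n\n".join(doc_snippets)
        return (
            "Use only the API described in the documentation below.\n"
            "If something isn't documented, say so instead of guessing.\n\n"
            f"--- DOCUMENTATION ---\n{context}\n--- END DOCUMENTATION ---\n\n"
            f"Question: {question}"
        )

    docs = (
        "s3.NewFromConfig(cfg) returns an S3 client.\n"
        "client.ListBuckets(ctx, &s3.ListBucketsInput{}) lists buckets."
    )
    prompt = build_prompt("How do I list S3 buckets with the Go v2 SDK?", [docs])
    print(prompt)
    # answer = ask_llm(prompt)  # placeholder for whatever model/API you use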

zahlman · 1d ago
> What do you think?

What is asserted without evidence can be dismissed without evidence.

add-sub-mul-div · 1d ago
Maybe vibe coding is just an evolution of spouting off general nonsense by vibes.
flooq · 1d ago
Isn't AI supposed to work for us rather than the other way around? If AI can't learn to code, it's not AI, and it's not going to make it.
TheMongoose · 1d ago
By this logic we will never have another new library again. Period. We will never have a substantial API change to an existing library either, since there will be no training data for it.

What do you think?

kayabuilds · 1d ago
> By this logic we will never have another new library again

I didn't mean "we'll never have new libraries again".

My point is that if a dev can't use a tool with the help of GPT or Claude, that tool starts off at a disadvantage.

Innovation can still happen. It just has to fight harder for attention now.

> We will never have a substantial API change to an existing library either, since there will be no training data for it.

If an update breaks the LLM's ability to assist, devs might avoid upgrading until the models catch up. It creates a weird lag where the old API is more AI-friendly than the new one, even if the new one is technically better.

Just look at React Router (Remix). It's a pain having to constantly tell the AI which version you're using. Sometimes you spend more time correcting the AI than writing actual code. (https://x.com/rafalwilinski/status/1924155117172838838)

So yeah, changes can happen. But now they need to account for how LLMs will interpret and support them, not just how humans will.

TheMongoose · 1d ago
Who's gonna provide the training data for the new version if nobody updates?
bediger4000 · 1d ago
I think we're still suffering from Win32 and stdio in 2025.

The AI we've got (LLMs) are going to homogenize everything. It may be that new libraries never get written, or at least don't become popular, but that's kind of true today. I do think that LLMs will keep newbies from making both horrible and interesting mistakes, and will keep the experienced from making interesting judgements. Everything will look the same. We'll finally get the "consistency" in interfaces we've always said we wanted.

shlomo_z · 1d ago
Do you have any data to back this up, or are you speculating?

Personally, I do not hesitate to use a library if it has decent documentation or even well structured source code. And I say this as an AI autocomplete user.

Additionally, if code is well structured, usually humans and AI can both learn it with a very small context. For example, many AI models can write decent code if you provide them with a list of functions and classes in the library, along with their argument names and argument types.
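
That API-surface summary is also cheap to generate. A minimal sketch using only the Python standard library (json here is just a stand-in for whichever library you actually care about):

    import inspect
    import json  # stand-in target module; swap in the library you care about

    def api_surface(module) -> list[str]:
        """List public functions and classes with their signatures."""
        lines = []
        for name, obj in inspect.getmembers(module):
            if name.startswith("_"):
                continue  # skip private members
            if inspect.isclass(obj) or inspect.isfunction(obj) or inspect.isbuiltin(obj):
                try:
                    sig = str(inspect.signature(obj))
                except (ValueError, TypeError):
                    sig = "(...)"  # some C builtins expose no signature
                kind = "class" if inspect.isclass(obj) else "def"
                lines.append(f"{kind} {module.__name__}.{name}{sig}")
        return lines

    # Paste this summary into the prompt instead of the whole source tree.
    print("\n".join(api_surface(json)))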

gitgud · 1d ago
There are so many closed-source frameworks developed internally at large companies; for AI to be useful to those enormous customers, it needs to work on systems it wasn't trained on…
garrisonj · 1d ago
If AI is going to deliver on its promises, it’s gotta do better than that. Learning a new framework or library is the easy part.
credit_guy · 1d ago
You are right. On the other hand, with AI you can create lots of tutorials and examples and improve the documentation, so you create a training corpus for the ever-crawling AIs out there.
handfuloflight · 1d ago
Not an issue anymore now that you can reference custom context. Just more expensive and cumbersome.
mosdl · 1d ago
The LLMs already have trouble with basic React features like hooks, so that bodes poorly.
revskill · 1d ago
AI is too stupid to handle medium-level problems. It generates tons of overengineered, nonsensical boilerplate.
skywhopper · 1d ago
This is definitely untrue. In fact, I’d guess that libraries that are more AI-friendly will be more susceptible to the inevitable AI rot that will come as less human scrutiny and expertise are applied to these projects.
jokethrowaway · 1d ago
This issue has improved quite a bit over time, so I'm not sure it will be a problem.

I generate plenty of solid.js without any issues.

I'm also sure that in a couple of iterations, coding tools will have better RAG (or fine-tuning?) support for existing documentation and types.
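
The retrieval half doesn't even need much machinery. A toy sketch of the idea, using keyword overlap instead of embeddings so it stays dependency-free (the Solid snippets are just illustrative strings from memory):

    import re

    def words(text: str) -> set[str]:
        """Lowercase alphabetic tokens, good enough for a toy retriever."""
        return set(re.findall(r"[a-z]+", text.lower()))

    def retrieve(chunks: list[str], query: str, k: int = 2) -> list[str]:
        """Return the k doc chunks sharing the most words with the query."""
        return sorted(chunks, key=lambda c: len(words(c) & words(query)), reverse=True)[:k]

    docs = [
        "createSignal(value) returns a [getter, setter] pair for reactive state.",
        "createEffect(fn) reruns fn whenever the signals it reads change.",
        "For routing, use @solidjs/router and its Route component.",
    ]
    best = retrieve(docs, "how do I create reactive state with a signal")
    print(best)  # prepend these chunks to the model prompt like any other context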

Lionga · 1d ago
If AI cannot code in more complex tools/frameworks and only produces React slop, AI is already dead.