AI must RTFM: Why tech writers are becoming context curators

96 points by theletterf | 42 comments | 8/8/2025, 3:04:40 PM | passo.uno ↗

Comments (42)

actuallyalys · 1h ago
> At this point, most tech writing shops are serving llms.txt files and LLM-optimized Markdown

I find this hard to believe. I’m not sure I’ve ever seen llms.txt in the wild and in general I don’t think most tech writing shops are that much on the cutting edge.

I have seen more companies add options to search their docs via some sort of AI, but I doubt that’s a majority yet.
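For reference, llms.txt is a proposed convention (llmstxt.org): a Markdown file served at the site root that gives LLMs a curated map of the docs, rather than making them crawl HTML. A minimal, hypothetical example (all URLs and names made up):

```markdown
# Example Project

> Example Project is a widget-processing library. These docs cover
> installation, the Python API, and deployment.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): install and first steps
- [API reference](https://example.com/docs/api.md): full function reference

## Optional

- [Changelog](https://example.com/changelog.md)
```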

ryandv · 12m ago
At this point I can't tell if all these blog posts hyping LLMs are themselves written by LLMs, and thus hallucinating alleged productivity boosts and "next-generation development practices" that are nowhere to actually be found in reality.

Shame, because it's a bunch of nice-looking words, but that doesn't matter if they're completely false.

shortrounddev2 · 16m ago
I'm not sure why any publisher would go out of their way to make it easier for an LLM to read their site. If it were possible I'd block them entirely
theletterf · 1h ago
No, not a majority yet. Forgot to add "bleeding edge" :)
tokyolights2 · 1h ago
Tangentially related: for those of you using AI tools more than I am, how do LLMs handle things like API updates? I assume the Python2/3 transition was far enough in the past that there aren't too many issues. How about other libraries that have received major updates in the last year?

Maybe a secret positive outcome of using automation to write code is that library maintainers have a new pressure to stop releasing totally incompatible versions every few years (looking at Angular, React...)

mrbungie · 34m ago
Horribly. In my experience, when dealing with "unstable" or rapidly evolving APIs/designs, like IaC with OpenTofu, you need MCP connected to the tf provider documentation (or just example/markdown files, whichever you like most) for LLMs to actually work correctly.
gopalv · 37m ago
> for those of you using AI tools more than I am, how do LLMs handle things like API updates?

From recent experience, 95% of changes are good and are done in 15 minutes.

5% of changes are made, but break things because the API might have documentation, but your code probably doesn't document "Why I use this here" and instead has "What I do here" in bits.

In hindsight it was an overall positive experience, but if you'd asked me at the end of the first day, I'd have been very annoyed.

If I'd been asked to estimate, I'd have said this would take me Mon-Fri, but it took me till Wed afternoon.

Half a day in I thought I was 95% done, but it then took 2+ more days to close that last 5% of hidden issues.

And that's only because the test suite was catching enough classes of issues that I could go find them everywhere.

kaycebasques · 43m ago
> how do LLMs handle things like API updates?

Quite badly. Can't tell you how many times an LLM has suggested WORKSPACE solutions to my Bazel problems, even when I explicitly tell them that I'm using Bzlmod.
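For context, WORKSPACE and Bzlmod are Bazel's old and new external-dependency mechanisms, and the same dependency looks completely different under each, which makes stale training data easy to spot. A hypothetical side-by-side (URL, hash, and version are placeholders):

```starlark
# Old WORKSPACE style (what LLMs keep suggesting):
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
    name = "rules_python",
    urls = ["https://example.com/rules_python.tar.gz"],  # placeholder URL
    sha256 = "...",
)

# Bzlmod style, in MODULE.bazel (what a Bzlmod user actually wants):
bazel_dep(name = "rules_python", version = "0.31.0")
```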

whynotmaybe · 47m ago
With Dart/Flutter, it's often recommending deprecated code and practices.

Deprecated code is quickly flagged by VS Code (like Text.textScaleFactor), but not the new way of separating items in a Column/Row with the spacing parameter (instead of manually adding a SizedBox between items).

Coding with an LLM is like coding with a Senior Dev who doesn't follow the latest trends. It works, has insights and experience that you don't always have, but sometimes it might code a full quicksort instead of just calling list.sort().

aydyn · 1h ago
If you think the correct API is not going to be in its weights (or if there are different versions in current use), you ask nicely for it to "please look at the latest API documentation before answering".

Sometimes it ignores you but it works more often than not.

righthand · 2h ago
I think I found where efficiency is being lost.
theletterf · 2h ago
Could you elaborate?
righthand · 2h ago
Well you’ve noticed a trend:

> I’ve been noticing a trend among developers that use AI: they are increasingly writing and structuring docs in context folders so that the AI powered tools they use can build solutions autonomously and with greater accuracy

To me this means a lot of engineers are spending time maintaining files that help them automate a few things for their job. But sinking all that time into context for an LLM is most likely going to net you efficiency gains only for the projects that the context was originally written for. Other projects might benefit from smaller parts of these files, but if engineers are really doing this then there probably is some efficiency lost in the creation and management of it all.

If I had to guess, contrary to your post, devs aren't RTFM-ing, but instead asking an LLM or web search what a good rule/context/limitation would be and pasting it into a file. In which case the use of LLMs is a complexity shift.

theletterf · 2h ago
I think some are doing just that, yes, which I guess would only increase entropy. So it's not just curation, but also design through words. Good old specs, if you want.
righthand · 2h ago
Sure, but then the specs are informal and unadaptable. A specification usually follows some sort of guiding format and organized thought, abstracted from the whims of the author to make the spec clear. Context files are what? Dev notes and a cheat sheet?
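In practice they tend to be exactly that. A typical context file (a hypothetical CLAUDE.md-style sketch; every detail below is made up) mixes conventions, commands, and warnings rather than following any spec format:

```markdown
# Project context

## Conventions
- Python 3.12, type hints required, format with ruff.
- Never edit files under `generated/`.

## Commands
- Run tests: `make test`
- Local dev server: `make dev`

## Gotchas
- The billing module still uses the v1 API; do not "upgrade" it.
```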
theletterf · 2h ago
I'm hoping they'll be the docs themselves, written before the project is out.
fmbb · 1h ago
We already have README, and API specs, and Jira, and Confluence, and RFCs are freely available.

Why do these "agents" need so much hand holding?

groestl · 58m ago
Because they are like very eager Junior devs.
righthand · 2h ago
It’s a great thought and this track has led me to wonder why no one is trying to build applications from BDD/gherkin statements.

It’s hard for me to believe that people are writing more technical documentation, and understanding more, when they want to use the LLM to bypass that. Maybe a handful of disciplined engineers here and there, but when the trend is largely the opposite, the academic approach tends to lose out.

MoreQARespect · 1h ago
Gherkin was always a pretty bad format that made it difficult to write terse, clear spectests. Inevitably the people who used it made horribly repetitive or horribly vague gherkin files that were equally bad for BDD and testing.

This is an artefact of the language which the creators are in total denial about.

There are better languages for writing executable user stories but none very popular.
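For readers who haven't seen the format, a Gherkin scenario looks like this (a made-up example); the Given/When/Then scaffolding is exactly the part that tends to turn either repetitive or vague at scale:

```gherkin
Feature: Stock status lookup
  Scenario: Product is in stock at a store
    Given the "San Francisco" store stocks "iPhone 16e 256GB White"
    When I request the stock status of "iPhone 16e 256GB White" in "San Francisco"
    Then I am told the product is in stock
```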

bluGill · 1h ago
BDD/Gherkin are sometimes useful, but they are not a great format to capture all the complexity of the problem.
righthand · 1h ago
People don’t want to capture complexity of a problem, they want to play with word legos to output a prefabbed product. The simpler the interface the more likely that interface will win regardless how much nuance and expressiveness is allowed with a full language system.
coffeecoders · 31m ago
To put it bluntly, the current state of AI often comes down to this: describing a problem in plain English (or your local language) vs writing code.

Say, “Give me the stock status of an iPhone 16e 256GB White in San Francisco.”

I still have to provide the API details somewhere — whether it’s via an agent framework (e.g. LangChain) or a custom function making REST calls.

The LLM’s real job in this flow is mostly translating your natural language request into structured parameters and summarizing the API’s response back into something human-readable.
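The flow described above can be sketched without any real LLM or inventory API: the model's contribution reduces to a parameter-extraction step and a summarization step, both stubbed out here. All names (StockQuery, check_stock, etc.) are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class StockQuery:
    """Structured parameters an LLM would extract from the user's request."""
    product: str
    storage: str
    color: str
    city: str

def extract_params(utterance: str) -> StockQuery:
    # Stand-in for the LLM's natural-language-to-parameters step.
    # A real system would use a function-calling / tool-use API here.
    return StockQuery(product="iPhone 16e", storage="256GB",
                      color="White", city="San Francisco")

def check_stock(q: StockQuery) -> dict:
    # Stand-in for the REST call the developer still has to wire up themselves.
    return {"sku": f"{q.product} {q.storage} {q.color}",
            "in_stock": True, "store": q.city}

def summarize(resp: dict) -> str:
    # Stand-in for the LLM's response-to-prose step.
    status = "in stock" if resp["in_stock"] else "out of stock"
    return f'The {resp["sku"]} is {status} at the {resp["store"]} store.'

query = extract_params(
    "Give me the stock status of an iPhone 16e 256GB White in San Francisco.")
print(summarize(check_stock(query)))
```

The two stubbed functions are the whole of the LLM's role in the pipeline; everything else is ordinary glue code the developer still owns.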

devmor · 1h ago
I think something people really misunderstand about these tools is that for them to be useful outside of very general, basic contexts, you have to already know the problem you want to solve, and the gist of how to solve it - and then you have to provide that as context to the LLM.

That's what the point of these text documents is, and that's why it doesn't actually produce an efficiency gain the majority of the time.

A programmer who expects the LLM to solve an engineering problem is rolling the dice and hoping. A programmer who has solved an engineering problem and expects the implementation from the LLM will usually get something close to what they want. Will it be faster than doing it yourself? Maybe. Is it worth the cost of the LLM? Probably not.

The wild estimates and hype about AI-assisted programming paradigms come from people winning the dice roll on the former case and thinking that result is not only consistent, but also the same for the latter case.

quantdev1 · 20m ago
> I think something people really misunderstand about these tools is that for them to be useful outside of very general, basic contexts, you have to already know the problem you want to solve, and the gist of how to solve it - and then you have to provide that as context to the LLM.

Politely need to disagree with this.

Quick example. I'm wrapping up a project where I built an options back-tester from scratch.

The thing is, before starting this, I had zero experience or knowledge with:

1. Python (knew it was a language, but that's it)

2. Financial microstructure (couldn't have told you what an option was - let alone puts/calls/greeks/etc)

3. Docker, PostgreSQL, git, etc.

4. Cursor/IDE/CLIs

5. SWE principles/practices

This project used or touched every single one of these.

There were countless situations (the majority?) where I didn't know how to define the problem or how to articulate the solution.

It came down to interrogating AI at multiple levels (using multiple models at times).

devmor · 4m ago
I should have specified that I am referring to their usage for experienced developers working on established projects.

I think that they have much more use for someone with no/little experience just trying to get proof of concepts/quick projects done because accuracy and adherence to standards don't really matter there.

(That being said, if Google were still as useful of a tool as it was in its prime, I think you'd have just as much success by searching for your questions and finding the answers on forums, stackexchange, etc.)

bgwalter · 2h ago
They add claude.md files because they are forced by their employers. They could have done that years ago for humans.

I also see it in mostly spaghetti code bases, not in great code bases where no one uses "AI".

theletterf · 2h ago
Author here: If the LLM revolution helps us get more accessible and better docs, I, for one, welcome it.

Edit: I guess some commenters misunderstood my message. I'm saying that by also serving the needs of LLMs, we might get more resources to improve docs overall.

jacobsenscott · 2h ago
We've seen this story many times before. Remember UML and Rational Rose, waterfall, big up front design, low/no code services etc. In every case the premise is "pay a few 'big brains' to write the requirements in some 'higher level language' and tools/low skill and low pay engineers can automatically generate the code."

It turns out, to describe a system in enough detail to implement the system, you need to use a programming language. For the system to be performant and secure you need well educated high skill engineers. LLMs aren't going to change that.

Anyway, this is tacitly declaring LLM bankruptcy - the LLM can't understand what to do by reading the most precise specification, the code, so we're going to provide less specific instructions and it will do better?

theletterf · 1h ago
Oh, I remember. I'm an OpenAPI fan, for example. This time it feels a bit different, though. It won't erase the requirement of having high-skill engineers, nor the need to have tech writers create docs, but it might indeed help us see docs in a different light: not as a post-facto, often neglected artifact, but as a design document. I'm talking about actual user docs here, not PRFAQs or specs.
eddythompson80 · 1h ago
It always feels a bit different, doesn't it. This time it's different, though. Maybe. No, definitely. We'll see. We're waiting for AGI anyway, right?
theletterf · 1h ago
One can only hope. :)
esafak · 1h ago
The difference is that LLMs are more capable than the average coder, and rapidly improving. If you give a seasoned developer a loose spec, (s)he'll deal with the ambiguity.
jeremyjh · 1h ago
Right, because LLMs cannot learn from their prior work in a project, nor can they read a large code base for every prompt. So it can be helpful to provide them with documents describing the project and the code base layout right in their context, but you can also generate a lot of that and just fix it up.
jrvieira · 1h ago
hell, semantic web even. we wouldn't need AI
zer00eyz · 58m ago
> It turns out, to describe a system in enough detail to implement the system, you need to use a programming language.

Go back to design patterns. Not the Gang of Four, rather the book where the name and concept was lifted from.

What you will find is that implementations are impacted by factors that are not always intuitive without ancillary information.

It's clear when there is a cowpath through a campus and the need for a sidewalk becomes apparent. It's not so clear when that happens in code, because it often isn't linear. That's why documentation is essential.

"Agile" has made this worse, because the why is often lost, meetings or offline chats lead to tickets with the what and not the why. It's great that those breadcrumbs are linked through commits but the devil is in the details. Even when all the connections exist you often have to chase them through layers of systems and pry them out of people ... emails, old slack messages, paper notes, a photo of a white board.

9rx · 2h ago
> more accessible and better docs

Easy now. You might be skilled in documentation, but most developers write docs like they write code. For the most part, all you are going to get is the program written twice: once in natural language and once in a programming language. In which case, you could have simply read the code in the first place (or had an LLM explain the code in natural language, if you can't read code for some reason).

StableAlkyne · 2h ago
> you are going to get the program written twice, once in natural language and once in a programming language.

How is this a bad thing? Personally, I'm not superhuman and more readily understand natural language.

If I have the choice between documentation explaining what a function does and reading the code, I'm going to just read the docs every time. Unless I have reason to think something is screwy or having an intricate understanding is critical to my job.

If you get paid by the hour then go for it, but I don't have time to go line by line through library code.

9rx · 2h ago
> How is this a bad thing?

It is not good or bad. I don't understand your question.

> If I have the choice between documentation explaining what a function does and reading the code, I'm going to just read the docs every time.

The apparent "choice", given the context under which the discussion is taking place, is writing your program in natural language and have an LLM transcribe it to a programming language, or writing your program in a programming language and having an LLM transcribe it to natural language.

StableAlkyne · 2h ago
Ah, my apologies then. I misread the post as an argument against generated documentation.
bgwalter · 2h ago
It does not give us better docs. The claude.md files are a performative action for the employer. No one will write APUE-level documentation, because everything in those code bases is rewritten constantly anyway, and no one has the skills, or is rewarded, to write like that any more.
theletterf · 2h ago
That's why I feel tech writers need to step in and help.