Wow, that article is impressively devoid of real content.
dcminter · 9h ago
I did wonder if it was AI-written or possibly AI "assisted", as it doesn't really say anything. It reads as if someone thought up the title and then asked ChatGPT to fill in the rest.
Edit: Oh, and now the submission is flagged. Fairly IMO. There's an interesting post to be had here, but this wasn't it.
raincole · 9h ago
As if the title and the AI cover image haven't signaled it :)
hk1337 · 9h ago
Like most documentation.
rzk · 9h ago
Documentation is more like a pizza baking manual, so that if a new chef takes over, they can still make the same pizza.
WillAdams · 9h ago
Recording this sort of institutional knowledge is why I find it invaluable to write my code as a Literate Program, so that specific problems are documented: https://literateprogramming.com/
That's good, but you still need documentation of available methods and how to use them. That it's literate just makes it that much easier to connect what you read in the documentation with the code.
PaulHoule · 9h ago
The site's SSL doesn't work with Firefox or Chrome.
One could argue that no literate programming system has had more than one user. Knuth's WEB and CWEB never really caught on.
That's one kind of documentation. Checklists and runbooks, for instance, are recipes. Other documentation describes APIs systematically (Javadoc), while still other documentation describes architecture and broad concepts.
echelon_musk · 9h ago
A ... recipe?
blueflow · 9h ago
Exactly. It's knowledge transfer from the previous generation to the next. No knowledge transfer, no sustainable progress.
ChrisMarshallNY · 9h ago
This may be something that AI can be helpful with. We'll see.
For myself, I tend to keep inline documentation to a minimum, maybe only adding a note as to why a certain line might be there (as opposed to what it does).
I do make sure to always provide entrypoint and property descriptions, headerdoc-style.
1. You have to maintain both documentation and code. If you change code and forget to update documentation it can be very confusing and cost a lot of time.
2. Proper code should explain itself (to some extent).
3. Taking a lot of time to write proper documentation is rarely appreciated in a world where long term strategic thinking has no place anymore.
4. It's harder to fire you when you're the only guy who knows all the stuff.
dcminter · 9h ago
With respect to (1), I'd love to see more tooling like Rust's documentation tests, where broken examples in the documentation can fail the build. It can't force the lazy to make good docs, but it can make the well-intentioned aware of drift between the documentation and the code.
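For readers who haven't seen them, a minimal sketch of a Rust doctest looks like this (the function and the `my_crate` path are made-up examples, not from the thread). The example in the doc comment is extracted, compiled, and run by `cargo test`, so a stale example breaks the build:

```rust
/// Adds two integers.
///
/// The indented example below is compiled and run by `cargo test`,
/// so if this snippet stops matching the real API, the build fails.
/// (`my_crate` is a placeholder for the real crate name.)
///
///     assert_eq!(my_crate::add(2, 3), 5);
pub fn add(a: i64, b: i64) -> i64 {
    a + b
}
```

Running `cargo test` executes both unit tests and these doc examples, which is what keeps documentation and code from drifting apart.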
MOARDONGZPLZ · 9h ago
To be fair, the AI that wrote it has no hands-on experience with documentation, so it's natural that it would miss some of these practical points.
yaseer · 9h ago
I've found writing docs and updating docs a great AI use-case.
In my experience documentation generation has a lower error rate than code generation, and the costs of errors are lower too.
I'm not really a big fan of AI agents writing features end-to-end, but I can definitely see them updating documentation alongside pull requests.
throwawayffffas · 9h ago
While I agree to an extent, I think it's not ideal. The point of documentation, in my opinion, is to explain intent. If you want to figure out the functionality of something, the code is just as good as documentation, arguably better.
AI, because it by default only sees the code, generally describes the functionality, not the intent behind the code.
9rx · 8h ago
> The point of documentation in my opinion is to explain intent.
Of course, that's what your tests are for: To document your intent, while providing a mechanism by which to warn future developers if your intent is ever violated as the codebase changes. So the information is there. It's just a question of which language you want to read it in.
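As a sketch of tests doubling as intent documentation (the discount rule and `discounted_total` function here are hypothetical, invented for illustration): the test names and assertion messages state the business rule, not just the mechanics, so a future change that violates the intent produces a self-explanatory failure.

```rust
// Hypothetical business rule: orders over $100 (10_000 cents) get 10% off.
fn discounted_total(total_cents: u64) -> u64 {
    if total_cents > 10_000 {
        total_cents - total_cents / 10
    } else {
        total_cents
    }
}

#[test]
fn orders_over_one_hundred_dollars_get_ten_percent_off() {
    // The name and message document the intent; the assertion enforces it.
    assert_eq!(
        discounted_total(20_000),
        18_000,
        "intent: large orders are rewarded with a 10% discount"
    );
}

#[test]
fn small_orders_pay_full_price() {
    assert_eq!(discounted_total(5_000), 5_000);
}
```

Read this way, the test suite is the "which language do you want to read it in" alternative to prose docs.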
"Updating docs" seems pointless, though. LLMs can translate in realtime, and presumably LLMs will get better at it with time, so caching the results of older models is not particularly desirable.
chasd00 · 9h ago
This is one area where I think an LLM can really help. It's not going to produce perfect documentation, but it's so much more productive to edit/update docs than to create docs from scratch. Staring at a blank screen and getting started on docs is the hardest part, in my experience.
Documentation is needed in a project, and the lack of it makes the project worse - it's literally the opposite of pineapple on pizza.
bitsandboots · 7h ago
Over time I went from 0 docs and 0 automation to putting a lot of thought into both. Projects become a bit of a circus to maintain, nobody can help you out if nothing is documented, and good luck when you forget.
Devs aren't the only problem here. In the few large companies I've been in, the assigned doc writers haven't been a net positive. It always takes me so much effort to walk them through what to write about, and how it should be written to match how the users actually read and understand content, that I end up writing it myself during such meetings. It's a bit of a living rubber-duck exercise at times. I've grown to be a highly paid doc writer who writes code too.
alganet · 9h ago
Documentation and automated tests belong together.
It makes tests better. Instead of a shady snippet of code that just passes an assertion, it should generate human-readable examples, with additional prose included by the developer for special cases.
It makes docs easier to maintain. You probably already need to find the test for the code you changed. If the docs are really close, it's easier to maintain them.
There are many ways of achieving this. I particularly like literate programming, just for the test suite. You can code whatever way you like, but the tests must be in a literate form.
I also like the idea of documentation that can fail a build. If you commit a bad example snippet in a markdown file somewhere, the CI should fail. This can already be done with clitest, for example (scaling it culturally is a bit hard, though).
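A low-tech sketch of the same idea (this is not clitest itself; the `greet` function, the docs file, and the promised output line are all hypothetical): keep the exact output a doc example promises next to a test that re-runs the code, so any drift fails the build. In a real setup the expected string would be extracted from the markdown file rather than hardcoded.

```rust
fn greet(name: &str) -> String {
    format!("hello, {name}")
}

// The exact output promised in docs/usage.md (hypothetical file), e.g.:
//   greet("world") -> hello, world
const DOCS_EXAMPLE_OUTPUT: &str = "hello, world";

#[test]
fn docs_example_still_matches_reality() {
    // If greet() ever changes, this fails and reminds us to update the docs.
    assert_eq!(greet("world"), DOCS_EXAMPLE_OUTPUT, "docs example drifted");
}
```

Tools like clitest or Rust's doctests automate the extraction step, but the principle is the same: the example in the docs is executed, not just trusted.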
In a way, xUnit-like tools and spec frameworks already point in that direction (DSLs that embrace human language, messages in assertions, etc.). They're already documentation, and developers already use test suites for "knowing how something works" very often. We just need to stop writing it twice (once in the tests, once in prose) and find a common ground that can serve both purposes.
I mean this for API docs mainly, but for other stuff as well.
> One could argue that no literate programming system has had more than one user. Knuth's WEB and CWEB never really caught on.
Well, I worked up https://github.com/WillAdams/gcodepreview/blob/main/literati... for my current project (and will use it going forward for any new ones), and https://github.com/topics/literate-programming has 443 projects...
Here's my own take on the topic: https://littlegreenviper.com/leaving-a-legacy/
And before someone links Yet Another Docs Framework, I recommend taking a different approach: https://passo.uno/beyond-content-types-presentation/