If people were as patient and inventive in teaching junior devs as they are with LLMs, the whole industry would be better off.
sorcerer-mar · 1h ago
You pay junior devs way way way more money for the privilege of them being bad.
And since they're human, the juniors themselves do not have the patience of an LLM.
I really would not want to be a junior dev right now... Very unfair and undesirable situation they've landed in.
fallinditch · 49m ago
Maybe it's the senior devs who should be the ones to worry?
Seniors on HN are often quick to dismiss AI-assisted coding as something that can't replace the hard-earned experience and skill they've built up during their careers. Well, maybe, maybe not. Senior devs can get a bit myopic in their specializations, whereas a junior dev doesn't have so much baggage; maybe the fertile brains of youth are better in times of rapid disruption, where extreme flexibility of thought is the killer skill.
Or maybe the whole senior/junior thing is a red herring and pure coding and tech skills are being deflated all across the board. Perhaps what is needed now is an entirely new skill set that we're only just starting to grasp.
tonyhart7 · 9m ago
we literally have many no-code solutions like WordPress etc
does webdev still exist??? yes it does
just because you can "create" something doesn't mean you're knowledgeable in that area
we literally have an entire industry built around fixing WordPress instances + code, so what else do we need to worry about
yakz · 30m ago
Senior devs provide better instructions to the agent, and can recognize more kinds of mistakes, more quickly. The feedback loop is more useful to someone with more experience.
I had a feeling today that I should really be managing multiple instances at once, because they’re currently so slow that there’s some “downtime”.
sorcerer-mar · 35m ago
Maybe! Probably not though.
bakugo · 13m ago
> Maybe it's the senior devs who should be the ones to worry?
Why would they be worried?
Who else is going to maintain the massive piles of badly designed vibe code being churned out at an increasingly alarming pace? The juniors prompting it certainly don't know what any of it does, and the AIs themselves have proven time and again to be incapable of performing basic maintenance on codebases above a very basic level of complexity.
As the ladder gets pulled up on new juniors, and the "fertile brains" of the few who do get a chance are wasted as they are actively encouraged to not learn anything and just let a computer algorithm do the thinking for them, ensuring they will never have a chance to become seniors themselves, who else will be left to fix the mess?
mentos · 1h ago
At least it’s easier to teach yourself anything now with an LLM? So maybe it balances out.
sorcerer-mar · 1h ago
I think it's actually even worse: it's easier to trick yourself into thinking you're teaching yourself anything.
Learning comes from grinding and LLMs are the ultimate anti-intellectual-grind machines. Which is great for when you're not trying to learn a skill!
andy99 · 27m ago
Even though I think most people know this deep down, I still don't think we actively realize how optimized LLMs are towards sounding good. It's the ultra-processed-food version of information consumption. People are super lazy (economical, if you like), and RLHF et al. have optimized LLM output to be easy to digest.
The consequence is you get a bunch of output that looks really good as long as you don't think about it (and they actively promote not thinking about it), that you don't really understand, and that, if you did dig into it, you'd realize is empty fluff or actively wrong.
It's worse than not learning; it's actively generating unthinking but palatable garbage that's the opposite of learning.
tnel77 · 39m ago
>>Learning comes from grinding
Says who? While “grinding” is one way to learn something, asking AI for a detailed explanation and actually consuming that knowledge with the intent to learn (rather than just copying and pasting) is another way.
Yes, you should be on guard since a lot of what it says can be false, but it’s still a great tool to help you learn something. It doesn’t completely replace technical blogs, books, and hard-earned experience, but let’s not pretend that LLMs, when used appropriately, don’t provide an educational benefit.
sorcerer-mar · 36m ago
Pretty much all education research ever points to the act of actually applying knowledge, especially against variable cases, as required to learn something.
There is no learning by consumption (unfortunately, given how we mostly attempt to "educate" our youth).
I didn't say they don't or can't provide an educational benefit.
jyounker · 1h ago
Yeah, you have to be really careful about how you use LLMs. I've been finding it very useful to use them as teachers, or to use them in the same way that I'd use a coworker. "What's the idiomatic way to write this Python comprehension in JavaScript?" Or, "Hey, do you remember what you call it when..." And when I request these things I'll try to ask in the most generic way possible, so that I then get to retype the relevant code, filling in the blanks with my own values.
That's just one use though. The other is treating it like it's a jr developer, which has its own shift in thinking. Practice in writing detailed specs goes a long way here.
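For instance, a one-off coworker-style query might look like this (a hypothetical sketch; it assumes the claude CLI's non-interactive -p print mode, and the example names are made up):
```
claude -p 'What is the idiomatic way to write this Python comprehension in JavaScript?
[x * x for x in nums if x % 2 == 0]'
# A typical answer: nums.filter(x => x % 2 === 0).map(x => x * x)
# Then retype it yourself, filling in your own names and values.
```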
sorcerer-mar · 1h ago
100% agreed.
> Practice in writing detailed specs goes a long way here.
This is an additional asymmetric advantage for more senior engineers as they use these tools.
yieldcrv · 41m ago
> I really would not want to be a junior dev right now... Very unfair and undesirable situation they've landed in.
I don't really get this; at the beginning of my career I masqueraded as a senior dev with experience as fast as I could, until it was laundered into actual experience.
Form the LLC, and working for it becomes your prior professional experience.
I felt I needed to do that, and that was way before generative AI, like at least a decade earlier.
drewlesueur · 1h ago
I think it would be great to be a junior dev now and be able to learn quickly with llms.
lelanthran · 16m ago
> I think it would be great to be a junior dev now and be able to learn quickly with llms.
I'm not so sure; I get great results (learning) with them because I can nitpick what they give me and attempt to explain how I understand it, and I pretty much always preface my prompts with "be critical and show me where I am wrong".
I've seen a junior use it to "learn", which was basically "How do I do $FOO in $LANGUAGE".
For that junior to turn into a senior who prompts the way I do, they need a critical view of their questions, not just answers.
qsort · 1h ago
The vilification of juniors and the abandonment of the idea that teaching and mentoring are worthwhile are single-handedly making me speedrun burnout. May a hundred years of Microsoft Visio befall anybody who thinks that way.
jayofdoom · 10m ago
I spent a lot of time in my career, honestly some of the most impactful stuff I've done, mentoring college students and junior developers. I think you are dead on about the skills being very similar. Being verbose, not making assumptions about existing context, and giving generalized warnings against pitfalls when doing the sort of thing you're asking it to do goes a long, long way.
Just make sure you talk to Claude in addition to the humans and not instead of.
handfuloflight · 1h ago
Took a junior dev under my wing recently, and the more time and energy and resources I spent on him, the more expectation and disrespect I was shown.
"Familiarity breeds contempt."
None of my instances with AI have shown me contempt.
noman-land · 45m ago
It sounds like this person doesn't deserve to be under your wing. Time to let him fly for himself, or crash.
QuercusMax · 53m ago
Damn, that sucks. My experience has been the exact opposite; maybe you need to adjust your approach and set expectations up-front, or get management involved? (I've had a similar experience to you with my teenage kids, but that's a whole other situation.)
My M.S. advisor gave me this advice on when I should ask for help, which I've passed on to lots of junior engineers: it's good to spend time struggling to understand something, and depending on the project it's probably good to exert yourself on your own for somewhere between an hour and a day. If you give up after 5 minutes, you won't learn, but if you spend a week with no progress, that's also not good.
godelski · 1h ago
A constant reminder: you can't have wizards without having noobs.
Every wizard was once a noob. No one is born that way; they were forged. It's in everybody's interest to train them. If they leave, you still benefit from the other companies who trained theirs, so the costs even out. Though if they leave, there are probably better ways to make them stay that you haven't considered (e.g. have you considered not paying new juniors more than a current junior who has been with the company for a few years? They should be able to get a pay bump without leaving).
lunarboy · 1h ago
I'm sure people (esp. engineers) know this. But imagine you're starting a company: would you try to deploy N agents (even if shitty), or take a financial/time/legal/social risk on a new hire? When you consider short-term costs, the math just never works out in favor of real humans.
geraneum · 59m ago
Well, in the beginning, the math doesn’t work out in favor of building the software (or the thing you want to sell) either.
QuercusMax · 51m ago
What about the financial / legal / social risk of your AI agent doing something bad? You're only looking at cost savings, without seeing the potentially major downsides.
shinycode · 24m ago
To follow up my previous comment: I worked on a project where someone fixed an old bug. The bug had become a feature for clients who built their systems around that API endpoint. The consequence was hundreds of thousands of user duplicates, with automations randomly attaching new resources and actions to the duplicates. Massive consequences for the customers. If it had been an AI doing the fixing with no human intervention, good luck understanding it, cleaning up the mess, and holding anyone accountable.
People seem to think lightly that if the agent does something bad, it’s just a risk to take. But when a codebase with massive amounts of LOC and logic is built and no human knows it, how do you deal with the consequences for people’s businesses? Can’t help but think it’s crappy software with a “Google closed your Gmail account, no one knows why, and we can’t do anything about it, sorry.” But instead of a mail account, it’s part of your business.
shinycode · 33m ago
I can’t stop thinking that this way of thinking either is plain wrong and completely misses what software development is really about, or is very true, and in X years people will just ask the trending AI “I need a billing/CRM/X system with those constraints.” Then the AI will ask questions and refine the need, work for 30 minutes (the time to pull in libs and code the whole thing), pass the result through systems that test and deploy it, and voila: a custom feature on demand. No CEO, no sales, nobody. You just deploy your own SaaS feature.
Then good luck scaling properly, migrating data, and adding features and complexity. If agents deliver on their promise, the future is custom-built: you deploy what you need, the SaaS platform is dead, and everyone in between is useless.
QuantumGood · 1h ago
I think too many see it more as "every stem cell has the potential to be any [something]", but it's generally better to let them self-differentiate until survivors with more potential emerge.
TuringNYC · 23m ago
>> A constant reminder: you can't have wizards without having noobs.
Try telling that to companies with quarterly earnings. Very few resist the urge to optimize for the short term.
On the other hand, every time people are just spinning off sub-agents I am reminded of this: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...
It's simultaneously the obvious next step and portends a potentially very dangerous future.
lubujackson · 37s ago
Am I the only one who saw in the prompt:
> ${SUGESTION}
And recognized it wouldn't do anything because of a typo? Alas, my kind is not long for this world...
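(For anyone who missed the joke: in a POSIX shell, an unset variable expands to the empty string, so the misspelled name silently drops the suggestion from the prompt. A minimal illustration, with hypothetical values:)
```
SUGGESTION="rename the helper for clarity"  # the variable the template meant (two Gs)
echo "Apply: ${SUGGESTION}"  # -> Apply: rename the helper for clarity
echo "Apply: ${SUGESTION}"   # the typo: unset, so it expands to nothing -> "Apply: "
# `set -u` (or writing ${SUGESTION:?}) would make the typo fail loudly instead.
```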
TeMPOraL · 2h ago
> It's simultaneously the obvious next step
As it already was over three years ago, when that was originally published.
I'm continuously surprised both by how fast the models themselves evolve, and how slow their use patterns are. We're still barely playing with the patterns that were obvious and thoroughly discussed back before GPT-4 was a thing.
Right now, the whole industry is obsessed with "agents", aka. giving LLMs function calls and limited control over the loop they're running under. How many years before the industry will get to the point of giving LLMs proper control over the top-level loop and managing the context, plus an ability to "shell out" to "subagents" as a matter of course?
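A crude, purely illustrative sketch of what handing over the top-level loop could look like from a terminal; the command name, flags, and protocol here are all assumptions, and eval-ing model output is only sane inside a sandbox:
```
# Hypothetical: the model picks the next action each turn; we only execute.
: > loop.log
while true; do
  next=$(claude -p "Session log so far:
$(cat loop.log)
Reply with the single next shell command to run, or DONE if finished.")
  [ "$next" = "DONE" ] && break
  echo "\$ $next" >> loop.log
  eval "$next" >> loop.log 2>&1  # sandbox this in any real use
done
```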
qsort · 1h ago
> How many years before the industry will get to the point
When/if the underlying model gets good enough to support that pattern. As an extreme example, you aren't ever going to make even a basic agent with GPT-3 as the base model, the juice isn't worth the squeeze.
Models have gotten way better, and I'm now convinced (new data -> new opinion) that they are a major win for coding, but they still need a lot, a lot of handholding; left to their own devices they just make a mess.
The underlying capabilities of the model are the entire ballgame; the "use patterns" aren't exactly rocket science.
benlivengood · 1h ago
We haven't hit the RSI threshold yet, so evolution is slow enough that a run is usually terminated as not-useful, or it solves a concrete problem and is terminated by itself or a human. Earlier model+framework combinations merely petered out almost immediately. I'm guessing it's roughly correlated with progress on METR.
jasonthorsness · 1h ago
The terminal really is sort of the perfect interface for an LLM; I wonder whether this approach will become favored over the custom IDE integrations.
drcode · 1h ago
Sort of, except I think the future of LLMs will be to have the LLM make 5 separate attempts at a fix in parallel, since LLM time is cheaper than human time... and once you introduce this aspect into the workflow, you'll want to spin up multiple containers, and the benefits of the terminal aren't as strong anymore.
jyounker · 1h ago
Having command line tools to spin up multiple containers and then to collect their results seems like it would be a pretty natural fit.
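Something like this hedged sketch, say; the image name, the claude -p print mode, the spec file, and the success marker are all assumptions:
```
# Hypothetical: five independent attempts in throwaway containers, compared afterwards.
for i in 1 2 3 4 5; do
  docker run --rm -v "$PWD:/work" -w /work my-claude-image \
    claude -p "$(cat fix_spec.md)" > "attempt_$i.log" 2>&1 &
done
wait
grep -l "tests passed" attempt_*.log  # shortlist the attempts worth a human review
```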
bionhoward · 1h ago
Assuming attention to detail is one of the best signs people give a fuck about craftsmanship, isn’t the fact that Anthropic’s legal terms are logically impossible to satisfy a bad sign for their ability to be trusted as careful stewards of ASI?
Not exactly “three laws safe” if we can’t use the thing for work without violating their competitive-use prohibition.
SamPatt · 2h ago
>Claude code feels more powerful than cursor, but why? One of the reasons seems to be its ability to be scripted. At the end of the day, cursor is an editor, while claude code is a swiss army knife (on steroids).
Agreed, and I find that I use Claude Code on more than traditional code bases. I run it in my Obsidian vault for all kinds of things. I run it to build local custom keyboard bindings with scripts that publish screenshots to my CDN and give me a markdown link, or to build a program that talks to Ollama to summarize my terminal commands for the last day.
I remember the old days of needing to figure out whether the formatting changes I wanted to make to a file were worth writing a script for or should just be done manually - now I just run Claude in the directory and have it done for me. It's useful for so many things.
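That last one is easy to sketch against Ollama's local REST API (/api/generate is its standard endpoint; the model name and history path are assumptions):
```
# Hedged sketch: summarize recent shell commands with a local Ollama model.
tail -n 200 ~/.bash_history \
  | jq -Rs '{model: "llama3", stream: false,
             prompt: ("Summarize what I worked on, given these shell commands:\n" + .)}' \
  | curl -s http://localhost:11434/api/generate -d @- \
  | jq -r .response
```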
Aeolun · 1h ago
The thing is, Claude Code only works if you have the plan. It’s impossible to use it on the API, and it makes me wonder if $100/month is truly enough. I use it all day every day now, and I must be consuming a whole lot more than my $100 is worth.
CGamesPlay · 19m ago
You use it "all day every day", so it makes sense that you would prefer the plan. It's perfectly economical to use it without a plan, if your usage patterns are different. Here's a tool someone else wrote that can help you decide: https://github.com/ryoppippi/ccusage
sorcerer-mar · 1h ago
> It’s impossible to use it on the API
What does this mean?
oxidant · 47m ago
Not OP but probably just cost.
practal · 1h ago
I think it is available on Claude Pro now, so just $20.
ggsp · 1h ago
You can definitely use Claude Code via the API
lawrencechen · 55m ago
I think he means it's not economically sound to use it via API
jjice · 1h ago
I'm very interested to hear what your use cases are when using it in your Obsidian vault.
tinyhouse · 1h ago
This article is a bit all over the place. First, a slide deck to describe a codebase is not that useful. There's a reason why no one ever uses a slide deck for anything besides supporting an oral presentation.
Most of these things in the post aren't new capabilities. The automation of workflows is indeed valuable and cool. Not sure what AGI has to do with it.
bravesoul2 · 1h ago
Also I don't trust it. They touched on that I think (I only skimmed).
Plus you shouldn't need an LLM to understand a codebase. Just make it more understandable! Of course capital likes shortcuts and hacks to get the next feature out in Q3.
groby_b · 1m ago
> Plus you shouldn't need an LLM to understand a codebase. Just make it more understandable!
<laughs in legacy code>
And fundamentally, that isn't a function of "capital". All code bases are shaped by the implicit assumptions of their writers. If there's a fundamental mismatch or gap between reader and writer assumptions, it won't be readable.
LLMs are a way to make (some of) these implicit assumptions more legible. They're not a panacea, but the idea of "just make it more understandable" is not viable. It's on par with "you don't need debuggers, just don't write bugs".
imiric · 1h ago
> Plus you shouldn't need an LLM to understand a codebase. Just make it more understandable!
The kind of person who prefers this setup wants to read (and write) the least amount of code on their own. So their ideal workflow is one where they get to make programs through natural language. Making codebases understandable for this group is mostly a waste of effort.
It's a wild twist of fate that programming languages were intended to make programming friendly to humans, and now humans don't want to read them at all. Code is becoming just an intermediary artifact useless to machines, which can instead write machine code directly.
I wish someone could put this genie back in the bottle.
DougMerritt · 36m ago
> It's a wild twist of fate that programming languages were intended to make programming friendly to humans, and now humans don't want to read them at all.
Those are two different groups of humans, as you implied yourself.
lelandbatey · 1h ago
There is no amount of static material that will perfectly conform to the shape and contours of every mind that consumes that static material such that they can learn what they want to learn when they want to learn it.
Having a thing that is interactive and which can answer questions is a very useful thing. A slide deck that sits around for the next person is probably not that great, I agree. But if you desperately want a slide deck, then an agent like Claude which can create it on demand is pretty good. If you want summaries of changes over time, or to know "what's the overall approach, at a jargon-filled but still overview level, for how feature/behavior X is implemented?", an agent can generate a mediocre (but probably serviceable) answer to any of those by reading the repo. That's an amazing swiss-army knife to have in your pocket.
I really used to be a hater, and I really did not trust it, but just using the thing has left me unable to deny its utility.
Uehreka · 58m ago
> Not sure what AGI has to do with it.
Judging from the tone of the article, they’re using the term AGI in a jokey way and not taking themselves too seriously, which is refreshing.
I mean like, it wouldn’t be refreshing if the article didn’t also have useful information, but I do actually think a slide deck could be a useful way to understand a codebase. It’s exactly the kind of nice-to-have that I’d never want a junior wasting time on, but if it costs like $5 and gets me something minorly useful, that’s pretty cool.
Part of the mind-expanding transition to using LLMs involves recognizing that there are some things we used to dislike because of how much effort they took relative to their worth. But if you don’t need to do the thing yourself or burn through a team member’s time/sanity doing it, it can make you start to go “yeah fuck it, trawl the codebase and try to write a markdown document describing all of the features and requirements in a tabular format. Maybe it’ll go better than I expect, and if it doesn’t then on to something else.”
intralogic · 1h ago
How can I make this page easier to read in chrome? The gray-on-gray doesn't have enough contrast for me to read easily.
CGamesPlay · 14m ago
In general, "reader mode". I don't use Chrome but Google suggests that it's in a menu <https://support.google.com/chrome/answer/14218344?hl=en>. Many Chrome-alikes provide it built-in (Brave calls it Speedreader), and many extensions can add it for you (Readability was the OG one).
abhisheksp1993 · 1h ago
```
claude --dangerously-skip-permissions # science mode
```
This made me chuckle