The more I use LLMs to code, the farther I stray from the joy of coding.
I won't regurgitate the common sentiment that it's like pair programming with a junior-engineer savant who has an oddly spiky skill profile. I find my emotions getting activated too often by having to recontextualize, rein in, re-orient, or repeat myself.
If I wrote code that doesn't work, that is on me. If I have to read code that doesn't work, I didn't write it, and its veracity or accuracy can only be uncovered by me, yet it is already being treated as gospel by my AI peer. Having the LLM keep asserting issues with "my" code, code I don't yet feel ownership over because I haven't fully ingested it, is supremely tiring. The whiplash feels almost like an abusive relationship.
I want to get back to feeling accomplished when I finish a coding session, not like I just went through a Kafkaesque wringer.
hn_throwaway_99 · 22h ago
> The more I use LLMs to code, the farther I stray from the joy of coding.
I agree. The way I put it is that it feels like programming has been turned into the job of an editor, when it used to be an author. Of course, editing was always a huge part of programming, probably the biggest part, but I always kinda felt like reviewing and editing was more the "brussel sprouts" part of the job and authoring the "ice cream". I was fine to eat my brussel sprouts if I got to have some scoops of ice cream sprinkled throughout, but now it feels like just endless plates of brussel sprouts.
rozap · 22h ago
I've been struggling with the same thing. It's the same reason I didn't really want to go on the management path because it loses the authoring part. But now it seems to be forced on us.
That being said, there are lots of brussel sprouts in authoring sometimes - the boilerplate and rote stuff we've done a billion times. You can make Claude eat those brussel sprouts, which is nice.
Also, very nice analogy, but don't besmirch the good brussel sprout name. I think in real life, cooked properly, I might even prefer them to ice cream.
OkayPhysicist · 21h ago
Tangential fun fact!
Good-tasting brussel sprouts are a very recent invention. They used to be a lot more bitter, but a new cultivar engineered in the '90s all but eliminated the bitter notes. Over the next 30 years pretty much everybody switched over, so brussel sprouts today are drastically better than they were when they earned their reputation for being gross.
quesera · 19h ago
Also, it's brussels sprouts. The name comes from the city of Brussels, BE.
worldsayshi · 22h ago
You can call it editor, or you can call it director/architect. Instead of tapping out characters, you spend more time making interesting, higher-level decisions.
kiitos · 20h ago
It turns out that the process of "tapping out characters" is (a) never any kind of bottleneck for meaningful product velocity in any kind of meaningful system; and (b) actually important for engineers to do in order to understand the system that they're expected to maintain.
If (a) isn't true for you, then you're operating in a pathological environment, context, organization, etc. which isn't representative of any kind of broader industry experience.
And (b) is usually clear to anyone who's, for example, taken a university course, and compared the efficacy of manual note-taking vs. (say) automatic transcriptions from lectures. The process of parsing the lecture through your brain and into notes that you write yourself, turns out to be essential to information retention and conceptual understanding.
worldsayshi · 18h ago
There's a big difference in criticality of different kinds of software. If the code you're writing will be 'the thing that runs the actual business logic in production' of course you should understand and own the code. And that code is very meaningful in the way you describe.
But so much of the software we need to solve various problems is not mission critical. Like tooling. Or any low-stakes software that can be easily replaced. Or some script that makes your life a little bit easier. Or maybe even a small GUI app that helps you compose some specific configuration for that other software. Tools where the output is easy to verify, can stand on its own, and can't be used by an attacker to exploit your systems.
If you put a lot of effort into tooling that you can throw away the moment better tooling appears you can make the mission critical software leaner.
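To make this concrete, here's the sort of throwaway, easy-to-verify script I have in mind. This is a minimal sketch with a made-up task; the point is that you can check the output by eye and delete the script without regret:

    # Throwaway helper: list the N largest files under a directory.
    # The output is trivially verifiable, and the script can be
    # discarded the moment a better tool comes along.
    import os
    import sys

    def largest_files(root, n=10):
        sizes = []
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    sizes.append((os.path.getsize(path), path))
                except OSError:
                    pass  # skip files that vanished or are unreadable
        return sorted(sizes, reverse=True)[:n]

    if __name__ == "__main__":
        root = sys.argv[1] if len(sys.argv) > 1 else "."
        for size, path in largest_files(root):
            print(f"{size:>12,}  {path}")

If an LLM writes something like this and the listing looks right, the tool has done its job; nothing here needs to be owned, hardened, or maintained.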
> actually important for engineers to do in order to understand the system
It is one tool for that, and an important one. But it is not the end-all tool for understanding or trusting code. If it were, you'd have to rewrite all the code you ever had to maintain. You rely on stacks of software all the time that you did not write.
AI indeed makes mistakes, but so do humans, so we have to validate the code we use with multiple such tools.
kiitos · 17h ago
In any kind of minimally effective engineering team, engineers will only ever be working with and maintaining code that's "mission critical" in the sense you're describing here; there won't be any meaningful amount of "non-mission-critical" code worth considering or optimizing.
This is in some sense a didactic assertion: if it's not the case, then your engineering team isn't providing any value beyond what a bash script gluing together JIRA tickets and GitHub PRs via the LLM-du-jour could do autonomously.
worldsayshi · 16h ago
I have never worked in an engineering organisation that lives up to the level of efficiency you describe. Once you scratch the surface there's always a mountain of things that would help people get more efficient but never get prioritized.
kiitos · 16h ago
And I've never worked in an engineering org that _didn't_ live up to the level of efficiency that I'm describing. I guess, seek better employment?
hn_throwaway_99 · 22h ago
I totally disagree with this, and that's because (like this blog post says) the output can't be trusted - you still need to review every line of generated code. The recent post I linked in another comment about the Cloudflare OAuth implementation had a catastrophic bug that would have been "game over" if the engineer wasn't such an expert in the field.
So it's like being an architect, but only if you have to triple-check every nail, bolt, and screw because otherwise the building will probably collapse.
I definitely hear you on this. I, like most people my age, came into coding when it was mostly high level (JavaScript, Python, etc.), yet I watch YouTube videos about things like NES programming in 6502 assembly. Part of me yearns for a time I didn't get to experience, when we were closer to the hardware. Screaming at an LLM for giving me outdated methods for a library I'm trying to integrate isn't part of my ideal workday.
So far, the AI revolution has only given me more work. People have come at me with applications that are 80% done (really more like 50%) that they "vibe coded" and need an actual programmer to finish. These apps would most likely just be a spark in someone's imagination pre-AI.
In a way, this is a positive. I can't say it's been more fun though.
rfitz · 22h ago
I can definitely relate to wanting to feel accomplished after a coding session and how AI can sort of strip you of that feeling at times. For me personally, I started to find the joy in the AI + coding relationship once I realized I just had to reframe my thinking a bit.
It's less about code generation for me and more about opportunities to learn new things. It could be as simple as having it optimize my code in a way I hadn't even thought of, or it could be me wanting to learn a new, complex architectural pattern I've wanted to deep-dive into but haven't had the time for. Now I can spin something up and have a base understanding of it in minutes. That's exciting to me more than anything else. In a way, it takes me back to much earlier in my career, when every day felt like I was learning something new and cool on the job from more senior devs. I think as you get more senior and experienced, those "cool" learning moments start to happen a little less, so having AI reignite that is exciting in a lot of ways.
datameta · 21h ago
I still experience what you outline for bootstrapping understanding, or bringing the knowledge horizon closer, so to speak. Part of the learning though is through the vigilance of parsing and correcting.
When I had a small passive circuit in my head that I wanted to one-shot solder on a protoboard and didn't want to get bogged down in KiCAD, talking through it with an AI and repeatedly correcting its understanding really solidified my own. It's like mentoring while being tutored.
So I still see value in use for smaller novel projects, but using LLMs to hit a deadline for production is not something I want to do any longer, for the time being.
ookblah · 20h ago
do you let it generate lots of code before review? i get the feeling you describe if i let it run wild and then have to sit there figuring out what it did, slogging through every line.
the happy medium for me is either giving it very defined small tasks that i can review (think small edits or refactors over a few files), or babysitting it and reviewing every change it's about to make. doesn't always pan out, but it's basically a super-powered auto-complete and i can "steer" it in the direction i want. i can validate chunks much faster and it's typically of good quality.
that or i make mental notes of what i want it to fix after the first pass and run it thru again with the edits. in that way it does feel more collaborative and less like reviewing some unknown piece of code from someone else.
JohnFen · 17h ago
Yes, it seems to decrease the fun/satisfying parts of software development and increase the unpleasant parts.
jes5199 · 20h ago
I lost the joy of coding years ago. Using an LLM is novel and interesting, in a way that programming hasn't been for over a decade.
hn_throwaway_99 · 22h ago
I don't really disagree with anything in this post, but the advice is so basic it could have easily been written by an LLM, and apologies for my jadedness but I've seen like a million other posts with the same advice. The advice is simply:
1. Write detailed prompts.
2. Don't trust generated code output - it must be thoroughly reviewed, and feedback should be given back to the LLM.
3. AIs are very helpful for those learning new architectural patterns.
4. Try different tools.
If you really want to learn about using AI to write code (including the pros and cons), I think this post, https://news.ycombinator.com/item?id=44159166, is excellent. Note the author of the Cloudflare OAuth component, kentonv, makes a lot of good, insightful comments in that thread IMO.
rfitz · 22h ago
No need to apologize for the jadedness at all! I think we're in an era of AI fatigue where every piece of content you see feels like it's AI generated (or assisted). Arguably, most points in this post can be looked at as common sense to some degree. What I was trying to get across was my mental models and view of AI in tech, since I still see a shocking number of devs who look at it simply as a means to generate code for them and nothing more.
I'm planning on writing more that dives a bit deeper into the experimentation I've done with tooling, MCP servers, etc., which may be more intriguing to those who have already dived into the AI side of things.
ausbah · 22h ago
it’s really hard to be consistently “excited” about this space when half of the AI companies are proudly proclaiming their commitment to try and automate your job (doubtful but the mantra is suffocating) and the other half are just about increasing your productivity (you get more done at the same pay). i guess some boilerplate is truly automated which is a mild qol improvement
worldsayshi · 22h ago
Often in the past I have felt that software engineering contains a lot of reinventing the wheel again and again in slightly different ways. In an organisation that wants to achieve anything you're often stuck collectively fantasising about all the things you could do "if only" A, B, C and D were solved problems within the internal IT landscape. My hope is that the increased productivity can eventually translate into actually doing those things.
We fret about AI adding a bunch of bad code but I see that there are clear methods for avoiding/mitigating this. (Just make sure to test and otherwise verify that the code does the right thing. Make it transparent and easy to replace etc etc.)
Sure, it can be used to add a bunch of technical debt, but wielded right it can just as well be used to cut down the debt.
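To sketch what "test and otherwise verify" can look like in practice: pin generated code down with a few characterization tests before relying on it. Everything below is a hypothetical example (the helper and its spec are made up), not a prescription:

    import re

    # Suppose the LLM produced this helper...
    def slugify(title):
        """Lowercase, drop punctuation, join words with hyphens."""
        words = re.findall(r"[a-z0-9]+", title.lower())
        return "-".join(words)

    # ...then a handful of assertions makes its intended behavior
    # explicit and catches regressions when the LLM later "improves" it.
    def test_slugify():
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  spaces   everywhere ") == "spaces-everywhere"
        assert slugify("Already-hyphenated Title") == "already-hyphenated-title"
        assert slugify("") == ""

    if __name__ == "__main__":
        test_slugify()
        print("ok")

The tests are cheap to write, and they're exactly the kind of transparency that makes the generated code easy to replace later.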
platevoltage · 20h ago
Yeah, this is why I don't give them any money. I use their models locally. I use ChatGPT without an account.
I don't remember the WYSIWYG editor companies bragging about eliminating jobs by making web development more accessible.
I don't remember No-Code platforms bragging about eliminating jobs by making it easier to build your own website for your business.
I don't remember Arduino bragging about eliminating jobs by making embedded programming more accessible.
I'm not worried too much about my "job" being eliminated, I just have a really hard time giving people money who want me to end up underneath an Oakland overpass in a tent.
rfitz · 22h ago
It's definitely easy to get caught up in that, and I think for a while I was probably in that headspace, but the excitement started to set in with the realization that it could be leveraged to reduce the less "fun" parts of my day so I could focus on what I enjoy most. It's not purely an "AI writes code for me" kind of hype (although that's a nice benefit). I spend a lot less time debugging tiny issues hidden in legacy code or sifting through poorly written docs for libs the codebase depends on, for example. That's a huge win, and it lets me focus my time on producing quality work.
dasil003 · 22h ago
I find it helpful to ignore the hype and the pitches, and instead focus on how AI tools are enabling better software to be made. I may personally prefer old workflows, but if they are inefficient and I cling to them, then my value as a software professional will trend to zero over time. On the other hand, the volume of software and its importance continue to increase, and AI is only accelerating that trend. So understanding how software works, and what AI can and can't do with it, has never been more valuable.
Sure, investors and CEOs want to reduce software engineering costs, but at the end of the day software is built to serve human needs, and only humans can reason and make a judgement call about whether software systems are working well or not. Because software is so precise and deterministic, there will always need to be someone who thinks like a programmer to tell the AI what to do with sufficient precision to be useful. I can imagine AGI invalidating that thinking at some point, but I believe we are very far from that point, if it's even possible, and even if we do reach it we'll need massive social change or the pitchforks will be coming out from many directions.
skydhash · 21h ago
Why are these arguments always the same?
> But with AI, I’m producing more than ever before, with a level of speed, quality, and understanding I’ve never experienced. In many ways, it’s 10x’d me as an engineer.
> The best way I can explain my mindset is that I see AI as a multiplier of myself, not just a tool that does work for me on command
> But more often than not, it’s a sign that the prompt wasn’t clear or specific enough. With better framing, you can guide the model toward a much more useful and accurate response
Etc…
I long for an essay like Seven habits of effective text editing[0] by Bram Moolenaar with proper arguments made about AI Coding.
[0]: https://www.moolenaar.net/habits.html