> Regardless, the lesson for people like myself is that, in order to feel happy with creating, we have to actually create. An artist would not call their work art if they had little to no role in creating it.
Thanks. The author touched on something there, close to a truth (or a deep belief I hold?) about our lives: something about the journey mattering more than the destination...
ryanrasti · 23m ago
Yeah that captures what I've been feeling: our work is changing from being craftsmen to managers.
Engineering used to be my go-to way to enter a flow state. Now I spend a few minutes thinking about what I want and then a lot of time babysitting Claude Code -- similar to the experiences described here.
Has anyone found a way to make the "manager" part feel as engaging and creative as the "craftsman" part used to?
NathanKP · 8h ago
I think the author's definition of "creating" is just too narrow. A gardener can get tremendous satisfaction from watching their plants grow from the bed of soil that they prepared, even if there is not as much weeding or watering to do later on in the growth cycle. A parent can get tremendous satisfaction from watching their child continue to grow and develop, even after the child is no longer an infant who requires constant care and attention.
In my opinion, having spent about a year and a half working on various coding projects using AI, there are phases to the AI coding lifecycle.
1) Coding projects start out like infants: you need to write a lot of code by hand at first to set the right template and patterns you want the AI to follow going forward.
2) Coding projects continue to develop kind of like garden beds: you have to guide the structure and provide the right "nutrients" for the project, so that the AI can continue to add additional features based on what you have supplied to it.
3) Coding projects mature kind of like children growing up to become adults. A well configured AI agent, starting from a clean, structured code repo, might be mostly autonomous, but just like your adult kid might still need to phone home to Mom and Dad to ask for advice or help, you as the "parent" of the project are still going to be involved when the AI gets stuck and needs help.
Personally, while I can get some joy and satisfaction from manually typing lines of code, most of those lines are things I've typed literally hundreds of times over my decades-long journey as a developer. There isn't much joy in typing out the same things again and again, but there is joy in the longer-term steering and shaping of a project so that it stays sane, clean, and scalable. I get a similar sense of joy out of gently steering AI toward success in my projects that I get from gently steering my own child toward success. There is something incredible about providing the right environment and the right pushes in the right direction, and then seeing something grow and develop mostly on its own (but with your support backing it up).
fakedang · 8h ago
Cue me, cursing the AI with a choice selection of names, when my AI code writer of choice decides to change core files that I had explicitly told it not to touch earlier in the chat.
Guess I will not be a good parent lol.
NathanKP · 6h ago
Negative instructions do not work as well as positive ones. If you tell the LLM "don't do this," you only put the idea of doing that into its context. (Surprisingly, the same goes for human toddlers... the AI is just in its toddler phase.)
Not to mention that context length is limited, so if you told it something "earlier" then your statement has probably already dropped off the end of the context window.
What works better is to prompt with positive instructions of intent like:
"Working exclusively in file(s) ____ and ____ implement ____ in a manner similar to how it is done in example file ______".
I start a fresh chat for each prompt, with fresh context, and try to keep all instructions embedded within a single prompt rather than relying on fragile past state that may or may not have dropped off the end of the context window. If there is something like "don't touch these core files" or "work exclusively in folder X" that I want it to always consider, then I add it as a system prompt or global rule file (which ensures the instruction is included automatically on every prompt).
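To make that concrete, here's a rough sketch of that loop in Python. The RULES.md file name and the build_prompt helper are hypothetical illustrations, not any tool's actual API:

    # Sketch of the "fresh context + positive intent" loop. Assumes a
    # rules file (RULES.md, name hypothetical) whose contents you want
    # repeated on every prompt, like a system prompt / global rule file.
    from pathlib import Path

    def build_prompt(files, task, example):
        """Compose one self-contained prompt: global rules plus positive intent."""
        rules = Path("RULES.md").read_text()  # e.g. "Work exclusively in folder X"
        return (
            f"{rules}\n\n"
            f"Working exclusively in file(s) {', '.join(files)}, {task}, "
            f"in a manner similar to how it is done in example file {example}."
        )

    # Each task gets a brand-new chat, so nothing depends on earlier
    # statements that may have dropped off the end of the context window.
    print(build_prompt(["api/users.py"], "implement cursor pagination", "api/posts.py"))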
And don't get me wrong, I get frustrated with AI sometimes too, but the frustration has declined dramatically as I've learned how to prompt it: appropriate task sizes, positive statements rather than negative ones, gathering the appropriate context to steer behavior and output, etc.
fakedang · 2h ago
Thank you! I will keep these in mind.
I realized I was doing it wrong when Cloudflare launched their own prompt spec for their Workers implementation. Their proposed pattern is slightly different though: "You need to do this, but you did that. Please fix (with this [optional])". I might try a hybrid approach the next time.
Good. Enjoy that journey... on your own time. You've been missing your productivity OKRs, and your Claude logs say you haven't been using the tools the company has provided. You're on a PIP: if measurable progress is not seen in 30 days, disciplinary action up to and including termination may be taken.
cyanydeez · 8h ago
Claude, write me a macro that will autofellate yourself according to these metrics.
> I wonder if some "actual" artists (as in, those people who create the kind of art most people would recognize) have gone through a similar arc of realizing the emptiness of creating with AI tools.
My impression is that artists are even more hostile than the most AI-skeptic of software engineers. In large part, this is likely because the economic argument doesn't hold much sway. For the large majority of artists it's already hard to make money with art; the bottleneck is not the volume of art they can produce. There's a much clearer path to turning "more code" into "more money," even if it's still not direct.
GianFabien · 5h ago
The way I see it: Painters (as in artists) paint with brushes, not spray guns.
The industrial scale painting robots work well for painting cars coming off an assembly line, but not for landscapes nor portraits.
Automation (not just AI, but in general) works well for highly structured, repetitious work but not for creative expression.
jaredcwhite · 6h ago
Perhaps that's why I as a software developer am fully genAI-skeptic…I've always considered myself a multidisciplinary artist and the skill I have in writing code is simply one of the many possible avenues I use to express myself. (Alas, it's the one which produces the most income by far, but that's another conversation!)
Jotalea · 3h ago
I agree. Using AI for development is addictive, once you start you cannot stop. It has harmed me, stunting my skills; just like the lack of exercise leads to physical decline, my coding muscles atrophied due to the lack of practice this dependency caused. And since tech is my only real strength, and I'm not what you'd call "successful", feeling stuck at the one thing I'm supposed to be good at makes me feel useless, makes me feel like a burden. And the worst thing? I'm unable to get back on track. I've tried getting back to coding manually, but I simply can't do it anymore. Last time I programmed anything without using AI was in 2023, on a Scratch project, simply because AI could not write Scratch blocks.
But at least I can now ship (shitty) code a lot faster.
bluefirebrand · 10h ago
I haven't had nearly the same experience of success with AI.
I'm often accused of letting my skepticism hold me back from really trying it properly, and maybe that's true. I certainly could not imagine going months without writing any code, letting the AI just generate it while I prompt
My work is pushing these tools hard and it is taking a huge toll on me. I'm constantly hearing how life changing this is, but I cannot replicate it no matter what I do
I'm either just not "getting it", or I'm too much of a control freak, or everyone else is just better than I am, or something. It's been miserable. I feel like I'm either extremely unskilled or everyone else is gaslighting me, with basically nowhere in between.
I have not once had an LLM generate code that I could accept. Not one time! Every single time I try to use the LLM to speed me up, I get code I have to heavily modify to correct. Sometimes it won't even run!
The advice is to iterate, but that makes no sense to me! I would easily spend more time iterating with the LLM than just writing the code myself!
It's been extremely demoralizing. I've never been unhappier in my career. I don't know what to do, I feel I'm falling behind and being singled out
I probably need to change employers to get away from AI usage metrics at this point, but it feels like everyone everywhere is guzzling the AI hype. It feels hopeless.
clown_strike · 8h ago
You're being gaslit. The point is to make you look unproductive.
The untrained temp workers using AI to do the entirety of their jobs aren't producing code of professional quality; it doesn't adhere to best practices or security unless you monitor that shit like a hawk. But if you're still engineering for quality, then AI is not the first train you've missed.
They will get code into production quicker and cheaper than you through brute force iteration. Nothing else matters. Best practices went the way of the rest of the social contract the instant feigned competence became cheaper.
Even my podunk employer has AI metrics. You won't escape it. AI will eventually gatekeep all expertise and the future employee becomes just a disposable meat interface (technician) running around doing whatever SHODAN tells them to.
geoka9 · 6h ago
My "agentic" experience is mostly Aider, working across a Golang webapp codebase. I've mostly used Gemini (whatever model Aider chooses to use at the moment).
Most of my experience has been similar to yours. But yesterday, out of the blue, it spit out a commit that I accepted almost verbatim (just added some line breaks and such). I was actually really surprised: not only did it follow the existing codebase conventions and variable naming style, but it also introduced a couple of patterns that I hadn't thought of (and liked).
But it also charged me $2 for the privilege :)
(On a related note, Gemini API has become noticeably more expensive compared to, say, a month ago.)
I find that with Aider, managing context (what files you add to it) can make all the difference.
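For example (file paths hypothetical; /add and /drop are Aider's own commands), a session might start narrow and only widen when a change actually needs it:

    $ aider handlers/user.go handlers/user_test.go   # start with only the files in play
    > /add templates/user.html    # pull a file in when the change touches it
    > /drop templates/user.html   # drop it again to keep the context tight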
sokoloff · 5h ago
That $2 represents how many minutes of your annual labor? 2 minutes? Less than 1 if you account for all the non-coding drag on your total working time?
GianFabien · 4h ago
>letting the AI just generate it while I prompt
But isn't prompting and iterating another way of instructing the computer to do what you want? Perhaps we could view it as a step up in the level of abstraction we work at.
We had similar arguments when high-level languages were introduced. Experienced programmers of that era maintained that they could write better programs in assembly language than in COBOL/FORTRAN/PL-I/Pascal etc. Yet even today we still need some core portions of code written in assembler, just not much of it.
bluefirebrand · 3h ago
But we aren't moving up a layer of abstraction here
We are operating at the same level of abstraction, just with tools that generate high volumes of code of inconsistent quality for us.
Edit: It would have to produce a lot higher quality a lot more consistently for me to seriously consider moving up to the "LLM prompt" abstraction layer permanently. As it is, I think I'm just better off writing the code myself.
ssutch3 · 9h ago
AI coding tools aren't equally effective across all software domains or languages. They're going to be the "best" (relative to their own ability distribution) in the "fat middle" of software engineering where they have the most training data. Popular tasks in popular languages and popular libraries (web dev in React, for example). You're probably out of luck if your task is writing netcode for a game engine, for instance.
bluefirebrand · 8h ago
I am a web dev in React, though
My experience is in one of the areas that people are saying it is most helpful
Which really just adds to the gaslighting effect
20after4 · 9h ago
I have a working theory that it's mostly bad programmers who are achieving massive productivity gains. Really good programmers will probably have trouble getting the LLM tools to match their normal level of output.
This could be cope but I don't think it is.
glouwbug · 7h ago
My approach has always been to consult 3 different models to get my understanding on the right path and then do the rest myself. Are we really just doing blind copy pastes? As an example, I recently prototyped several different one-dimensional computational fluid dynamics GLSL shaders. Claude outputted everything with vec3s, so the flux math matched what you'd see in the theory. For me it's rapid iteration and a decluttered search engine with an interactive inline comment section, though I understand some would disagree with that statement, especially since it lacks any sort of formal verification. I counter with the old adage that anyone can be a dog on the internet.
bluefirebrand · 7h ago
> My approach has always been to consult 3 different models to get my understanding on the right path and then do the rest myself. Are we really just doing blind copy pastes
For me, if I spent the time testing 3 different models, I would definitely be slower than just writing the code myself.
glouwbug · 4h ago
But I'm not writing code. It's research with iteration. Punching out manual CFD is time-consuming.
steveklabnik · 8h ago
I have seen good programmers, ones I respect a lot, get good results with AI.
I don't think this is it, personally.
bluefirebrand · 8h ago
I'm not sure if it is cope, but I sort of feel the same
The quality of LLM code is consistently average at best, and usually quite bad imo. People say it is like a junior, but if a junior I hired produced such code consistently and never improved, I would recommend the company PIP them out.
Having output like a junior's would be fine, if I didn't have to fix it myself. As it stands, I've never been able to get it to produce code of the quality I want, so I have to spend more time fixing it than I would just writing it.
I dunno. It sucks man
cheevly · 9h ago
And the irony is that those of us using AI to amplify our output to produce at exponential speeds feel like your comments are gaslighting us instead! I've never seen such an outright divide in practitioners of a technology in terms of perception and outcomes. I got into LLMs super early, using them daily since 2022, so that may have bolstered the way I've augmented my approaches and tooling. Now almost everything I build uses AI at runtime to generate better tools for my AI to generate tools at runtime.
upghost · 9h ago
Can we use this micro-moment to try to bridge the gap? I was sold on cocaine, but all I've gotten so far is corn starch. Is there like a definitive tutorial on this? I mean, look, I am proud of my work, but if I can drop $200-1000/month for the "blue stuff" I'm not gonna turn my nose up at it.
I've been pretty deeply into LLMs myself since 2023, have built several small models from scratch, and have SFT-trained many more, so it's not like I'm ignorant of how it works; I'm just not getting the workflow results.
ssutch3 · 9h ago
It's going to depend heavily on what you're doing. If you're doing common tasks in popular languages, and not using cutting-edge library features, the tools are pretty good at automating a large amount of the code production. Just make sure the context/instruction file (i.e. claude.md) and codebase are set up to properly constrain the bot, and you can get away with a lot.
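As a sketch, such a file is usually just plain-language constraints that the tool reads on every run (these particular rules and file names are hypothetical, not a canonical format):

    # claude.md
    - Work only inside src/; never edit migrations/ or generated files
    - Follow the patterns in src/api/exampleHandler.ts
    - Run the test suite after every change and fix failures before finishing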
If you're not doing tasks that are statistically common in the training data, however, you're not going to have a great experience. That being said, very little in software is "novel" anymore, so you might be surprised.
bluefirebrand · 8h ago
Just because it's not strictly novel doesn't mean that the LLM is outputting the right thing
We used to caution people not to copy and paste from StackOverflow without understanding the code snippets, now we have people generating "vibe code" from nothing using AI, never reading it once and pushing it to master?
It feels like an insane fever dream
jaredcwhite · 6h ago
> amplify our output to produce at exponential speeds
I think I blacked out when my brain tried to process this phrase.
Nothing personal, but I automatically discount all claims like this (something something require extraordinary evidence and all that…).
bluefirebrand · 8h ago
> And the irony is that those of us using AI to amplify our output
I'm guessing you don't care about quality very much, since you are focusing on your output volume
01HNNWZ0MV43FF · 7h ago
Maybe I need to watch some videos on YouTube to understand what other people are seeing.
I couldn't even get Zed hooked up to GitHub Copilot. I use ChatGPT for snippets and search, and it's okay, but I don't want to bother checking its work on a large scale.
breckenedge · 10h ago
> As I have kept up conversation with my developer friends, it has become essentially the norm, and everyone is being pressed to find greater productivity using AI coding tools.
What a weird alternate universe it is that I live in. My managers are somewhat skeptical of AI workflows and keep throwing up roadblocks to deeper and more coordinated use among my colleagues. Probably because there is so much churn, and it’s difficult to replicate the practice from one engineer to another. Some of my colleagues are very resistant to using AI. I use it quite extensively, but rate limits mean that there are occasions when I must pick up where the machine leaves off.
randomNumber7 · 6h ago
When you get beyond junior level, you see the limitations of current coding assistants.
But to get there, it might be a good move to code for yourself (and read books).
Then, on the other hand, coding will not be a fun job anymore...