Tell HN: I Lost Joy of Programming
57 points by Eatcats | 74 comments | 7/8/2025, 11:36:50 AM
Small confession
I’ve been using Windsurf editor for about six months now, and it does most of the coding work for me.
Recently, I realized I no longer enjoy programming. It feels like I’m just going through the pain of explaining to the LLM what I want, then sitting and waiting for it to finish. If it fails, I just switch to another model—and usually, one of them gets the job done.
At this point, I’ve even stopped reviewing the exact code changes. I just keep pushing forward until the task is done.
On the bright side, I’ve gotten much better at writing design documents.
Anyone else feel the same?
Review the code. Hell, maybe even write some code yourself.
What you're describing is how I feel whenever I use an LLM for anything more than the most basic of tasks. I've gone from being a senior-level software developer to managing a barely competent junior developer whose only redeeming skill is the ability to type really, really quickly. I quit the management track some time ago because I hated doing all my software development via the medium of design documents, which would then be badly implemented by people who didn't care; there's no way you're going to get me to volunteer for that.
Re LLMs: I love collaborative coding because I can sometimes pick up or teach new tricks. If I'm too tired to type the boilerplate, I sometimes use an LLM. These are the only two redeeming values of LLM agents: they produce code or designs I can start from when I ask them to. I rarely do.
I hope OP can find a balance that works. It's sad to see the (claimed) state of the art be a soulless crank we have to turn.
---
For example: About two years ago I worked with a contractor who was a lot more junior than our team needed. I'd give him instructions, and then the next day spend about 2-3 hours fixing his code. In total, I was spending half of my time handholding the contractor through their project.
The problem was that I had my own assignments, and the contractor was supposed to be able to do their job with minimal oversight (i.e., roughly 0.5-1.5 hours of my day).
If the contractor had been helping me with my assignment, i.e., if the contractor had been my assistant, I'd have loved the arrangement.
(In case you're wondering how it ended up for the contractor, we let him go and hired someone who worked out great.)
---
I suspect if the OP can figure out how to make the AI an assistant, instead of an employee with autonomy, then it will be a better arrangement. I personally haven't found an AI that does that for me, but I suspect I'm either using it incorrectly or using the wrong tools.
I feel the opposite. I appreciate the ability to iterate and prototype in a way which lowers friction. Sure I have to plan steps out ahead of time, but that's expected with any kind of software architecture. The stimulating part is the design and thought and learning, not digging the ditch.
If you're just firing off prompts all day with no design/input, yea I'm sure that sucks. You might as well "push the big red button" all day.
> If it fails, I just switch to another model—and usually, one of them gets the job done.
This is a huge red flag that you have no idea what you're doing at the fundamental software architecture level imo. Or at least you have bad process (prior to LLMs).
I feel the same way. Things I like: Thinking about architectures and algorithms. Things I don't like: Starting out with a blank slate, looking up the exact function names or parameters. I find it much easier to take something roughly implemented and improve upon it than to start from nothing and build it.
I think about what I want fairly specifically. I discuss it with the LLM. It implements something. Half of the time it's what I expect, I can move on. Sometimes it's done something I wasn't expecting in a better way, which is nice. Frequently it's done something I wasn't expecting in a worse way; I either tell it to fix it, or just fix it myself.
In my previous role, I did a huge amount of patch review, which I always found quite tedious. Even though this looks superficially similar, it doesn't have the same vibe at all. I think it's because the LLM will accept being told what to do in a way no self-respecting coder would. (One complaint I'd heard about another person's reviews was that the person whose code was reviewed felt like they were a marionette, just typing exactly what the reviewer told them to type.)
This way I can do the things I enjoy, while neither having to worry about some human being's feelings, nor having to do the low-level stuff that's a chore.
Particularly in the present. If any of the current models can consistently make senior-level decisions I'd like to know which ones they are. They're probably going to cross that boundary soon, but they aren't there yet. They go haywire too often. Anyone who codes only using the current generation of LLM without reviewing the code is surely capping themselves in code quality in a way that will hurt maintainability.
How? There’s no understanding, just output of highly probable text suggestions which sometimes coincides with correct text suggestions.
Correctness exists only in the understanding of humans.
In the case of writing code to pass tests, there are infinitely many ways to have green tests and break things anyway.
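The claim above is easy to demonstrate with a contrived Python sketch (the `is_even` function here is hypothetical, not from the thread): an implementation that keeps every test green while being wrong in general.

```python
def is_even(n):
    # Broken on purpose: correct only for the inputs the tests happen to cover.
    return n in (0, 2, 4)

# The test suite is green...
assert is_even(2)
assert is_even(4)
assert not is_even(3)

# ...yet behavior is broken for any even number outside the tested set,
# e.g. is_even(6) returns False although 6 is even.
```

Coverage tools report these lines as fully exercised, which is exactly why "the tests pass" is not the same claim as "the code is correct."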
And that's not just because its output is often not the best, but also because doing it myself forces me to think deeply about the problem and come up with a better solution that considers edge cases. Furthermore, it leaves me with knowledge of the project that helps me with the next change.
I see comments here where people seem to have eliminated almost all of their dev work, and it makes me wonder what I'm doing wrong.
I'm in the same boat: I'm mostly doing C# in Visual Studio (classic) with Copilot, and it very rarely gives useful code from prompts. Often the auto-suggestions are hallucinations, and frequently they interfere with "normal" tab completion.
I'm wondering if I'm using the wrong tool, or if Visual Studio (classic) Copilot is just far behind industry norms?
I am playing with Zed now though, and it has a "subtle" mode for suggestions, which is great: when I explicitly want to see them, I press the Option key; otherwise, I don't see them.
I find it’s really great for augmenting specific kinds of concentrated tasks. But just like you, I have to review everything it creates. Even Claude Opus 4 on MAX produces many bugs on a regular basis that I fix before merging in a change. I don’t mind it though, as I can choose to use it on the most annoying areas, and leave the ones I enjoy to work on myself.
Essentially, vibe coding is synchronous as it is necessary to wait around for the LLM to respond. Codex is async and allows you to focus on other things while it is working. The primary issue with async workflows is you really don't want to have to iterate much. Therefore, investing more time upfront clearly defining the prompt / agents.md and prior examples becomes really important. Usually if it is > 90% correct I will just fix the rest myself. For hand coding, I use a fairly basic vim setup with no LLM plugins. I do not like LLMs jumping in and trying to auto complete stuff.
You stopped reviewing the code..? You're not gonna make it.
You still need the visceral feel of writing the code, this builds the mental model in your head.
You don't? Sounds to me like you just don't enjoy prompting. Try doing some programming again. Engage your brain with a challenge and try to solve the problem itself, not just explain it to an AI and never even look at the code. You enjoy the driving, not the destination; taking a taxi there removes your purpose.
Sometimes I want to hunt it down and erase the lazy, lying, gas-lighting **** from existence.
Nobody forced you to keep switching LLM models until eventually one of them solved your problem.
It's also got me to explore a lot more domains than I would've considered otherwise, e.g. using Python to accomplish tasks with local pytorch/onnx models and creating ComfyUI nodes or using bash for large complex scripts that I would've previously used bun .ts shell scripts to implement.
Even non-dev tasks like Linux update/configuration conflicts have become a breeze to resolve, with Claude/Gemini Pro able to put me on the right track where no amount of old-school Google searches could.
It's not all upside, though: LLMs typically generate a lot more code than I would have used to accomplish the same task, so maintenance is likely to require more effort. For that reason I don't like using LLMs to change code I've written, but I'm more than happy to have them make changes to code that other LLMs have created.
I found the joy of making things.
As a technical person who is not a professional programmer, but finds or makes whatever I need, LLMs (Gemini) are dizzyingly powerful.
I've made so many things I would never have even attempted without it: a change-based timelapse tool, virtual hand-controlled web-based theremin, automated sunrise and sunset webcam timelapse creator, healthy-eating themed shoot'em up, content-based podcast ad removal tool, virtual linescan camera, command line video echo effect, video player progress bar benchmark tool, watermark remover, irregular panoramic photo normalizer, QR code Game of Life, 3D terrain bike share usage map, movie "barcode" generator, tool to edit video by transcript, webcam-based window parallax effect, hidden-image mosaic generator, and all kinds of other toys I've already lost track of.
The majority of what I end up using langle mangles for is trivially verifiable but tedious to do things like "turn this Go struct into an OpenAPI Schema" or "take this protocol buffer definition and write the equivalent in Rust prost".
But on the flip-side, using the AI to help me learn the bits of programming that I’ve spent my whole career ignoring, like setting up DevOps pipelines or containerisation, has been very enjoyable indeed. Pre-AI the amount of hassle I’d have to go through to get the syntax right and the infrastructure set up was just prohibitively annoying. But now that so much of the boilerplate is just done for me, and now that I’ve got a chat window I can use to ask all my stupid questions, it’s all clicking into place. And it’s like discovering a new programming paradigm all over again.
Can 100% recommend stepping outside your comfort zone and using AI to help where you didn’t want to go before.
Long story short, LLMs are great for people who never wanted to become "code artists" (aka hackers) which many people within CS and SWE do not wish to be.
If your goal is to be able to express your ideas fluently, though, you'll have to get good at coding. The differentiator is how you look at the pain and struggle involved. If your goal is to improve yourself, the struggle has value. You learn by trying to do harder and harder things. If your goal isn't to learn, though, you may as well outsource the struggling to a bot.
I've become very lazy. For most tasks, I explain what I want to the LLM and go browse the web while it computes. More often than not, it fails, and I reiterate my prompt several times. Eventually, I need to review the changes before submitting them for review, which isn't very fun.
Overall, I feel I'm losing my skills and the competitive advantage I had at my role (I'm a decent coder, but don't care too much about product discussions). The way I'm using the tool right now, I'm pretty sure I'm not more productive.
We'll see how it goes. It's still a pretty new tech and I think I should learn when not to use it and try to have good hygiene with it.
I'd say that if you used to find pleasure and satisfaction in the art of writing code, then unless you're willing to stop using AI, it might be worth finding a different pursuit to channel that energy into. If you don't enjoy prompting now, it's only going to get worse from here, and your energy will be better spent finding something you do enjoy.
Sure, get an LLM to suggest an approach, but how can you feel joy when you've turned yourself into a system architect working with a particularly stupid and relentlessly optimistic bunch of idiots who never really learn?
You can choose how you do your work. You have autonomy. So, choose.
The actual meat I prefer to code myself with minor LLM support (for example, I ask it to review my code).
(e.g. did you consider simply not using LLMs to write code and maybe just use them for rubberducking, cross-checking your code and as StackOverflow replacement?)
No more joy in writing software. Instead, my time is spent writing user stories and specifications as well as possible.
Personally I have found that the sky is now the limit thanks to AI assistants! I was never the best coder out there, probably a median level programmer, but now I can code anything I imagine and it's tons of fun.
Find some creative projects you want to work on and code them up!
This is definitely going to end well.
You don't have to give up anything you did before at all.
LLMs are just here to increase your productivity; cranking out unreviewed code when you don't want to work that way is just silly to me.
Why would you program in a non-joyous way if you're doing it for fun? For professional work, I fully get why you'd want to optimize.
This isn't programming. Delete the AI stuff and start programming again. It's fine to use LLMs if you want, but nobody is forcing you to.
I’ve pivoted to architecture and higher level problem solving to continue my growth.
I have also found I do my best work when I’m happy. It’s important that the tool works for me and I don’t work for the tool.
The real kick in the nuts is that people don't care about quality. Honestly, they never have, but now it's just worse. People see productivity gains, and that's literally all that matters. I guess they know they can ship bad stuff and still sell it. Only when retention numbers get bad do they complain about it and demand higher quality - without, of course, even thinking about taking the time to do things properly.
I think there's going to be a high demand for AI slop fixers in the future. Don't get me wrong, it's not that AI itself is incapable, it's that people aren't putting any effort in.
I think we'll push the people who code for enjoyment away and they'll be replaced by people who aren't as senior.
How is this the future of software engineering?
Programming is also the field where it's the easiest to strike out on your own. Seizing the means of production in programming amounts to grabbing a $200 laptop.
I usually start out with good intentions like
1. planning out work
2. crafting really good prompts
3. accepting bare minimum code changes
4. reviewing and testing code changes
But most 'agentic code tools' are not really well aligned with this philosophy. They are always over-eager and do more than what you ask them to. If you ask one to change a button color, it goes and builds out a color picker.
They sneak in more and more code and vex you with so much extra junk that you slowly stop caring about what's being snuck in, and the whole thing spirals out of control. Now you are just doing pure vibe coding.
have self-respect
Getting tasks done, tasks you would never have had the time or courage to tackle the traditional way, is another source of joy.
OP, take some time off and evaluate what you want.
s/these comments/this website/
Maybe it's more of a problem with your job and the tasks you're assigned?
Why do you find this strange? It's like saying you find it strange that a carpenter enjoys working with wood, that it's only about the end product and not the process.
I do, however, use electric planers, table saws and miter saws, because I want to produce the product fast and efficiently, because the end product still is the goal.
Your point is well taken however.