This is a really good post. I'm a naturally controlling person, and I care about my craft a lot, so even in my recent dabbling with agentic coding (on a ~3000 LOC project), one of the things I naturally did from the start was not just skim the diffs the AI generated. I decided for myself what technologies should be used, described the logic and architecture of the code I wanted in detail to keep my mental model fresh and accurate, and read every single line of code as if it were someone else's, explicitly asking the AI to restructure anything that didn't match how I would have implemented it. That ensured everything fit my mental model. I also went in and manually added features, and always did all the debugging myself as a natural way to get more familiar with the code.
One of the things I noticed is that I'm pretty sure I was still more productive with AI, while keeping full control over the codebase, precisely because I didn't let the AI take over any of the mental modelling part of the role. I treated it only as, essentially, a really really good refactoring, autocompletion, and keyboard-macro tool that I interact with through an InterLISP-style REPL instead of a GUI. It feels like a lever that actually enables me to add more error handling, make more significant refactors for clarity to fit my mental model, and so on. So I still have a full mental model of where everything is, how it works, and how it passes data back and forth, and the only technologies in the codebase I'm not familiar with are ones I've made the explicit choice not to learn because I don't want to (TKinter, lol).
Meanwhile, when I introduced my girlfriend (a data scientist) to the same agentic coding tool, her first instinct was essentially to vibe code: let it architect things however it wanted, not describe the logic, not build the mental model and feature list explicitly herself, and skim the code (if that). We quickly ended up in a cul-de-sac where the code was unfixable without a ton of work that would've eliminated all the productivity benefits.
So basically, it's like that study: if you use AI to replace thinking, you end up with cognitive debt and have to struggle to catch up, which eventually washes out all the benefits and leaves you confused and adrift.
aogaili · 8m ago
AI will hopefully humble some of the people I work with.
The people who understand nothing about business, yet whom you can't talk to because they think they're gifted for being able to write instructions to a computer.
The people who spin out new frameworks every day and make a clusterf*ck of hyped and over-engineered frameworks.
The people who took a few courses and went into programming for the money.
I went into software because I enjoyed creating (coding was a means to an end), and I always thought coding was the easiest part of software development. But when I got into corporate work, I found people who preach code like religion, don't even care about what is being produced, and spend thousands of hours debating syntax. What a waste of life. I knew they were stupid, and AI made sure they knew it as well.
TrackerFF · 11m ago
There used to be a time when you needed to be a very skilled woodworker to make nice cabinets. Such woodworkers still exist, but machine- and CNC-made cabinets outnumber artisanal, 100% hand-made ones by some incredible number. For every masterpiece made by a Japanese cabinet maker, imagine how many Ikea cabinets there are out there...
And that's how I believe software engineering will end up. Hand-crafted code will still be a thing, written by very skilled developers...but it will be a small niche market, where there's little (to no) economic incentive to keep doing it the craftsmanship way.
It is a brave new world. We really don't know if future talent will learn the craft like old talent did.
rafterydj · 44s ago
Counterpoint: a cabinet has always been a cabinet and nobody expects it to be anything but a cabinet. Rarely are software projects as repeatable and alike to each other as cabinets are.
Software is codified rules and complexity, which is entirely arbitrary and builds off of itself in an infinite number of ways. That makes it much more difficult to turn into factory-output cabinetry.
I think more people should read "No Silver Bullet", because I hear this argument a lot and I'm not sure it holds. There _are_ niches in software that are artisanal craft and that have been largely replaced (like custom website design giving way to stock WordPress templates), but the vast majority of the industry relies on cases where turning software into templates isn't possible, or isn't as efficient, or conflicts with business logic.
Cheer2171 · 5m ago
People do love Ikea, Wal-Mart, McDonalds, and The Gap.
therein · 6m ago
It is an analogy that only holds up at first glance, especially since CNC-made cabinets are not full of design flaws. Your analogy would only make sense if those CNC cabinets were generated by a CNC AI that may or may not follow the sensibilities of a human designer, or if inexperienced carpenters using CNC machines just described the design verbally to the machine instead of carefully encoding it into G-code.
Clearly you don't value the process of coding if you think it is analogous to a carpenter manually carving the details of a design, as if that design sat above the process of building it. It is not a good analogy, at all.
logicprog · 2m ago
There's plenty of cheap furniture that was designed only in CAD or something that is flimsy, doesn't fit human proportions well, and looks ugly in real life, because it was quicker to just throw it together on the computer, CNC it out, and mail the parts to people for them to build themselves than to actually carefully test it and work out the kinks. That's basically what half of IKEA is. So I think this is a decent analogy.
mkoryak · 6m ago
I find that AI is really good at the easy stuff, like writing tests for a simple class without too many dependencies, the kind we have all written hundreds of times.
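To make that concrete, here's the kind of boilerplate test meant here: a hypothetical, dependency-free Stack class (the class and test names are illustrative, not from the comment), the sort of thing an assistant can reliably churn out:

```python
import unittest

class Stack:
    """A minimal class with no external dependencies."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class TestStack(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()
```

Run with `python -m unittest` as usual; nothing here requires domain insight, which is exactly why it's in the AI sweet spot.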
Things go wrong as soon as I ask the AI to write something that I don't fully grasp, like some canvas code that involves choosing control points and clipping curves.
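For contrast, a sketch of the kind of geometry that curve code involves: evaluating a quadratic Bézier curve from its control points via de Casteljau's algorithm (the function names and this particular algorithm are my illustration, not something from the comment):

```python
def lerp(a, b, t):
    """Linearly interpolate between 2D points a and b at parameter t."""
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

def quadratic_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier at t with de Casteljau's algorithm:
    repeatedly interpolate between control points until one point remains."""
    return lerp(lerp(p0, p1, t), lerp(p1, p2, t), t)

# The middle control point pulls the curve toward it:
# quadratic_bezier((0, 0), (1, 2), (2, 0), 0.5) evaluates to (1.0, 1.0)
```

Choosing where the control points go so the rendered curve looks right, and clipping such curves correctly, is exactly where you need a mental model the AI can't supply for you.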
I currently use AI as a tool that writes code I could write myself. AI does it faster.
If I need to solve a problem in a domain that I haven't mastered, I never let the AI drive. I might ask some questions, but only if I can be sure that I'll be able to spot an incorrect hallucinated answer.
I've had pretty good luck asking AI to write code to exacting specifications, though at some point it's faster to just do it yourself.
gwynforthewyn · 1h ago
I've been seeing teammates go from promising juniors to people who won't think, and I've tried hard here to articulate where I think they're going wrong.
Like the great engineers who came before us and told us what they had learned (Rob Pike, Jez Humble, Martin Fowler, Bob Martin), it's up to those of us with a bit more experience to help the junior generation get through this modern problem space and grow healthily. First, we need to name the problem we see, and for me, that's what I wrote about here.
mindslight · 20m ago
"People who won't think" resonates with me; I've felt that pull from chatbots myself, and I've got plenty of experience in software and electrical engineering. They're pretty damn helpful for discovery and rubber-ducking, but even when just evaluating different products or approaches against one another, they will hallucinate wild facts and tie them together with a nice polished narrative. It's easy enough to believe them as it is, never mind if I had less expertise. I've found that I have to consciously pull the ripcord at a certain point, telling myself that if I really want the answer to some question, I've got to spend the time digging into it myself.