AI Doesn't Lighten the Burden of Mastery; AI Makes It Easy to Stop Valuing It

112 points by gwynforthewyn | 37 comments | 8/17/2025, 5:03:20 PM | playtechnique.io ↗

Comments (37)

logicprog · 1h ago
This is a really good post. I'm a naturally controlling person, and I care about my craft a lot, so even in my recent dabbling with agentic coding (on a ~3000 LOC project), what I naturally did from the start was not just skim the diffs the AI generated. I decided for myself what technologies should be used; described the logic and architecture of the code I wanted in detail, to keep my mental model fresh and accurate; read every single line of code as if it were someone else's, explicitly asking the AI to restructure anything that wasn't the way I'd have implemented it, so that everything fit my mental model; went in and manually added features; and always did all the debugging myself, as a natural way to get more familiar with the code.

One of the things I noticed is that I'm pretty sure I was still more productive with AI, while keeping full control over the codebase, precisely because I didn't let the AI take over any part of the mental-modelling side of the role, treating it only as, essentially, really really good refactoring, autocompletion, and keyboard-macro tools that I interact with through an InterLISP-style REPL instead of a GUI. It feels like a lever that actually enables me to add more error handling, make more significant refactors for clarity to fit my mental model, and so on. So I still have a full mental model of where everything is, how it works, and how it passes data back and forth, and the only technologies in the codebase I'm not familiar with are ones I've made the explicit choice not to learn because I don't want to (Tkinter, lol).

Meanwhile, when I introduced my girlfriend (a data scientist) to the same agentic coding tool, her first instinct was to essentially vibe code: let it architect things however it wanted, not describe the logic, not build the mental model and feature list explicitly herself, and skim the code (if that). We quickly ended up in a cul-de-sac where the code was unfixable without a ton of work that would've eliminated all the productivity benefits.

So basically, it's like that study: if you use AI to replace thinking, you end up with cognitive debt and have to struggle to catch up, which eventually washes out all the benefits and leaves you confused and adrift.

rafterydj · 1h ago
Interesting. Would you mind elaborating a bit on your workflow? In my work I go back and forth between the "stock" GUIs and copy-pasting into a separate terminal for model prompts. I hate the vibe-code-y agent menu in things like Cursor; I'm always afraid integrated models will make changes that I miss, because it really only works if you're checking "allow all changes" fairly quickly.
logicprog · 50m ago
Ah, yeah. Some agentic coding systems try to force you really heavily into clicking "allow". I don't think it's intentional, but, like, I don't think they're really thinking through the workflow of someone who's picky and wants to be involved as much as I am. So they make it so that, you know, canceling things is really disruptive to the agent, or difficult or annoying to do, or something. And so it kind of railroads you into letting the agent do whatever it wants and then trying to clean up after, which is a mess.

Typically, I just use something like QwenCode. One of the things I like about it, and I assume this is true of Gemini CLI as well, is that it's explicitly designed to make it as easy as possible to interrupt an agent in the middle of its thought or execution process and redirect it, or to reject its code changes and then directly iterate on them without having to recapitulate everything from the start. It's as easy as hitting Escape at any time. So I tell it what I want by giving it a little markdown-formatted paragraph or so, with some bullet points, maybe some numbers, a heading or two, explaining the exact architecture and logic I want for a feature, not just the general feature. Then I let it get started and see where it's going. If I generally agree with the approach it's taking, I let it turn out a diff. If I like the diff after reading through it fully, I accept it. And if there's anything I don't like about it at all, I hit Escape and tell it what to change about the diff before it even gets to merge it in.
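For instance, a prompt might look something like this (a made-up example, not one of mine verbatim):

    ## Feature: per-project config overrides

    - Load config from ~/.toolrc first, then ./.toolrc; the local file
      wins key-by-key (shallow merge, don't recurse into nested sections).
    - Put the merge in config.py as its own pure function so it's unit
      testable; don't touch the CLI parsing at all.
    - On a malformed file, print the path and line number and exit with
      code 2; never silently fall back to defaults.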

There are three advantages to this workflow over the ChatGPT copy-and-paste workflow.

One is that the agent can automatically use grep and find and read source files, which makes it much easier and more convenient to load it up with all of the context it needs to understand the existing style, architecture, and purpose of your codebase. Thus it typically generates code that I'm willing to accept more often, without me doing a ton of legwork.

The second is that it allows the agent, of its own accord, to run things like linters, type checkers, compilers, and tests, and to automatically try to fix any warnings or errors that result, so it's more likely to produce correct code that adheres to whatever style guide I've provided. Of course, I could run those tools manually and copy-paste the output into a chat window, but that's just enough extra effort and friction, once I've got something ostensibly working, that I know I'd be lazy and skip it at some point. This way it's always done. Some tools like OpenCode even automatically run LSPs and linters and feed that back into the model after the diff is applied, letting it correct things automatically. (There's a rough sketch of this check-and-fix loop after the third point below.)

Third, this has the benefit of forcing the AI to generate code as small, localized diffs, instead of regenerating whole files or just autoregressively completing or filling in the middle, which makes it way easier to keep up with what it's doing and make sure you know everything that's going on. It can't slip subtle modifications past you, and it doesn't tend to generate 400 lines of nonsense.
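To spell out the loop from the second point, it's roughly this, written out by hand (a sketch of the idea, not any tool's actual code; ask_model and apply_diff are stand-ins, and ruff/pytest are just example checkers):

    import subprocess

    def run_checks():
        # Run the project's linter and test suite; collect their output.
        ok, output = True, []
        for cmd in (["ruff", "check", "."], ["pytest", "-q"]):
            proc = subprocess.run(cmd, capture_output=True, text=True)
            ok = ok and proc.returncode == 0
            output.append(proc.stdout + proc.stderr)
        return ok, "\n".join(output)

    def fix_until_green(ask_model, apply_diff, max_rounds=3):
        # Feed failures back to the model until checks pass or we give up.
        for _ in range(max_rounds):
            ok, output = run_checks()
            if ok:
                return True
            # ask_model: stand-in for the LLM call, returns a proposed diff.
            # apply_diff: stand-in for reviewing and applying that diff.
            apply_diff(ask_model("These checks failed; propose a fix:\n" + output))
        return False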

sitkack · 23m ago
Jon Gjengset (jonhoo), who is famously fastidious, did a live-coding stream where he did something similar in terms of control. Worth a watch if that's a style you want to explore.

https://www.youtube.com/watch?v=EL7Au1tzNxE

I don't have the energy to do that for most things I'm writing these days, which are small PoCs where the vibe is fine.

I suspect as you do more, you will create dev guides and testing guides that can encapsulate more of that direction so you won't need to micromanage it.

If you used Gemini CLI, you picked the coding agent with the worst output. So if you got something that worked to your liking, you should try Claude.

logicprog · 12m ago
> I suspect as you do more, you will create dev guides and testing guides that can encapsulate more of that direction so you won't need to micromanage it.

Definitely. Prompt adherence to stuff that's in an AGENTS/QWEN/CLAUDE/GEMINI.md is not perfect ime though.

> If you used Gemini CLI, you picked the coding agent with the worst output. So if you got something that worked to your liking, you should try Claude.

I'm aware, actually, lol! I started with OpenCode + GLM 4.5 (via OpenRouter), but I was burning through cash extremely quickly, and I can't remotely afford Claude Code, so I was using qwen-code mostly for the 2000 free requests a day and the prompt caching, and because I prefer Qwen 3 Coder to Gemini... anything, for agentic coding.

westurner · 17m ago
Having read parts of e.g. the "Refactoring" and "Patterns of Enterprise Application Architecture" books, and ThoughtWorks and Fowler web pages and blog posts, and "The Clean Coder", and about distributed computing algorithms, I've been working with a limited set of refactoring terms in my prompts, like "factor out", "factor up", and "extract an interface/superclass from".

TIL that, according to Wikipedia, the more correct terms are "pull up" and "push down".
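E.g., "pull up" as a toy Python example (my own illustration, not Wikipedia's):

    # Before: header() is duplicated, line for line, in two siblings.
    class CsvReport:
        def header(self):
            return "== report =="

    class HtmlReport:
        def header(self):
            return "== report =="

    # After "pull up": the shared method moves into the superclass
    # (these redefinitions shadow the versions above; fine for a demo).
    # "Push down" is the inverse: a member used by only one subclass
    # moves out of the superclass into that subclass.
    class Report:
        def header(self):
            return "== report =="

    class CsvReport(Report):
        pass

    class HtmlReport(Report):
        pass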

How should they learn refactoring terms today? Should they, too, train to code and refactor and track customer expectations without LLMs? There's probably an opportunity to create a good refactoring exercise, with and without LLMs and IDEs and git diff.

System Prompt, System Message, User, User Prompt, Agent, Subagent, Prompt Template, Preamble, Instructions, Prompt Prefix, Few-Shot examples; which thing do we add this to:

First, summarize Code Refactoring terms in a glossary.

Code refactoring: https://en.wikipedia.org/wiki/Code_refactoring

"Ask HN: CS papers for software architecture and design?" (2017) https://news.ycombinator.com/item?id=15778396

"Ask HN: Learning about distributed systems?" (2020) https://news.ycombinator.com/item?id=23932271

Would the methods of software quality teams, like documentation and tests, prevent this cognitive catch-up on so much code with so much explanation at once?

"Generate comprehensive unit tests for this." "Generate docstrings and add comments to this."

If you build software with genAI from just a short prompt, it is likely that the output will be inadequate with regard to the unstated customer specifications, and that there will then need to be revisions. Eventually, it is likely that a rewrite, or a clone of the by-then-legacy version of the project, will be more efficient and maintainable. Will we be attached to the idea of refactoring the code, or to refactoring the prompts and running them again with the latest model?

Retyping is an opportunity to rewrite! ("Punch the keys" -- Finding Forrester)

Are the prompts worth more than the generated code now?

simonw/llm by default saves all prompt inputs and outputs in a SQLite database. Copilot has /save and gemini-cli has /export, but they don't yet autosave, or flush before attempting to modify code given the prompt output?
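E.g., to pull recent prompt/response pairs back out of llm's database (the `llm logs path` subcommand is real; the table and column names here are from memory, so verify against the actual schema):

    import sqlite3
    import subprocess

    # `llm logs path` prints the location of the logs database.
    db_path = subprocess.run(
        ["llm", "logs", "path"], capture_output=True, text=True, check=True
    ).stdout.strip()

    con = sqlite3.connect(db_path)
    # Assumed schema: a `responses` table with `prompt` and `response`
    # columns; check with `sqlite3 "$(llm logs path)" ".schema responses"`.
    for prompt, response in con.execute(
        "SELECT prompt, response FROM responses ORDER BY id DESC LIMIT 5"
    ):
        print((prompt or "")[:72], "->", len(response or ""), "chars")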

Catch up as a human coder; catch up the next LLM chat context with the prior chat prompt sequences (and the manual modifications, which aren't, but probably should be, auto-committed distinctly from the LLM response's modifications).

gwynforthewyn · 2h ago
I've been seeing teammates go from promising juniors to people who won't think, and I've tried hard here to say where I think they're going wrong.

Like the great engineers who came before us and told us what they had learned (Rob Pike, Jez Humble, Martin Fowler, Bob Martin), it's up to those of us with a bit more experience to help the junior generation get through this modern problem space and grow healthily. First, we need to name the problem we see, and for me that's what I wrote about here.

dimal · 31m ago
There’s always been this draw in software engineering to find the silver bullet that will allow you to turn off your brain and just vibe your way to a solution. It might be OOP or TDD or pair programming or BDD or any number of other “best practices”. This is just an unusual situation where someone really can turn off their brain and get a solution that compiles and solves the problem, and so for the type of person that doesn’t want to think, it feels like they found what they’re looking for. But there’s still no silver bullet for complexity. I guess there’s nothing to do but reject the PR and say “Explain this code to me, then I’ll review it.”
TrueTom · 1h ago
I have to disagree. The same people who won't think existed in previous generations as well. The only difference was that they blindly regurgitated what Bob Martin et al. were saying.
mindslight · 1h ago
"People who won't think" resonates with me for the draw I've felt being pulled towards by chatbots, and I've got plenty of experience in software and electrical engineering. They're pretty damn helpful to aid discovery and rubber ducking, but even trying to evaluate different products/approaches versus one another they will hallucinate wild facts and tie them together with a nice polished narrative. It's easy enough to believe them as it is, never mind if I had less expertise. I've found that I have to consciously pull the ripcord at a certain point, telling myself that if I really want the answer to some question I've got to spend the time digging into it myself.
aogaili · 1h ago
AI will hopefully humble some of the people I work with.

The people who understand nothing about business, yet you can't talk to them because they think they're gifted for being able to write instructions to a computer.

The people who spin out new frameworks every day and make a clusterf*ck of hyped and over-engineered frameworks.

The people who took a few courses and went into programming for the money.

I went into software because I enjoyed creating (coding was a means to an end), and I always thought coding was the easiest part of software development. But when I got into corporate work, I found people who preach code like religion, don't even care about what is being produced, and spend thousands of hours debating syntax. What a waste of life. I knew they were stupid, and AI made sure they knew it as well.

recursive · 39m ago
I'm probably one of the people you're talking about.

I'm feeling basically mostly fulfilled in life, and don't feel my life is being wasted.

I didn't understand the thing about "AI made sure they knew as well", or maybe I'm not actually who you're describing.

But I definitely get into language, syntax, frameworks, parsing, and blah blah blah.

Plenty of people still play chess. Plenty of people still run. Machines surpassed human performance in both disciplines long ago. Are those people stupid also?

adidoit · 1h ago
I wouldn't put too much hope in this, because AI can now help these people go from really vague thoughts to something that sounds even more fluent.

My sense is we really have to raise everyone's critical thinking abilities in order to spot bullshit.

mkoryak · 1h ago
I find that AI is really good at the easy stuff, like writing tests for a simple class without too many dependencies, the kind we have all written hundreds of times.

Things go wrong as soon as I ask the AI to write something that I don't fully grasp, like some canvas code that involves choosing control points and clipping curves.

I currently use AI as a tool that writes code I could write myself. AI does it faster.

If I need to solve a problem in a domain that I haven't mastered, I never let the AI drive. I might ask some questions, but only if I can be sure that I'll be able to spot an incorrect hallucinated answer.

I've had pretty good luck asking AI to write code to exacting specifications, though at some point it's faster to just do it yourself.

blitztime · 1h ago
For better or worse, I've been finding it difficult at times to stay motivated to sharpen my craft. I'm currently reading Learning Go, 2nd edition, and it's cool learning the idiomatic ways to write code in a language. However, part of me feels like even if I strive to write "clean code", the bottleneck now seems to be shifting to reviewers' time and expertise.

So I fear I'm fighting a losing battle. I can't and don't want to review everything my coworkers put out, and code has always been a means to an end for leadership anyway, so it seems difficult to justify carving out time for the team as a whole to learn, especially in the age of genAI.

p0w3n3d · 1h ago
I'm a programmer with 20+ years of experience, working with colleagues who have 2-5 years. I've seen them accept AI-generated tests that test solely the mocks. This is quite annoying, because when I want to finish a task properly, I have to write the tests from scratch. I'm also using the same AI tool, but I want to use it differently: as a "copilot", not the "main pilot".
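To be concrete, the anti-pattern looks roughly like this (an invented minimal example):

    from unittest import TestCase, main
    from unittest.mock import Mock

    class SignupFlow:
        def __init__(self, mailer):
            self.mailer = mailer

        def register(self, email):
            self.mailer.send(email)

    class TestSignup(TestCase):
        def test_register_sends_welcome_mail(self):
            mailer = Mock()
            SignupFlow(mailer).register("a@example.com")
            # Only the mock is observed: no validation, no template, no
            # real mailer behaviour is exercised. The assertion restates
            # the implementation line by line.
            mailer.send.assert_called_once_with("a@example.com")

    if __name__ == "__main__":
        main()

A test like this stays green even if the real mailer is completely broken, because nothing real is ever exercised.
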
nextworddev · 17m ago
What actually happened is that AI shifted which points of mastery are the valuable ones.
mirekrusin · 37m ago
At some level it is true that AI is a bit like adding electric motors everywhere.

Including gym equipment.

TrackerFF · 1h ago
There used to be a time when you needed to be a very skilled woodworker in order to make nice cabinets. There still are such woodworkers, but machine- and CNC-made cabinets outnumber artisanal, 100% hand-made ones by some incredible factor. For every masterpiece made by a Japanese cabinet maker, imagine how many IKEA cabinets there are out there...

And that's how I believe software engineering will end up. Hand-crafted code will still be a thing, written by very skilled developers, but it will be a small niche market, where there's little (to no) economic incentive to keep doing it the craftsmanship way.

It is a brave new world. We really don't know if future talent will learn the craft like old talent did.

benreesman · 10m ago
Anything adversarial is going to be the best people using the best assist: military drones, trading, anything with a performance-critical hot path (AI inference itself). That's all too competitive to be anything but hybrid.

But by a similar argument, most anything with healthy competition and a discerning market can only lag that standard by some amount.

It's easy to conflate a once-in-a-generation orgy of monopolistic consolidation with the emergence of useful coding LLMs, but monopolies always get flabby and slack, and eventually the host organism evicts the parasite. It's when, not if.

And we'll still have the neat tools.

adidoit · 59m ago
I understand the analogy, but code is so much more fungible. It is the stuff of thought, rather than a physical thing with hard limits.

I think software will remain much more artisanal because, in some sense, software is a more crystallized form of thinking, yet still incredibly fungible and nebulous.

rafterydj · 1h ago
Counterpoint: a cabinet has always been a cabinet, and nobody expects it to be anything but a cabinet. Rarely are software projects as repeatable and as alike to each other as cabinets are.

Software is codified rules and complexity, which is entirely arbitrary and builds on itself in an infinite number of ways. That makes it much more difficult to turn into factory-output cabinetry.

I think more people should read "No Silver Bullet", because I hear this argument a lot and I'm not sure it holds. There _are_ niches in software that were artisanal craft and have been largely replaced (like custom website designers by stock WordPress templates), but the vast majority of the industry relies on cases where turning software into templates isn't possible, or isn't as efficient, or conflicts with business logic.

treetalker · 1h ago
No Silver Bullet — Essence and Accident in Software Engineering: https://worrydream.com/refs/Brooks_1986_-_No_Silver_Bullet.p...
rafterydj · 43m ago
Thanks for adding the link!
hellisothers · 1h ago
Counterpoint: I forget where I originally read this thought, but consider compilers. At one point coding meant writing assembly, and now it generally doesn't; some people still do it, but it is far from the norm. Now, usually, you "write code" in an abstraction (possibly of an abstraction) and magic takes care of the rest.

While I imagine "make an app that does X" won't be as useful as "if ... else", there is a middle ground where you're relinquishing much of the control you're currently trying to retain.

al_borland · 1h ago
As complexity in a program increases, getting to the level of detail of defining the if...else becomes important. Using plain English to define the business logic, and allowing AI to fill in the gaps, will likely lead to a lot of logic errors that go uncaught until there is a big problem.

For the AI to avoid this, I'd imagine it would need to be directed not to assume anything, and instead to ask for clarification on each and every thing until there is no more ambiguity about what is required. This would be a very long and tedious back-and-forth, the kind where someone will want to delegate the task to someone else, and at that point the person might as well write their own logic in certain areas. I've found myself effectively giving pseudocode to the LLM to try to properly explain the logic that is needed.

rafterydj · 47m ago
I mean, that's basically all high-level programming languages are, right?

I would argue that as an industry we love high-level programming languages because they let you understand what you are writing much more easily than looking at assembly code. Excellent for the vast majority of needs.

But then people go right on and build complicated frameworks and libraries with those languages, and very quickly the complexity (albeit presented much better for reading) comes back into a project.

smogcutter · 58m ago
But we’re way beyond templates here.

There will be niches in research, high-performance computing & graphics, security, etc. But we're in the last generation or two that's going to hand-write its own CRUD apps. That's the livelihood of a lot of software developers around the world.

rafterydj · 34m ago
I don't really disagree with you about hand-writing CRUD apps. But I'm not sure that an off-the-shelf solution, AI output or not, that spins up CRUD interfaces would _actually_ erase software as an industry.

To me it's similar to saying that there's no need for lawmakers after we get the basics covered. Clearly it's absurd, because humans will always want to expand on (or subtract from) what's already there.

jrflowers · 31m ago
> For every masterpiece made by a Japanese cabinet maker, imagine how many Ikea cabinets there are out there

Minimalist design isn’t the result of minimal effort. It actually takes quite a lot of time and skill to design a cabinet that can flat pack AND be assembled easily AND fit cost constraints AND fit manufacturing and shipping and storage constraints AND have mass market appeal.

IKEA stuff exists because of hundreds or thousands of people with expertise in their roles, not in spite of them.

Cheer2171 · 1h ago
People do love IKEA, Wal-Mart, McDonald's, and The Gap.
therein · 1h ago
It is an analogy that only survives an initial glance, especially since the CNC-made cabinets are not full of design flaws. The analogy would only make sense if those CNC cabinets were generated by a CNC AI that may or may not follow the sensibilities of a human designer, or if inexperienced carpenters just described the design verbally to the CNC machine instead of carefully encoding their design into G-code.

Clearly you don't value the process of coding if you think it is analogous to a carpenter manually carving the details of a design that's above the process of building it. It is not a good analogy, at all.

TrackerFF · 1h ago
Surely you can see the point, though: there are numerous trades that previously involved mastery of tools and a process of designing custom products for customers, products made to last for decades, centuries even.

But many of those have long since been automated away, because customers are more than happy to purchase cheaper products, made almost entirely by machines.

"AI-free" development will be a tiny niche in the coming years and decades. And those developers will not get paid any extra for doing it the old way. Just like artisanal workers today.

dgfitz · 1h ago
What's the saying? Software projects are never finished, only abandoned.

Cabinets are finished.

logicprog · 1h ago
There's plenty of cheap furniture that was designed only in CAD or something that is flimsy, doesn't fit human proportions well, and looks ugly in real life, because it was quicker to just throw it together on the computer, CNC it out, and mail the parts to people to assemble themselves than to actually carefully test it and work out the kinks. That's basically what half of IKEA is. So I think this is a decent analogy.
dingdingdang · 1h ago
I mean, yes. And also, it's the age-old problem that arises every time a new technology fully arrives: it requires a steadier hand than before in order not to have ballistic effects on society. This goes for books over what-my-betters-told-me, swords over flint blades, automobiles over horse-carts, and indeed ALL technology. New tech of any real substance invariably carries the need for a cultural shift toward a more advanced stance that allows for the gainful use of said technology at scale.
coolThingsFirst · 23m ago
Music to my ears; I can't wait for this slop generator to flop.