LLMs are mirrors of operator skill

ghuntley | ghuntley.com | 47 points | 90 comments | 6/4/2025, 2:40:21 PM

Comments (90)

makmanalp · 1d ago
Counterthoughts: a) These skills fit on a double sided sheet of paper (e.g. the claude code best practices doc) and b) what these skills are has been changing so rapidly that even the best practices docs fall out of date super quick.

For example, managing the context window has become less of a problem: newer models have larger windows, and tools like the auto-resummarization / context refresh in Claude Code mean you might be just fine without doing anything yourself.

All this to say that the idea that you're left significantly behind if you aren't training yourself on this feels bogus (I say this as a person who /does/ use these tools daily). It should take any programmer no more than a few hours to learn these skills from scratch with the help of a doc, meaning any employee you hire should be able to pick them up no problem. I'm not sure it makes sense as a hiring filter. Perhaps in the future this will change. But right now these tools are built more like user-friendly appliances - more like a cellphone or a toaster than a technology you have to wrap your head around, like a compiler or a database.

jmsdnns · 1d ago
A key thing that research at Wharton has shown is that LLMs elevate people with less experience a lot more than they elevate people with a lot of experience.

If we take "operator skill" to mean "they know how to write prompts", there is some truth to it, and we can see it in whether or not the operator is deliberately designing the context window.

But for the more important question, how useful LLMs are has an inverse relationship with how skilled the person is in the domain they're using them for. This is why the best engineers mostly shrug at LLMs while those who aren't the best feel a big lift.

So, LLMs are not mirrors of operator skill. This post is instead an argument that everyone should become prompt engineers.

namuol · 1d ago
Disagree. Poor engineers will go in circles with AI because they will under-specify their tasks and fail to recognize improper solutions. Ultimately, if you're not thoughtful about your problem and critical about the solution, you will fail. This is true with or without AI at the wheel.
Jensson · 1d ago
> Poor engineers will go in circles with AI

But they move quickly around that circle, which makes them feel much more productive. And if you don't need anything outside of the circle, it's good enough.

jmsdnns · 1d ago
It's not an opinion, it's what research has shown many times. For example, less experienced people can ask the LLM how to get started or what an experienced engineer might do, using it as a research tool before writing code.
a_e_k · 1d ago
Experience, though, can definitely vary by domain. Recently, trying to get one to code an algorithm I already had a pretty good idea of took longer and gave worse results than just doing it myself.

On the other hand, something like wrestling with the matplotlib API? I don't have too much experience there and an LLM was a great help in piecing things together.

ghuntley · 1d ago
Keen to read the research. Can you drop the link?
jmsdnns · 1d ago
I was at Penn's first AI conference last year and heard Dr Lilach Mollick's keynote, where she said this has been shown to be true over and over. She doesn't seem to publish often, but her husband Ethan always has a lot to say about AI.

https://www.oneusefulthing.org/p/everyone-is-above-average

ghuntley · 1d ago
Thanks.
namuol · 1d ago
> If I were interviewing a candidate now, the first things I'd ask them to explain would be the fundamentals of how the Model Context Protocol works and how to build an agent.

Um, what? This is the sort of knowledge you definitely do not need in your back pocket. It’s literally the perfect kind of question for AI to answer. Also this is such a moving target that I suspect most hiring processes change at a slower pace.

jrm4 · 1d ago
Having worked with and around a lot of different people and places in IT as an instructor, frankly the funniest thing I've observed is that everyone in IT believes there is some baseline concept or acronym or something that "ought to be obvious and well known to EVERYONE."

And it never is. There's just about nothing that fits this criterion.

jiggawatts · 1d ago
Hashtable.

Explain how it works and what you use it for.

If you don’t know this, you’re not a programmer in any language, platform, framework, front end or back end.

It’s my go-to interview question.

Tell me what’s wrong with that.

aezart · 1d ago
What's wrong with that is that a lot of languages don't call them hashtables. I don't think I've used the actual term since like 2016.
jrm4 · 11h ago
Boom. Nailed exactly what happens, and how you deal with it.
jiggawatts · 1d ago
Sure, but if you've never even heard the term hash table, and you don't know that Dictionary<K,V> or your language's "associative array" or whatever uses hash tables under the hood... you're not a programmer in my mind.

It's such a foundational thing in all of modern programming, that I just can't imagine someone being "skilled" and not knowing even this much. Even scripting languages use hash tables, as do all interpreted languages such as Python and JavaScript.
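
Roughly the level of answer I'd expect, sketched in Python (a dict is a hash table under the hood; this is just an illustrative example, not part of the interview):

    # Python's dict is a hash table: average O(1) insert and lookup.
    counts = {}
    for word in ["red", "blue", "red", "green", "red"]:
        counts[word] = counts.get(word, 0) + 1   # hash(word) picks the bucket
    print(counts["red"])   # 3
    # Typical uses: counting, de-duplication, caching/memoization, lookups by key.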

Keep in mind that I'm not asking anyone to implement a hash table from scratch on a white board or some nonsense like that!

There ought to be a floor on foundational knowledge expected from professional developers. Some people insist on this being zero. I don't understand why it shouldn't be at least this?

You can't get a comp-sci or comp-eng degree from any reputable university (or even disreputable ones) without being taught at least this much!

What next? Mechanical engineers who can't be expected to hold a pencil or draw a picture using a CAD product?

Surgeons that have never even seen a scalpel?

Surveyors that don't know what "trigonometry" even means?

Where's your threshold?

tough · 1d ago
Try asking ChatGPT about MCP: it's so new that it will hallucinate about some other random stuff.

It's still a bad interview question unless you're hiring someone to build AI agents, imho.

dgfitz · 1d ago
I would just give some sort of LLM-esque answer that sounds correct but is very wrong, and hope I would get the opportunity to follow that up with: "Oh, I must have hallucinated that, can you give me a better prompt?"

I crack myself up.

layer8 · 1d ago
Don’t forget to apologize and acknowledge that they are right.
a_e_k · 1d ago
You're right to ask about that! ...
ghuntley · 1d ago
No, it is actually a critical skill. Employers will be looking for software engineers who can orchestrate their job function, and these are the two key primitives for doing that.
kartoffelsaft · 1d ago
The way it is written is to say that this is an important interview question for any software engineering position, and I'm guessing you agree by the way you say it's critical.

But by the same logic, should we be asking for the same knowledge of the Language Server Protocol and tools like tree-sitter? They're integral right now in the same way these new tools are expected to become (and have become for many).

As I see it, knowing the internals of these tools might be the thing that makes the hire, but not something you'd screen every candidate with who comes through the door. It's worth asking, but not "critical." Usage of these tools? sure. But knowing how they're implemented is simply a single indicator to tell if the developer is curious and willing to learn about their tools - an indicator which you need many of to get an accurate assessment.

ghuntley · 1d ago
Understanding how to build an agent and how Model Context Protocol works is going to be, by my best guess, the new "what is a linked list and how do you reverse a linked list" interview question in the future. Sure, new abstractions are going to come along, which means that you could perhaps be blissfully unaware about how to do that because there's a higher order function to achieve such things. But for now, we are at the level of C and, like C, it's essential to know what those are and how to work with them.
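
To make that concrete: at its core an agent is just a loop that sends the conversation plus tool descriptions to the model, executes whatever tool call it asks for, feeds the result back, and repeats. A rough sketch (call_llm and the tool wiring here are hypothetical placeholders, not any particular vendor's API; MCP standardizes the same model-asks / host-executes idea):

    # Minimal agent loop sketch; call_llm is a stand-in for your chat client.
    def run_agent(task, tools, call_llm, max_steps=10):
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = call_llm(messages, tools)        # model sees history + tool specs
            if reply.get("tool_call") is None:
                return reply["content"]              # no tool needed; final answer
            name, args = reply["tool_call"]
            result = tools[name](**args)             # host executes the requested tool
            messages.append({"role": "tool", "name": name, "content": str(result)})
        return "step limit reached"
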
elliotbnvl · 1d ago
It's a good heuristic for determining how read in somebody is to the current AI space, which is super important right now, regardless of being a moving target. The actual understanding of MCP is less important than the mindset having such an understanding represents.
namuol · 1d ago
Hard disagree. It’s not super important to be AI-pilled. You just need to be a good communicator. The tooling is a moving target, but so long as you can explain what you need well and can identify confusion or hallucination, you’ll be effective with them.
elliotbnvl · 1d ago
Nope. Being a good communicator and being good at AI are two completely different skillsets. Plenty of overlap, to be sure, but being good at one does not imply being good at the other any more than speaking first-language quality English means you are good at fundraising in America.

I know plenty of good communicators who aren't using AI effectively. At the very least, if you don't know what an LLM is capable of, you'll never ask it for the things it's capable of and you'll continue to believe it's incapable when the reality is that you just lack knowledge. You don't know what you don't know.

mromanuk · 1d ago
Every time I ask an LLM to write some UI and model for SwiftUI, I have to tell it to use the @Observable macro (the new way), which it normally does, once asked.

The LLM tells me that it prefers the "older way" because it's more broadly compatible. That's OK if that's what you're aiming for, but if the programmer doesn't know about it, they'll be stuck with the LLM calling the shots for them.

bcrosby95 · 1d ago
You need to create your own preamble that you include with every request. I generally have one for each codebase, which includes a style guide, preferred practices & design (lots of 'best practices' are cargo culted and the LLM will push them on you even when it doesn't make sense - this helps eliminate those), and declarations of common utility functions that may need to be used.
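
As a rough illustration (every name and rule here is made up for the example), the preamble is just text prepended to each request:

    # Hypothetical example: one preamble per codebase, prepended to every request.
    PREAMBLE = """\
    Style: 4-space indent, type hints everywhere, no one-letter names.
    Design: plain functions over class hierarchies; do not add new dependencies.
    Utilities you may assume exist: load_config(path), retry(fn, attempts).
    Never add backwards-compatibility shims unless explicitly asked.
    """

    def build_prompt(request: str) -> str:
        return f"{PREAMBLE}\nTask:\n{request}"
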
klntsky · 1d ago
Use always-enabled rules in Cursor (or your agentic editor of choice).
starlust2 · 1d ago
A thing people miss is that there are many different right ways to solve a problem. A legacy system might need the compatibility, or it might be greenfield. If you leave a technical requirement out of the prompt, you are letting the LLM decide. Maybe it will agree with your nuanced view of things, but maybe not.

We're not yet at a point where LLM coders will learn all your idiosyncrasies automatically, but those feedback loops are well within our technical ability. LLMs are roughly a knowledgeable but naïve junior dev; you must train them!

Hint: add that requirement to your system/app prompt and be done with it.

maxwell · 1d ago
It's just a higher level abstraction, subject to leaks as with all abstractions.

How many professional programmers don't have assemblers/compilers/interpreters "calling the shots" on arbitrary implementation details outside the problem domain?

LorenPechtel · 1d ago
But we trust those tools to do the job correctly. The compiler has considerable latitude in messing with the details so long as the result is guaranteed to match what was ordered; when we find any deviation from that, even in an edge case, we consider it a bug. (Borland Pascal debugger, I'm looking at you. I wasted a *lot* of time on the fact that in single-step mode you peacefully "execute" an invalid segment register load!) LLMs lack this guarantee.
maxwell · 15h ago
We trust those tools to do the job correctly now.

https://vivekhaldar.com/articles/when-compilers-were-the--ai...

cpinto · 1d ago
Have you tried writing rules for how you want things done, instead of repeating the same things every time?
jasonjmcghee · 1d ago
The trained behavior of attempting backward compatibility has never once been useful to me and is a constant irritation.

> Please write this thing

> Here it is

> That's asinine, why would you write it that way? Please do this instead

> I rewrote it and kept backward compatibility with the old approach!

:facepalm:

dwaltrip · 1d ago
You will get better results if you reset the code changes, tweak the prompt with new guidelines (e.g. don’t do X), and then run it again in a fresh chat.

The less cruft and red herrings in the context, the better. And likewise with including key info, technical preferences, and guidelines. The model can’t read our minds, although sometimes we wish so :)

There are lots of simple tricks to make it easier for the model to provide a higher quality result.

Using these things effectively is definitely a complex skill set.

diggan · 1d ago
Sounds like an OK default, especially since the "better" (in your opinion) way can be achieved by just adding "Don't try to keep backwards compatibility with old code" somewhere in your reusable system prompt.

It's mostly useful when you work a lot with "legacy code" and can't just remove things willy nilly. Maybe that sort of coding is over-represented in the datasets, as it tends to be pretty common in (typically conservative) larger companies.

patrickhogan1 · 1d ago
I agree with most of what you’re saying—especially the Unix pipe analogy and the value of building a prompt library while understanding which LLM to use.

That said, I think there’s value in a catch-all fallback: running a prompt without all the usual rules or assumptions.

Sometimes, a simple prompt on the latest model just works—and often in surprisingly more effective ways than one with a complex prompt.

foldr · 1d ago
Isn’t this just a roundabout way of saying that people who are skilled at using LLMs will get better results from them than people who aren’t? In other words, LLMs “mirror operator skill” in about the same way as hammers, paintbrushes or any other tool. Hardly a blinding insight.
energy123 · 1d ago
It's a controversial opinion because both AI optimists and AI pessimists can find room for disagreement with the premise. The optimists think vibe coding is about to be fully automated and humans don't have long, one or two years at best. The pessimists think LLMs don't add much value in the first place. In either case they would disagree with the premise.
marinmania · 1d ago
Agreed. By HN standards I am a very shitty programmer, and as of a year ago I would have said coding takes up about 25% of my time. I pretty much just make demos to display some non-coding research.

I think with the rise of LLMs, my coding time has been cut down by almost half. And I definitely need to bring in help less often. In that sense it has raised my floor, while making the people above me (not necessarily super coders, but still more advanced) less needed.

brcmthrowaway · 1d ago
Is there a good guide resource on properly using LLMs/agents for coding? How to get started?
spacebanana7 · 1d ago
I found that learning prompt engineering was largely a waste of time. The value of the knowledge seems to depreciate so quickly.

I spent loads of time learning to use special syntax that helped GPT 3.5 or comfy ui for Stable diffusion. Now the latest models can do exactly what I want without any of those “high skill” prompts. The context windows are so big that we can be quite lazy about dumping files into prompts without optimisation.

The only general advice I'd give is to take more risk and continually ask more of the models.

ghuntley · 1d ago
See the tweet on workflow that I put in the post. No courseware, no bullshit, it's there. Have fun. The blog has plenty of guidance from using specs to creating standard libraries of prompts and how to clone a venture capital-backed company while you sleep.
satisfice · 1d ago
A lot of these questions no one knows the answer to.

If you think you know “the best LLM to use for summarizing” then you must have done a whole lot of expensive testing— but you didn’t, did you? At best you saw a comment on HN and you believed it.

And if you did do such testing, I hope it wasn’t more than a month ago, because it’s out of date, now.

The nature of my job affords me the luxury of playing with AI tech to do a lot of things, including helping me write code. But I’m not able to answer detailed technical questions about which LLM is best for what. There is no reliable and durable data. The situation itself is changing too quickly to track unless you have the resources to have full subscriptions to everything and you don’t have any real work to do.

ninetyninenine · 1d ago
My hope is that the AI never improves to take over my job and the next generation of programmers is so used to learning programming with AI that they become mirrors of hallucinating AI. That eliminates ageism and AI taking my job.

Realistically though I think AI will come to a point where it can take over my job. But if not, this is my hope.

whatnow37373 · 1d ago
Maybe we can learn some lessons from digital artists who naturally fret over the use of their skills and how they will be replaced by stable diffusion and friends.

In one way, yes, this massively shifts power into the hands of the less skilled. On the other hand, if you need some proper, and I mean proper, marketing materials, who are you going to hire? A professional artist using AI or some dipshit with AI?

There will be slop of course but after a while everyone has slop and the only differentiating factor will be quality or at least some gate-kept arbitrary level of complexity. Like how rich people want fancy hand made stuff.

Edit: my point is mainly that the level will rise to a point where you'd need to be a scientist to create a - then - fancy app again. You see this with the web. It was easy, and we made it ridiculously, and I mean ridiculously, complicated, to where you need to study computer science to debug React rendering for your marketing pamphlet.

henning · 1d ago
Centering an interview around MCP and "building an agent" - which often means "write a prompt and make HTTP calls to some LLM host service" - is incredibly stupid unless that is the product the company actually builds.

MCP is a tool and may only be minor in terms of relevance to a position. If I use tools that use MCP but our actual business is about something else, the interview should be about what the company actually does.

Your arrogance and presumptions about the future don't make you look smart when you are so likely to be wrong. Enumerating enough predictions until one of them is right is not prescient, it's bullshit.

mouse_ · 1d ago
isn't the whole selling point of ML the idea that operators no longer need to be as skilled? it feels like the goal posts have been moving as of late.
IggleSniggle · 1d ago
That's the selling point to VCs. The selling point to consumers of the tech is that AI will make only them, with their special inherent qualities, outperform their peers even more (unspoken, but also implying that if they are an underperformer they might be able to catch up), but only if they act now! This limited time opportunity will soon be used by everyone!
breckenedge · 1d ago
Sort of. The super users will massively outcompete unskilled users. So while it does raise everyone’s abilities, it will be lopsided
disgruntledphd2 · 1d ago
I mean, all the research (to be fair there's very little) suggests that in CS (customer support) the major gains are for people who are worse than others, and there was little to no impact for the higher-skilled CS people.

And that's kinda what one would expect, given that LLMs are basically a blurry JPEG of the web/github etc.

Like, I think reasoning can help here, but I rarely see good results when prompting an LLM with something complicated (technical statistical problems and good approaches), while they are fantastic at less edge-case stuff (working with Docker and well-known frameworks).

So yeah, definite productivity gains but I'm not convinced that they're as transformational as they are being pitched.

gmerc · 1d ago
That's clearly why the VCs are now pushing 996 and everyone has to work twice as hard. Lol
whartung · 1d ago
Isn't the whole point of <abstraction> that we don't need to worry about the inner details?

A friend currently has an AI workflow that pits two AIs against each other. They start with an issues database. AI 1 pulls issues, fixes them, and commits the fixes. AI 2 reviews the work and files new issues. And the cycle repeats.

After a bunch of work is complete and issues are flagged as done, the tests run green, he grabs all of the commits, walks through them, cleans them up if necessary, and crunches them into one big commit to the main branch.
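
Roughly, the loop looks something like this (every helper here is a hypothetical stand-in for whatever issue tracker and agent harness he actually uses; just a sketch of the shape of it):

    # Hypothetical outline of the fixer/reviewer cycle; all helpers are stand-ins.
    def run_cycle(issues, repo, fixer_ai, reviewer_ai, rounds=5):
        for _ in range(rounds):
            for issue in issues.open():
                patch = fixer_ai.fix(issue)              # AI 1 writes a fix
                commit = repo.commit(patch)              # ...and commits to a work branch
                for finding in reviewer_ai.review(commit):
                    issues.file(finding)                 # AI 2 files new issues
                issues.close(issue)
        # Later, a human reviews the green commits and squashes them into main.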

He loves waking up in the morning to a screen filled with completed things.

He has, essentially, self-promoted to management over some potentially untrustworthy junior developers with occasional flashes of brilliance.

And I was pondering that, and it just reminded me of something I've been feeling for some time.

A lot of modern development is wiring together large, specialized libraries. Our job is to take these things (along with their cascade of dependencies) and put our own little bits of glue and logic around them.

And, heck, if I wanted to glue badly documented black boxes together, I would have gone into Electrical Engineering.

But here's the thing.

While the layers are getting thicker, the abstractions more opaque, in the end, much like a CEO is responsible for that ONE PERSON down in the factory behaving badly, we, as users of this stuff, are responsible for ALL OF IT. Down to bugs in the microprocessor.

When push comes to shove, it's all on us.

We can whine and complain and delegate. "I didn't write the compiler, not my fault." "Not my library..." "The Manager assured the VP who assured me that..."

But doesn't really matter when you have a smoking crater of a system, does it?

Because we're the ones delivering services and such, putting our names on it.

So, yea, no, you don't have to be "as skilled", perhaps, when using something.

But you're still responsible for it.

ghuntley · 1d ago
Absolutely! I kind of hate the term "vibe coding" because of its association with switching your brain off. It is so important for an engineer to take accountability for what they ship.

Now to your ponderoo about libraries: something I've found really fascinating is that I've stopped using open source libraries unless there's a network ecosystem effect, for example Tailwind. For everything else, it's much easier to code-generate it. And if there's something wrong with the implementation, I can take ownership and accountability for it and fix it with a couple more prompts. No more open source bullshit: maintainers who have moved on, waiting to get a pull request merged, supply chain attack vectors from project takeovers, all that noise. It just doesn't exist anymore. It's really changed how I do software development.

bgwalter · 1d ago
"Someone can be highly experienced as a software engineer in 2024, but that does not mean they're skilled as a software engineer in 2025, now that AI is here."

With that arrogance, my only question is: Where is your own code and what makes you more qualified than Linux kernel or gcc developers?

Workaccount2 · 1d ago
Using AI to accelerate your productivity is not the same thing as letting AI do your job. The author seems to be pointing out that if you are someone who is going to dig in their heels as a "never-ai" dev, you are liable to be leaving productivity on the table.
skydhash · 1d ago
How so? For any mature project (aka anything past launch), most PRs are very small and you spend more time analyzing the tradeoffs of the solution than writing it.
q3k · 1d ago
Yeah, any of these AI accelerationist threads make me feel like I'm working in some parallel universe.

Writing code has never been a bottleneck for me. Planning out a change across multiple components, adhering to both my own individual vision and the project direction and style, fixing things along the way (but only when it makes sense), comparing approaches and understanding tradeoffs, knowing how to interpret loose specs... that's where my time goes. I could use LLM assistance, but given all of the above, it's faster for me to write it myself than to try to distill all of this into a prompt.

skydhash · 1d ago
And for most things, you already have a basic idea of how to implement it (or where to go to find information). But then you have to check your assumptions about the codebase, and that's a time sink, especially in a collaborative environment. Like, what is the condition for this modal to appear? And without realizing it, you're deep into the backend code deciphering the logic for a particular state, with three design documents open and a Slack chat going with 4 teammates.
elliotbnvl · 1d ago
If you have a good codebase, a good team, and strong product direction behind you a lot of the more abstract work you're describing goes away because most of those decisions were made weeks, months, or years ago by the time you're ready to put pen to paper on the actual code. Maybe that's part of why your experience is so different?
skydhash · 1d ago
Sometimes, one of those decisions was right in the past but is wrong in the current context (eg. the company is now trying to get government contracts). Changing one of these has rippling effects on all the other decisions, so you're trying to reconcile the two sets in a way that minimizes the need to rewrite code. It's more like a research lab than a factory assembly line.
JackSlateur · 1d ago
Yes, it is because your job is to think

Those AI accelerationists are not thinking and, as such, are indeed boosted by a non-thinking machine

In the end, code is nothing but a way to map your intelligence into the physical world (using an interface called "computer")

bryanlarsen · 1d ago
One of the best tools for that task is rubber duck debugging. AIs are better than rubber ducks. Often not very much better, but sometimes an inane comment they make in reply triggers a eureka.
ghuntley · 1d ago
Exactly. One of my favorite things to do is to dump a code path into the context window and ask it to generate a mermaid sequence diagram or a class diagram explaining how everything is connected together.
ethanwillis · 1d ago
But you don't need an LLM to generate that kind of diagram do you?
elliotbnvl · 1d ago
It's not arrogance, since they're not asserting anything about themselves. It's a factual observation with which you're free to disagree - but it should be challenged directly rather than via ad hominem if you want to actually make a point rather than just collect internet snark kudos.

Also, there are far more generic web developers than there are Linux kernel developers, and they represent the vast majority of the market share / profit generation in software development, so your metric isn't really relevant either.

skydhash · 1d ago
So what has changed about the realm of programming that makes all the skills obsolete, including the skill of learning new programming thingies?

The DOM API is old, all the mainstream backend languages are old, and Unix administration has barely changed (only the way we use those tools has). Even Elasticsearch is 15 years old. Amazon S3 is past drinking age in many countries around the world. And that's just pertaining to web projects.

You just need to open a university textbook to realize how old many of the fundamentals are. Most shiny new things are old stuff repackaged.

elliotbnvl · 1d ago
A lot of people are rejecting AI because of how transformational it is. Those people will fall behind the people who adopt it aggressively.

It's akin to people who refused to learn C because they knew assembly.

skydhash · 1d ago
I don't think people refused to learn C (which is not particularly hard to learn for someone who knew assembly and the various other languages of the time). A lot of compilers were buggy, and people had lots of assembly snippets for particular routines. And that's not counting mature projects that were already in assembly and had to be maintained. A lot of programmers are actually fine trying new stuff out, but some are professionals and don't bring everything under the sun into their work projects.
elliotbnvl · 1d ago
You're missing the point. People refused to learn it not because it was technically challenging but because it was a transformation. It happens with every increase in abstraction; folks fall by the wayside because they don't want change.

The same thing is happening with LLMs. If anything, the gap is far smaller than between assembly and C, which only serves to prove my point. People who don't understand it or like it could easily experience massive productivity gains with a minimum of effort. But they probably never will, because their mindset is the limiting factor, not technical ability.

It really comes down to neural plasticity and willingness to adapt. Some people have it, some people don't. It's pretty polarizing, because for the people that don't want change it becomes an emotional conversation rather than a logical one.

What's the opportunity cost of properly exploring LLMs and learning what everybody else is talking about? Near zero. But there are plenty of people who haven't yet.

skydhash · 1d ago
I have, and it's not amazing. I'm pretty sure a lot of people have and agree with me. Why ask an LLM when I can just open an API reference and have all the answers in a dense and succinct format that gives me what I need?

Let's say I'm writing an Eloquent query (Laravel's ORM) and I forget the signature for the where method. It takes like 5 seconds to find the page and have the answer (less if I'm using Dash.app). It would take me longer to write a prompt for that. And I'd have to hope the model got it right.

For bigger use cases, a lot of the time I already know the code; the reason I haven't written it yet is that I'm thinking about how it would impact the whole project architecture. Once I have a good feel, writing the code is a nice break from all of those thinking sessions. Like driving on a scenic route. Yeah, you could have an AI drive you there, but not when you're worrying about it taking the wrong turn at every intersection.

JackSlateur · 1d ago
If I wanted to babysit a dim-witted unit doing mindless things, I would not have chosen this career.

I've yet to see a single occurrence at work (a single one!) of something done better/quicker/easier with AI (as a dev). I've read lots of bullshit on the internet, sure, but in my day-to-day real-world experience it was always a disaster disguised as a glorious success story.

thih9 · 1d ago
> It's not arrogance, since they're not asserting anything about themself.

But you can be arrogant without referencing yourself directly.

After all, anything you say is implicitly prefixed with “I declare that”.

E.g. one of Feynman’s “most arrogant” statements is said to be: “God was always invented to explain the mystery. God is always invented to explain those things that you do not understand.”[1] - and there’s no direct self reference there.

[1]: https://piggsboson.medium.com/feynmans-most-arrogant-stateme...

falcor84 · 1d ago
TFA didn't say that experienced software engineers in 2024 "necessarily aren't" skilled as a software engineer in 2025, but just that they "aren't necessarily" so, which is an entirely valid point regardless of AI.
incomingpain · 17h ago
I have a greater appreciation of LLMs after working out how to run them offline.

My 2-year-old graphics card sure could be bigger... kicks can... it was plenty for Starfield...

Holy crap, VRAM is expensive!

>If they waste time by not using the debugger, not adding debug log statements, or failing to write tests, then they're not a good fit.

Why would you deprive yourself of a tool that will make you better?

jbellis · 1d ago
Yes! This is an underappreciated point that both sides in "AI makes coding better" vs "AI just writes slop" have mostly missed. Co-authoring code with AI is a qualitatively different activity than writing code by hand, and it requires new skills.

I started Brokk to give humans better tools with which to do this: https://github.com/BrokkAi/brokk

ghuntley · 1d ago
Hey dude, it's been a couple weeks since we caught up for a zoom. Maybe it's three weeks now. Still keen to catch up again, dude. It's gonna be a little bit busy for the next couple of weeks. I've got two conference talks to do then I'm gonna be over in San Fran, but keen.
elliotbnvl · 1d ago
> If they waste time by not using the debugger, adding debug log statements, or writing tests, then they're not a good fit.

Do you mean writing manual tests? Because having the LLM write tests is key to iteration speed without backtracking.

JackSlateur · 1d ago
The only purpose of tests is to help you define good behavior and bad behavior, and keep it that way.

So, when you write tests, your main job is to think (define what is good and what is bad)

As such, using AI to write tests is writing useless tests.

I had a job interview this Monday; I asked the guy: "Do you use AI?" He mumbled something like "yes". Then I baited him: "It's quite handy for writing tests!" He responded: yes, but no (for the above reason).

He got the job

elliotbnvl · 1d ago
All programming is thinking. If AI can write good code, it can write good tests. Your job – for now – is to make sure the tests are good, if only because for the time being we're still slightly better at reasoning than the AI is. Just ask Garry Kasparov if that'll ever change.
JackSlateur · 1d ago
Who is Garry Kasparov? Wikipedia says a chess player.

Chess has seen no innovation in hundreds of years or something.

I get your point: with AI in charge, the world will stagnate.

What I do not share is your belief that this is a good outcome

elliotbnvl · 1d ago
Former world chess champion, who lost to Deep Blue (a chess AI) in 1997 after beating it in 1996. This marked the first time an AI beat a human on the world stage in a game of pure reasoning.

I don't believe AI will cause the world to stagnate at all. I think it will unleash humanity's creativity in a way orders of magnitude greater than history has ever seen.

ehutch79 · 1d ago
Unit tests only tell you that a function isn't working, not what's happening. Convincing actual people to just throw a print statement in the middle of a method to see what something is doing, or what data it's seeing, can be like pulling teeth.
sceptic123 · 1d ago
That's what the debugger is for
ehutch79 · 1d ago
Yes. Sometimes just chucking a print is more convenient.

But yeah, debugger is your friend

ghuntley · 1d ago
Typo on my part. I completely agree. Back pressure is everything.

Edit: I've just updated the post.

elliotbnvl · 1d ago
Nice.

FWIW, since I just realized my only main comment was a criticism, I found your article very insightful. It baffles me how many people will disagree with the general premise or nit-pick one tiny detail. The only thing more surprising to me than the rate at which AI is developing is the number of developers jamming their heads into the sand over it.

ghuntley · 1d ago
Ah, I wrote a blog post specifically about this. Some engineers are not going to make it. https://ghuntley.com/ngmi
elliotbnvl · 1d ago
Oh, I read this the other day on HN and agreed strongly. I hadn't realized this was the same blog. Nice!
foldr · 1d ago
The scope of the ‘not’ is also unclear here. I think they’re saying that you waste time if you don’t do all the things in the list, but it could also be read as saying that not using the debugger is a waste of time, and that adding debug log statements and writing tests is also a waste of time.