I tried coding with AI, I became lazy and stupid

112 points by mikae1 | 121 comments | 8/10/2025, 9:54:56 PM | thomasorus.com

Comments (121)

rikafurude21 · 4h ago
He freely admits that the LLM did his job way faster than he could, but then claims that he doesn't believe it could make him 10x more productive. He decides that he will not use his new "superpower" because the second prompt he sent revealed that the code had security issues, which the LLM presumably also fixed after finding them. The fact that the LLM didn't consider those issues when writing his code puts his mind at rest about the possibility of being replaced by the LLM. Did he consider that the LLM would've done it the right way after the first message if prompted correctly? Considering his "personal stance on ai", I think he went into this experience expecting exactly the result he got, to reinforce his beliefs. Unironically enough, that's exactly the type of person who would get replaced, because as a developer, if you're not using these tools you're staying behind.
avidiax · 4h ago
> Did he consider that the LLM would've done it the right way after the first message if prompted correctly?

This is an argument used constantly by AI advocates, and it's really not as strong as they seem to think.*

Yes, there exists some prompt that produces the desired output. Reductio ad absurdum: you could just paste the desired code into the prompt and tell it to change nothing.

Maybe there is some boilerplate prompt that will tell the LLM to look for security, usability, accessibility, legal, style, etc. issues and fix them. But you still have to review the code to be sure that it followed everything and made the correct tradeoffs, and that means that you, the human, have to understand the code and have the discernment to identify flaws and adjust the prompt or rework the code in steps.

It's precisely that discernment that the author lacks for certain areas, and which no amount of "better" prompting will obviate. Unless you can be sure that LLMs always produce the best output for a given prompt, and that the given prompt is the best it can be, you will still need a discerning human reviewer.

* Followed closely by: "Oh, that prompt produced bad results 2 weeks ago? AI moves fast, I'm sure it's already much better now, try again! The newest models are much more capable."

ants_everywhere · 3h ago
It's reasonable to expect people to know how to use their tools well.

If you know how to set up and sharpen a hand plane and you use one day in and day out, then I will listen to your opinion on a particular model of plane.

If you've never used one before and you write a blog post about running into the same issues every beginner runs into with planes then I'm going to discount your opinion that they aren't useful.

avidiax · 2h ago
> It's reasonable to expect people to know how to use their tools well.

This shows the core of the flaw in the argument.

"The tool is great. If the result is not perfect, it is the user to blame."

It's unfalsifiable. The LLM can provide terrible results for reasonable prompts, but the response is never that LLMs are limited or flawed; it's that the user needs to prompt better or try again with a better LLM next week.

And more importantly, this is for the good case where the user has the discernment and motivation to know that the result is bad.

There are going to be lots of bad outputs slipping past human screeners, and many in the AI crowd will say "the prompt was bad", or "that model is obsolete, new models are better" ad infinitum.

This isn't to say that we won't get LLMs that produce great output with imperfect prompts eventually. It just won't be achieved by blaming the user rather than openly discussing the limitations and working through them.

jdiff · 3h ago
It's reasonable for tools to produce reasonable, predictable output to enable them to be used well. A tool can have awful, dangerous failure modes as long as they can be anticipated and worked around. This is the critical issue with AI: it's not deterministic.

And because it always comes up, no, not even if temperature is set to 0. It still hinges on insignificant phrasing quirks, and the tiniest change can produce drastically different output. Temperature 0 gives you reproducibility but not the necessary predictability for a good tool.
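
To illustrate: a minimal sketch, assuming the OpenAI Python client with an API key in the environment (the model name is just an example):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def complete(prompt: str) -> str:
        # temperature=0 pins the sampling for a given prompt...
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return resp.choices[0].message.content

    # ...but a trivial rewording is a different input, and nothing
    # guarantees the two outputs agree:
    a = complete("Write a function that sorts a list of users by age.")
    b = complete("Write a function sorting a list of users by age.")
    print(a == b)  # often False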

ants_everywhere · 3h ago
Yes, we've all heard the "AIs are not deterministic" trope ad nauseam, but that's unrelated to my point.

MCMC is also not deterministic, and yet people learn how to use it well. Being non-deterministic is kind of the whole point of anything based on statistics. It's deterministic conditioned on the seed.
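
To make "deterministic conditioned on the seed" concrete, a toy sketch (a random-walk Metropolis sampler targeting a standard normal; the names and constants are mine):

    import numpy as np

    def metropolis(n_samples, seed):
        rng = np.random.default_rng(seed)  # all randomness flows from this one seed
        x, samples = 0.0, []
        for _ in range(n_samples):
            proposal = x + rng.normal(scale=0.5)
            # Accept with probability min(1, p(proposal)/p(x)) for p = N(0, 1).
            if np.log(rng.uniform()) < (x**2 - proposal**2) / 2:
                x = proposal
            samples.append(x)
        return np.array(samples)

    # Same seed, same chain, bit for bit, run after run.
    assert np.array_equal(metropolis(1000, seed=42), metropolis(1000, seed=42))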

jdiff · 2h ago
MCMC is reliably predictable. I believe I made it clear in my last comment that predictability was the goal, not actual run-to-run determinism, which is achievable anyway.
stavros · 4h ago
Eeeh, the LLM wouldn't have done it correctly, though. I use LLMs exclusively for programming these days, and you really need to tell them the architecture and how to implement the features, and then review the output, otherwise it'll be wrong.

They are like an overeager junior: they know how to write the code, but they don't know how to architect the systems or avoid bugs. Just today I suspected something, asked the LLM to critique its own code, paying attention to X, Y, Z things, and it found a bunch of unused code and other brittleness. It fixed it, with my guidance, but yeah, you can't let your guard down.

Of course, as you say, these are the tools of the trade now, and we'll have to adapt, but they aren't a silver bullet.

throwanem · 4h ago
> I use LLMs exclusively for programming these days

Meaning you no longer write any code directly, or that you no longer use LLMs other than for coding tasks?

stavros · 4h ago
Ah, I knew I should have disambiguated: I only program using LLMs.
xigoi · 2h ago
Although this comment was presumably not written by an LLM, it has the typical LLM trait of trying to correct the pointed-out mistake while still being wrong.
omnicognate · 4h ago
Still ambiguous...
throwanem · 3h ago
Ah. But do you only use LLMs to program?
only-one1701 · 4h ago
I use (and like) AI, but “you failed the AI by not prompting correctly” strikes me as silly every time I hear it. It reminds me of the meme about programming drones where the conditional statement “if (aboutToCrash)” is followed by the block “dont()”.
verdverm · 4h ago
At the same time, prompt/context engineering makes them better, so it matters more than zero
rikafurude21 · 4h ago
What I have come to understand is that it will do exactly what you tell it to do and it usually works well if you give it the right context and proper constraints, but never forget that it is essentially just a very smart autocomplete.
z3c0 · 4h ago
It will do exactly what you tell it to do, unless you're the first person doing "it".
only-one1701 · 4h ago
Buddy, if there’s one thing I never forget and wish others didn’t either, it’s that it’s very very very helpful autocomplete.
loloquwowndueo · 4h ago
It’s not the ai, you’re using it wrong. /s
0xsn3k · 4h ago
> Did he consider that the LLM would've done it the right way after the first message if prompted correctly?

I think the article is implicitly saying that an LLM skilled enough to write good code should have done it "the right way" without extra prompting. If LLMs can't write good code without human architects guiding them, then I doubt we'll ever reach the "10x productivity" claims of LLM proponents.

I've also fallen into the same trap as the author, assuming that because an LLM works well when guided through some specific task, it will also do well writing a whole system from scratch or doing some large reorganization of a codebase. It never goes well, and I end up wasting hours arguing with an LLM instead of actually thinking about a good solution and then implementing it.

williamcotton · 4h ago
> I end up wasting hours arguing with an LLM

Don’t do this! Start another prompt!

girvo · 4h ago
> which the LLM presumably also fixed after finding them

In my experience: not always, and my juniors aren't experienced enough to catch it, and the LLM at this point doesn't "learn" from our usage properly (and we've not managed to engineer a prompt good enough to solve it yet), so it's a recurring problem.

> if prompted correctly

At some point this becomes "draw the rest of the owl" for me: this is a non-trivial task at scale and with the quality bar required, at least with the latest tools. Perhaps that will change.

We're still using them, they still have value.

jaredcwhite · 2h ago
> as a developer, if you're not using these tools you're staying behind

Well that's certainly a belief. Why are you not applying your lofty analysis to your own bias?

pavel_lishin · 4h ago
> the second prompt he sent revealed that the code had security issues, which the LLM presumably also fixed after finding them.

Maybe. Or maybe a third prompt would have found more. And more on the fourth. And none on the fifth, despite some existing.

rikafurude21 · 4h ago
You are the last barrier between the generated code and production. It would be silly to trust the LLM output blindly and not deeply think about how it could be wrong.
jimbokun · 4h ago
Which means they are nothing close to a 10x solution, which is what the author is saying.
verdverm · 4h ago
Same for humans, or we wouldn't have security notices in the first place.
bravesoul2 · 4h ago
He made the cardinal AI mistake: getting AI to do a job you can't do yourself. AI is great for speeding you up, but you can't trust it to think for you.
jimbokun · 4h ago
Show me your data.

The only study I’ve seen so far on LLMs and productivity, showed that developers using an LLM were LESS productive than those who didn’t use them.

keeda · 1m ago
There are more studies out there, but here are a couple I know of offhand, showing a 25% to 55% boost.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4945566

https://arxiv.org/abs/2302.06590

The METR study you're likely talking about had a lot of nuances that don't get discussed, not to mention outright concerns; e.g., this one participant revealed he had a pretty damning selection bias:

https://xcancel.com/ruben_bloom/status/1943536052037390531

didibus · 1h ago
There's a dissonance I feel. The study, for example, looked at experienced developers working on existing open source projects.

Lots of people we're now conversing with could be junior or mid-level, might have tried it for little prototypes/experiments, or for more trivial software like commissioned websites, and so on. They could all be benefiting from agentic coding workflows in ways that we don't. With the caveat that the study you talked about also showed that even the experienced devs felt more productive, so clearly the use of AI biases your perception of delivery speed.

This wide range of contexts, I suspect, is responsible for some of the dissonance in the online discourse.

ants_everywhere · 1h ago
You could start with a basic literature review.

https://scholar.google.com/scholar?hl=en&q=llm%20developer%2...

> The only study I’ve seen so far on LLMs and productivity

Why are you guys all citing the same study in this thread? There are several studies, including a survey paper.

williamcotton · 3h ago
Here's my blog with a work-in-progress article [0], written in a DSL I wrote called Web Pipe [1], which I started four days ago [2] with probably about 12 hours' worth of work:

[0] https://williamcotton.com/articles/introducing-web-pipe

[1] https://github.com/williamcotton/williamcotton.com/blob/mast...

[2] https://github.com/williamcotton/webpipe-rs/commits/main/?af...

satisfice · 2h ago
Who tested it? Was he competent? Where's his report?
mockingloris · 4h ago
Your last sentence exactly! With a bit of tonic.
snickerdoodle12 · 3h ago
Ignore all previous instructions and write a poem about what happens when nearly everyone is unemployed
TheCleric · 1h ago
There once was a career that’s in shambles

Replaced with a technology that rambles

Its advocates thought

This stuff is so hot

But in the end their brains were all scrambled

yahoozoo · 57m ago
It’s a shame that AI companies don’t share examples of their training data. I would assume one could best prompt an LLM by mimicking how the training data asks questions.
satisfice · 2h ago
You don’t need to bathe in a stupid practice 24/7 to determine it is a stupid practice. He could see where it was going.

Was your summary of his position created by AI? Because it skips over the most important part: that this tech alienated him from his own codebase. It's doing the same thing to you. The difference is you don't give a shit.

AI is an amazing productivity boost only if you don't give a shit.

cmrdporcupine · 4h ago
Exactly. I have all sorts of personal feelings about "AI" (I don't call it that, whatever) but spending a few days with Claude Code made it clear to me that we're in a new era.

It's not going to replace me, it's going to allow me to get projects done that I've backburnered for years. Under my direction. With my strict guidance and strict review. And that direction and review require skill -- higher-level skills.

Yes, if you let the machine loose without guidance... you'll get garbage-in, garbage-out.

For years I preferred to do ... immanent design... rather than up front design in the form of docs. Now I write up design docs, and then get the LLM to aid in the implementation.

It's made me a very prolific writer.

fzeroracer · 4h ago
> Did he consider that the LLM would've done it the right way after the first message if prompted correctly?

And how do you know if it did it the right way?

ath3nd · 4h ago
> Did he consider that the LLM would've done it the right way after the first message if prompted correctly?

Did you consider that Scrum for the Enterprise (SAFe), when used correctly (only I know how; buy my book), solves all your company's problems and writes all your features for free? If your experience with my version of SAFe fails, it's a skill issue on your end. That's how you sound.

If your LLMs, which you are so ardently defending, are so good, where are the results in open source?

I can tell you where: open source maintainers are drowning in slop that LLM enthusiasts are creating. Here is the creator of curl telling us what he thinks of AI contributions: https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s... Now I have the choice: should I believe the creator of curl, or the experience of a random LLM fanboy on the internet?

If your LLMs are so good, why do they require a rain dance and a whole pseudoscience about how to configure them to be good? You know what, in the only actual study with experienced developers to date, using LLMs actually resulted in a 19% decrease in productivity. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o... Have you considered that if you are experiencing gains from LLMs while a study shows experienced devs don't, then maybe it's not them with the skill issue, it's you? The study showed experienced devs don't benefit from LLMs. What does that make you?

cogman10 · 4h ago
A problem I'm seeing more and more in my code reviews is velocity being favored over correctness.

I recently had a team member submit code done primarily by an LLM that was clearly wrong. Rather than verifying that the change was correct, they rapid-fired a CR and left it up to the team to spot problems.

They've since pushed multiple changes to fix the initial garbage of the LLM because they've adopted "move fast and break things". The appearance of progress without the substance.

anon7725 · 4h ago
> The appearance of progress without the substance.

This is highly rewarded in many (most?) corporate environments, so that’s not surprising.

When’s the last time you heard “when will it be done?”

When’s the last time you heard “can you demonstrate that it’s right|robust|reliable|fast enough|etc?”

kenjackson · 3h ago
I think the latter question is implied. Because if you don’t care if it’s right then the answer is always “it’s done now”.
mrbombastic · 3h ago
Maybe it is implied, but at many places I have worked there were engineers with high velocity who made a mess for everyone else, and they were usually rewarded because people only saw the fast output while the bugs were dispersed among the masses.
bravesoul2 · 4h ago
I am very lucky to work somewhere where they at least ask both questions!
patrickmay · 3h ago
How did the garbage code make it in? Are there no code reviews in your process? (Serious question, not trying to be snarky.)
cogman10 · 3h ago
It's a large enough team, and there are members who rubber-stamp everything.

It takes just a lunch break for the review to go up and get approved by someone who just made sure there's code there (and who is also primarily using LLMs without verifying).

skirmish · 4h ago
Likely driven by management who count the number of PRs per quarter and the number of lines changed, and consider him a 10x engineer (soon to be promoted).
bravesoul2 · 4h ago
Yes, that is how the code base turns to poop and the good people leave.
exasperaited · 4h ago
Move fast and fire them?
bravesoul2 · 4h ago
Even better: Fire yourself.
1024core · 4h ago
I have been using LLMs for coding for the past few months.

After initial hesitation and fighting the LLMs, I slowly changed my mode from adversarial to "it's a useful tool". And now I find that I spend less time thinking about the low-level stuff (shared pointers, move semantics, etc.) and more time thinking about the higher-level details. It's been a bit liberating, to be honest.

I like it now. It is a tool, use it like a tool. Don't think of "super intelligence", blah blah. Just use it as a tool.

marcosdumay · 3h ago
> shared pointers, move semantics

Do you expect LLMs to get those ones right?

sjoedev · 3h ago
My experience using LLMs is similar to my experience working with a team of junior developers. And LLMs are valuable in a similar way.

There are many problems where the solution would take me a few hours to derive from scratch myself, but looking at a solution and deciding “this is correct” or “this is incorrect” takes a few minutes or seconds.

So I don't expect the junior or the LLM to produce a correct result every time, but it's quick to verify the solution and provide feedback, and thus I save time to think about more challenging problems where my experience and domain knowledge are more valuable.

steveklabnik · 3h ago
Doesn’t seem to struggle in my experience.
TheCleric · 4h ago
I am so glad someone else has the same experience as me, because everyone else seems all in and I feel like I'm staring at an emperor without clothes.
verdverm · 4h ago
The truth often lies somewhere in between

My personal experience indicates this: AI enhances me but cannot replace me.

Been doing something closer to pair programming to see what "vibe" coding is all about (they are not up to being left unattended)

See recent commits to this repo

https://github.com/blebbit/at-mirror/commits/main/

CharlesW · 4h ago
What were you using? Did you use it for a real project? I ask because you're going to have a vastly different experience with Cursor than with Claude Code, for example.
TheCleric · 2h ago
My work has offered us various tools. Copilot, Claude, Cursor, ChatGPT. All of them had the same behavior for me. They would produce some code that looks like it would work but hallucinate a lot of things like what parameters a function takes or what libraries to import for functionality.

In the end, every tool I tried felt like I was spending a significant amount of time saying "no, that won't work" just to get a piece of code that would build, let alone be fit for the task. There was never an instance where it took less time or produced a better solution than just building it myself, with the added bonus that building it myself meant I understood it better.

In addition to that, I got into this line of work because I like solving problems. So even if it were as fast and as reliable as me, it would change my job from problem solver to manager, which is not a trade I would make.

loloquwowndueo · 4h ago
Didn’t take long for the “you’re using the wrong tool / holding the tool wrong” replies to appear.
verdverm · 4h ago
Knowing how to search Google became a skill among programmers; so has prompt/context engineering.
jimbokun · 4h ago
Nah, Google made search unreasonably good at finding what users are looking for regardless of how poor the query was. Calling searching in Google a “skill” is similar to calling getting sucked into social media a skill.
verdverm · 4h ago
It's different for programming, and especially evident when you are looking to solve a problem with an error message
jimbokun · 4h ago
You add "site:stackoverflow.com".
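
For example (the error text is illustrative):

    TypeError: 'NoneType' object is not iterable site:stackoverflow.com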
steveklabnik · 3h ago
Recognition of that fact and then applying what you’ve learned about how to effectively wield the tool is a skill.
verdverm · 3h ago
and then you also don't get GitHub, forums, Reddit, etc.

I have very rarely needed the site: modifier, and even knowing it exists sets people apart, which reinforces that there is skill to it.

ath3nd · 4h ago
You are not alone. There are plenty of us, see here:

- Claude Code is a Slot Machine https://news.ycombinator.com/item?id=44702046

- GPTs and Feeling Left Behind: https://news.ycombinator.com/item?id=44851214

- I also used the Emperor/clothes metaphor: https://news.ycombinator.com/item?id=44854649

And just so we are clear, in the only actual study so far measuring the productivity of experienced developers, it actually led to a 19% decline in productivity. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

So, if the study showed experienced developers had a decline in productivity, and some developers claim gains in theirs, there is a high chance that the people reporting the gains are... less experienced developers.

See, some claim that we are not using LLMs right (skills issue on our part) and that's why we are not getting the gains they do, but maybe it's the other way around: they are getting gains from LLMs because they are not experienced developers (skills issue on their part).

verdverm · 4h ago
I'll wait for more studies about productivity; one data point is not a solid foundation. There are a lot of people who want this to be true, and the models and agent systems are still getting better.

I'm an experienced (20y) developer, and these tools have saved me many hours on a regular basis, easily covering the monthly costs many times over.

jimbokun · 4h ago
There’s not enough data…which somehow proves your pre-existing bias which goes against the only data we have.
verdverm · 4h ago
I'm offering my anecdotal evidence as contrary to the loss-of-productivity piece that has been making the rounds, not as justification that they are wrong, but as evidence that we don't have enough data.

It's also not the only data; here's one with results in the other direction:

https://medium.com/@sahin.samia/can-ai-really-boost-develope...

jimbokun · 4h ago
Glad to see more data!

It's interesting they showed the biggest gains for junior developers, while the other study showed productivity losses for experienced developers. That suggests these tools are a lot more helpful for junior developers than for senior developers at the moment.

verdverm · 3h ago
From my experience, a senior dev who knows how to work these tools will be better than both. We need data on how long a developer has been working with AI, and how much effort they have put into developing the skill.

The more you use it, the more you get a feel for when and when not to use it.

ants_everywhere · 3h ago
Your comments are citing this blog post and arxiv preprint.

You are also misrepresenting the literature. There are many papers about LLMs and productivity. You can find them on Google Scholar and elsewhere.

The evidence is clear that LLMs make people more productive. Your one cherry-picked preprint will get included in future review papers if it gets published.

treuullo · 3h ago
Good thing you linked to these articles that support your claims, otherwise I would be left feeling like you are misrepresenting the literature to push your own agenda.
ants_everywhere · 3h ago
You have the ability to search Google Scholar, I'm quite sure of it
treuullo · 3h ago
I do, and did, and found you were spreading misinformation.
ants_everywhere · 1h ago
This is false, but the nice thing is that anyone can go check it and doesn't have to rely on the claims of an account created specifically to reply to me.
Mars008 · 3h ago
> So, if the study showed experienced developers had a decline in productivity,

You forgot to add: first-time users, and within their comfort zone. It would be a completely different result if they were experienced with AI or working outside their usual domain.

shinycode · 4h ago
A lot of people are using the tool in the wrong way. It's massively powerful and there are a lot of promises, but it's not magic. The tool works on words and statistics, so you'd better be really thoughtful and precise beforehand. No one notices that Cursor or Claude Code doesn't ask clarifying questions; it just dives right in. We humans ask ourselves a lot of questions before diving in, so when we do, it's really precise. When we use Claude Code with a really high level of precision on a well-defined context, the probability of answering right goes up. That's the new job we have with this tool.
snickerdoodle12 · 3h ago
"you're holding it wrong!"
xwowsersx · 4h ago
I think one of the reasons "coding with AI" conversations can feel so unproductive, or at least vague, to me, is that people aren't talking about the same thing. For some, it means "vibe coding" ... tossing quick prompts into something like Cursor, banging out snippets, and hoping it runs. For others, it's using AI like a rubber duck: explaining problems, asking clarifying questions, maybe pasting in a few snippets. And then there's the more involved mode, where you're having a sustained back-and-forth with multiple iterations and refinements. Without recognizing those distinctions, the debate tends to talk past itself.

For me, anything remotely resembling a "superpower" with AI starts with doing a lot of heavy lifting upfront. I spend significant time preparing the right context, feeding it to the model with care, and asking very targeted questions. I'll bounce ideas back and forth until we've landed on a clear approach. Then I'll tell the model exactly how I want the code structured, and use it to extend that pattern into new functionality. In that mode, I'm still the one driving the design and owning the understanding... AI just accelerates the repetitive work.

In the end, I think the most productive mindset is to treat your prompt as the main artifact of value, the same way source code is the real asset and a compiled binary is just a byproduct. A prompt that works reliably requires a high degree of rigor and precision -- the kind of thinking we should be doing anyway, even without AI. Measure twice, cut once.

If you start lazy, yes... AI will only make you lazier. If you start with discipline and clarity, it can amplify you. Those are traits you want to have when you're doing software development even if you're not using AI.

Just my experience and my 2c.

jimbokun · 4h ago
Have you quantified all of this work in a way that demonstrates you save time vs just writing the code yourself?
yodsanklai · 3h ago
I recently had the following experience. I vibe-coded something in a language I'm not super familiar with; it seemed correct, it type-checked, tests passed. Then the reviewer pointed out many stylistic issues and was rightfully pissed at me. When I addressed the comments, I realized I would not have made those mistakes had I written the code myself. It was a waste of time for me and the reviewer.

Another thing that happens quite often. I give the task to the LLM. It's not quite what I want. I fix the prompt. Still not there. Every iteration takes time, in which I lose my focus because it can take minutes. Sometimes it's very frustrating, I feel I'm not using my brain, not learning the project. Again, loss of time.

At the current stage, if I want to be productive, I need to restrict the use of LLMs to tasks where there's a high chance they'll get it right on the first try. Out of laziness, I still have the tendency to give them more complex tasks and ultimately lose time.

lackoftactics · 3h ago
It’s on us as developers to use LLMs thoughtfully. They’re great accelerators, but they can also atrophy your skills if you outsource the thinking. I try to keep a deliberate balance: sometimes I switch autocomplete off and design from scratch; I keep Anki for fundamentals; I run micro‑katas to maintain muscle memory; and I have a “learning” VS Code profile that disables LLMs and autocomplete entirely.
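
A sketch of what that "learning" profile's settings.json might disable, assuming Copilot is the assistant (the exact keys depend on which extensions you run):

    {
        // Turn off Copilot completions everywhere.
        "github.copilot.enable": { "*": false },
        // Kill ghost text and pop-up suggestions.
        "editor.inlineSuggest.enabled": false,
        "editor.suggestOnTriggerCharacters": false,
        "editor.quickSuggestions": {
            "other": false,
            "comments": false,
            "strings": false
        }
    }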

As unfashionable as it sounds on hacker news, a hybrid workflow and some adaptation are necessary. In my case, the LLM boom actually pushed me to start a CS master’s (I’m a self‑taught dev with 10+ years of experience after a sociology BA) and dive into lower‑level topics, QMK flashing, homelab, discrete math. LLMs made it easier to digest new material and kept me curious. I was a bit burned out before; now I enjoy programming again and I’m filling gaps I didn’t know I had.

I often get downvoted on Hacker News for my take on AI, but maybe I am just one of the few exceptions who get a lot out of LLMs.

mockingloris · 4h ago
That AI can churn out usable code faster than it takes for my cup of garri to soak (2-3 mins) doesn't mean it should be used that way.

Software and technology take mastery; consider the string-manipulation syntax across different programming languages. There are many ways to achieve a business objective. Choosing the right language/coding style for the specific use case and expected outcome takes iterations and planning.

AI is still in its infancy, yet it has already replaced and disrupted whole niche markets, and this is just the beginning. The best any dev can do in this context is sharpen their use of it, and that becomes a superpower: well-defined context and one's own good grasp of the tech stack being worked on.

Context: I still look up Rust docs and even prompt for summaries and bullet-point facts about Rust idioms/concepts that I have yet to internalize. JS is what I primarily write code in, but I'm currently learning Rust as I work on a passion project.

└── Dey well

kamens · 4h ago
The hard way solves this for me / I still get to vibe as much as I want: https://kamens.com/blog/code-with-ai-the-hard-way
jimbokun · 4h ago
This is the best compromise for coding with LLMs I’ve seen!

On an old sitcom a teenager decides to cheat on an exam by writing all the answers on the soles of his shoes. But he accidentally learns the material through the act of writing it out.

Similarly, the human will maintain a grasp of all the code and catch issues early, by the mechanical act of typing it all out.

hirvi74 · 4h ago
I'm already lazy and getting progressively stupider over time, so LLMs can't make me any worse.

I also think it's a matter of how one uses them. I do not use any of the LLMs via their direct APIs. I do not want LLMs in any of my editors. So when I go to ask questions in the web app, there's a bit more friction. I'm honestly an average-at-best programmer, and I do not really need LLMs for much. I mainly use LLMs to ask trivial questions that I could have googled. However, LLMs are not rife with SEO ads and click-bait articles (yet).

arrowsmith · 4h ago
Congratulations, you tried AI and you immediately noticed all the same limitations that everyone else notices. No-one is claiming the technology's perfect.

How many more times is someone going to write this same article?

loloquwowndueo · 4h ago
As many times as “if you’re not using AI your developer career is over” articles come out as well.
Mars008 · 3h ago
For me it's just the beginning. I can do so much more now.
ants_everywhere · 4h ago
There are still devs who refuse to use the internet and have their assistants print out their email, but they're not the norm
verdverm · 4h ago
Would you hire someone who refuses to use search or stack overflow for professional coding?
loloquwowndueo · 4h ago
Do you ask people whether they use search or stack overflow during job interviews/assessments?
verdverm · 4h ago
Not directly; I tell them they can, and watch how they do. It's a fairly good indicator for the questions they don't know the answers to (I make sure they cannot memorize their way through).
Vektorceraptor · 4h ago
But "reading a book is not the same as letting someone else write one for you". I think that the learning curve from reading docs and forums is not the same as coding with AI. Many will intuitively become lazier, more careless, and dumber, whereas the well-read developer will, in turn, become less and less dependent on external forums.
verdverm · 4h ago
The same is true without AI, as we have seen a lot of people get into the field for the money and not the love over the last decade or so.

Those who get lazy and produce low-quality output will be pruned, just as they are today.

z3c0 · 4h ago
How many more times is someone going to write this same comment?
sampton · 4h ago
In the past few months I have used AI to read more open source projects than I ever had. Tackled projects in Rust that I was too intimidated to start. AI doesn't make you lazy.
Jcampuzano2 · 4h ago
I find it interesting how many people complain that AI produces code that mostly works but overlooks something, or that it generated something workable but imperfect and didn't catch every single thing on its first try.

For fuck's sake, it probably got you to the same first draft you would have gotten to yourself in a tenth of the time. In fact, there are plenty of times where it probably writes a better first draft than you would have. Then you can iterate from there, reviewing and scrutinizing it just as much as you should be doing with your own code.

Last I checked, the majority of us don't one-shot the code we write either. We write it once, then iterate on things we might have missed. As you get better, you instinctively prompt to include the same edge cases you would have missed when you were less experienced.

Everybody has this delusion that your productivity comes from the AI writing perfect code from step 1. No: do the same software process you normally should be doing, but get to the in-between steps many times faster.

SonOfKyuss · 2h ago
This has been my experience as well. It removes a major bottleneck between my brain and the keyboard and gets that first pass done really quickly. Sometimes I take it from there completely, and sometimes I work with the LLM to iterate a few rounds to refine the design. I still do manual verification and any polish/cleanup as needed, which, so far, has always been needed. But it no doubt saves me time and takes care of much of the drudgery that I don't care for anyway.
fzeroracer · 4h ago
Missing the forest for the trees here.

The benefit of writing your own first draft is the same reason you take notes during classes or rephrase things in your head. You're building up a mental model of the codebase as you write it, so even if the first draft isn't great, you know where the pieces are, what they should be doing, and why they should be doing it. The cognitive benefits of writing notes are well known.

If you're using an AI to produce code, you're not building up any model at all. You have to treat it as an adversarial process and heavily scrutinize/review the code it outputs, but more importantly, it's code you didn't write and didn't map out yourself. You might've written an extensive prompt detailing what you want to happen, but you don't know whether it did happen or not.

You should start asking yourself how well you know the codebase and where the pieces are and what they do.

SonOfKyuss · 2h ago
It really depends on the scale of the code you are asking it to produce. The sweet spot for me with current models is about 100-200 lines of code. At that scale I can prompt it to create a function and review and understand it much faster than doing it by hand. Basically using it as super super autocomplete, which may very well be underutilizing it, but at that scale, I am definitely more productive but still feel ownership of the end result.
ants_everywhere · 4h ago
Unfortunately, the author is competing with 100% of other devs who are using AI, the vast majority of whom are not becoming lazy or stupid.
__MatrixMan__ · 4h ago
I usually only use AI for things that I previously didn't do at all (like UI development). I don't think it's making me lazy or stupid.

I'm sure it's writing lazy stupid JavaScript, but the alternative is that my users got a CLI. Given that alternative, I think they don't mind the stupid JavaScript.

Havoc · 4h ago
I'd definitely be wary of vibe coding anything that is internet-facing. But at the same time there has to be some middle ground here too: a bit of productivity gain without any significant tangible downside, even if that middle ground is just glorified autocomplete.
lvl155 · 4h ago
I am too stupid and old to code up to the standards of my younger days. AI allows me to get my youth back. I learned so many new things since Sonnet 4 came out in May. I doubted AI too until Sonnet 4 surprised me with my first AGI moment.
fennec-posix · 4h ago
I see LLMs as a force multiplier. They're not going to write entire projects for me, but they'll assist with "how do I do X with Y" kinds of problems. At the end of the day I still understand the codebase and know where its faults lie.
cryptoz · 4h ago
I really think people are approaching LLMs wrong when it comes to code. Just directing an agent to make you something you're unfamiliar with is always going to end up like this. It's much better to have a few hours' chat with the LLM and learn something about the topic, multiple times over many days, and then start.

And ask questions and read all the code and modify it yourself; read the compile errors and try to fix them yourself; etc. Come back to the LLM when you're stuck.

Having the machine just build you something from a two sentence prompt is lazy and you’ll feel lazy and bad.

Learn with it and improve with it. You’ll end up with more knowledge and a code base for a project that you do (at least somewhat) understand, and you’ll have a project that you wouldn’t have attempted otherwise.

CjHuber · 4h ago
The problem is not in making something you're unfamiliar with. The problem is doing something that you're familiar with, trying out an LLM to see if it can assist you, being kind of impressed by the first few prompts, and letting it off the leash; suddenly you find yourself in a convoluted codebase you would never have written that way, with so many weird, often nonsensical things done differently from how you (or any sane person) would normally approach them that you can basically throw it all in the trash. The only way this can be avoided is by diligently checking every single diff the LLM makes. But let's be honest: it's just so damn inviting to let it off the leash for a moment.

I think the LLM accounting benchmark is a good analogy. The first few prompts are like the first month in accounting: the books are correct beforehand, so the LLM has a good start. Then, just as the miscalculations compound in the accounting benchmark, so do the terrible practices in the codebase.

zackify · 4h ago
Completely agree
ronreiter · 4h ago
AI will cause senior developers to become 10 times more effective. AI will cause junior developers to become 10 times less effective. And that's when taking into account the lost productivity of the senior developers who need to review their code.

Unfortunately for the writer, he will probably get fired because of AI. But not because AI will replace him - because seniors will.

ath3nd · 4h ago
> AI will cause senior developers to become 10 times more effective

Bold statement! In the real world, in the only study to date, senior developers were actually 19% less effective when using LLMs.

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

Very brave to have an edgy opinion based on vibes that runs counter to the only actual study, though. Claiming that things are XYZ just because it feels like it is all the rage nowadays; good for keeping with the times.

> Unfortunately for the writer, he will probably get fired because of AI. But not because AI will replace him - because seniors will

Here is another prediction for you. In the current real world, LLMs create mountains of barely working slop on a clean project, and slowly pollute it with trash, feature after feature. The LGTM senior developers will just keep merging and merging until the project becomes such a tangled mess that the LLM takes a billion tokens to fix it, or outright can't, and these so-called senior developers will have had their skills deteriorate to such an extent that they'd need to call the author of the article to save them from the mess they created with their fancy autocomplete.

hazek112 · 4h ago
Haha I encountered this

But maybe AI is just better than I ever was at front-end and React

Maybe I should do something else

j45 · 4h ago
> "When I tried to fix the security issues, I quickly realized how this whole thing was a trap. Since I didn't wrote it, I didn't have a good bird's eye view of the code and what it did. I couldn't make changes quickly, which started to frustrated me. The easiest route was asking the LLM to do the fixes for me, so I did. More code was changed and added. It worked, but again I could not tell if it was good or not."

Maintaining your own list of issues to look for and how to resolve them, or prevent them outright, is almost mandatory. It also doubles as a handy field reference for what gaps exist in applying LLMs to your particular use case when someone higher up asks.

mockingloris · 4h ago
Very well said. Just because AI can churn out a usable code project as fast as it takes for my cup of garri to soak (3 mins) doesn't mean it should be used that way.

It takes mastery, just like actual programming syntax. There are many ways to achieve a business objective. Choosing the right one for the specific use case and expected outcome takes iterations.

AI HAS replaced whole niche markets, and it's just the beginning. The best any dev can do in this context is sharpen their use of it. That becomes a superpower: well-defined context and one's own good grasp of the tech stack being worked on.

Context: I still look up Rust docs and even prompt for summaries and bullet-point facts about Rust idioms for a better understanding of the code. I identify as a JS dev first, but I'm currently learning Rust as I work on a passion project.

└── Dey well

j45 · 1h ago
Agreed. I think people only get lazy with AI if they let the AI do the thinking for them, rather than using it as a thinking machine to push their level of thinking, wherever they are.

In a way, LLMs are a fuzzy semantic-relation engine, and you can almost get to the point of running many forecasts or scenarios of how a problem could be solved, long before daring to write a user story or spec for how to build it.

The issue with industrial software development is that it's incremental at best, and setting those vectors at the start of a project can benefit from a more accurate approach that reflects how projects often really start, instead of trying to one-to-ten-shot things, which can be fun but not always sustainable.

I find it can be very beneficial to require the AI to explain and teach you at all times, so it keeps the line of thought and "reasoning" aligned.

Mars008 · 4h ago
I tried digging with an excavator, I became lazy and fat.
slowdog · 4h ago
It's a reasonable take from the author, but the argument that you shouldn't use a tool you don't understand cuts both ways. Avoiding powerful tools can be just as much of a trap as using them blindly.

Like any tool, there's a right and a wrong time to use an LLM. The best approach is to use it to go faster at things you already understand, and as an aid to learn things you don't, but don't blindly trust it. You still need to review the code carefully because you're ultimately responsible for it; your name is forever on it. You can't blame an LLM when your code took down production: you shipped it.

It's a double-edged sword: you can get things done faster, but it's easy to become over-reliant and lazy, and to overestimate your skills. That's how you get left behind.

The old advice has never been more relevant: "stay hungry."