Why I Won't Use AI

60 points · milen · 42 comments · 6/18/2025, 12:32:06 PM · agentultra.com ↗

Comments (42)

maeln · 8h ago
The part about productivity also reminds me that we still often pay for handmade goods, even when an industrialized equivalent is common and cheap.

For example, if you enjoy cooking, or it is your job, you might be willing to pay for an artisan knife, even though you can buy a good knife for a few bucks. Same with clothes: they are extremely industrialized, but there are still a lot of tailors making a living from bespoke clothing.

We might do it for no other reason than an appreciation of the craft, but a lot of the time it is driven by a desire for high quality (and/or customization).

This makes me wonder if one day we will see artisan software developers (I mean, the idea of software craftsmanship is already here). LLMs and co. are good at outputting a lot of code very quickly, but they are often not good at producing quality code. And I sincerely doubt that it will get any better; this seems to be more or less a consequence of the core technology behind LLMs. So unless we have a significant paradigm shift, I don't think it will improve much more. It already feels like we have reached the point of diminishing returns on this specific tech.

So what about making smaller but better software, for clients wanting nothing but the utmost quality? Just like bladesmiths, we will have a bunch of fancy new tools at our disposal to code better and faster, but the whole point of the exercise will still be to stay in control of the process and not give all the decision power to a machine.

bko · 8h ago
> This makes me wonder if one day we will see artisan software developer (I mean, the idea of software craftmanship is already here).

Would you rather the software that drives your car be "artisan software", where labels were carefully chosen by a human?

Software isn't the end product. It's a tool that's supposed to do something we want. We may want an artisan to design our house or iPhone, but we don't want them using "artisan hand crafted" rulers and compilers.

jackthetab · 7h ago
Just because I want/use an artisan knife or bake my own bread in my kitchen doesn't mean I want an artisan microwave or raise my own cows.
archagon · 3h ago
I would never feel confident driving over a “vibe coded” bridge. Some products demand careful human attention to every aspect of the design.
giraffe_lady · 8h ago
> This makes me wonder if one day we will see artisan software developer (I mean, the idea of software craftmanship is already here). LLM&Co are good at outputting a lot of code very quickly, but they are often not good at producing quality code.

The reason I think we might not see this for software, even though we do for other goods, is that the output of a developer is not code, it is software. It's possible for good (fit for purpose, easy to use, fast, pretty, whatever the metric) software to be built on bad code. The craft of the code is not necessarily apparent in the product in the same way it can be with physical goods.

Whether or not LLMs can consistently output "good" software is less clear to me and I'm not interested in trying to make a prediction about it. But if they do I don't see "hand crafted" code being a thing. No one cares about code.

fragmede · 4h ago
Those who write code do care about the code, though, if only because their job gets harder when the code is complete shit. It might be like welding, where only another welder can appreciate a good weld, spot oxidization, and judge a weld as good or bad. If there's some janky software that desperately wants to be refactored or rewritten, then (LLM-written or not) fellow programmers should rightly judge the code as crap and choose to stay away from it or refactor it into something better. It might be, as pointed out, functional code, and the end user might not notice (or they might; shitty code is brittle and prone to bugs), but all else being equal, if there are two products and one has shitty code and one is beautifully architected, I know which one I'd rather use, assuming the state of the code is known.
Tistel · 2h ago
I did CS and have been writing code for 20+ years. For the past two years I've been helping clients adopt LLMs into their business logic. It has lots of good use cases.

But, I can recognize when it makes mistakes due to my years of manual learning and development. If someone uses LLMs through their entire education and life experience, will they be able to spot the errors? Or will they just keep trying different prompts until it works? (and not know why it works)

It's like the auto-correct spell checker. I can't spell lots of words; I can just get close enough to the right spelling until it fixes it for me. I'm a bit nervous about developing the same handicap with LLMs, with them taking thinking, logic, and code away from me.

Fully aware I can be dismissed as a dinosaur.

karmakaze · 6h ago
> I enjoy learning. And learning is a difficult chore. There is no golden road to knowledge. I am constantly frustrated when my assumptions and theories are thwarted. I can’t get the program to compile. The design of this system I was sure was right is missing a key assumption I missed. I bang my head against the wall, I read references, and I ask people questions. I try a different approach and I verify my results. Finally, eventually, the problem gives way and the solution comes out.

> It is during the struggle that I learn the most and improve the most as a programmer. It is this struggle I crave. It is what I enjoy about programming.

This explains the whole post to me. First of all this is an area where using an AI can streamline design considerations before getting to the head-banging-walls. Since this is an aspect that the writer enjoys, there's no saving them. They decided that they like things as they are without AI. The rest of all the cited reasons are post-decision justifications.

ednite · 8h ago
The author makes a strong case against the current use of AI, and I agree that today's tools can't replace the deep thinking, creativity, and intuition that good programming requires. At best, they're sophisticated parrots, useful in some ways but fundamentally lacking understanding. Maybe this will change down the road.

That said, there’s another angle worth considering. AI has introduced a new kind of labor: prompt engineers.

These aren’t traditional programmers, but they interact with models to produce code-like output that resembles the work of software developers. It’s not perfect, and it raises plenty of concerns, but it does represent a shift in how we think about code generation and labor.

Regardless of which side of the fence you're on, I think we can all agree that this paradigm shift is happening, and arguments like the author's raise valid and important concerns.

At the same time, there are also compelling reasons some people choose to embrace AI tools.

In my opinion, the most crucial piece of all this is government policy. Policymakers need to get more involved to ensure we're prepared for this fast-moving and labor-disruptive technology.

Just my two cents and thanks for sharing.

add-sub-mul-div · 8h ago
Are we better off since the new labor category of mass producing garbage food was innovated? Is the convenience worth the costs to society? Sure, there are people compelled to deliver this product, but are their interests aligned with ours, minus the momentary need for convenience? Did government policy effectively counter the public health consequences?
ednite · 8h ago
I get what you’re saying, and agree there’s definitely a big risk right now of mass-producing poor-quality results.

I see it happening every day. It’s especially concerning when the person using the tool doesn’t really understand the output. That kind of disconnect can be dangerous. But at the same time, I’ve also read about cases where AI helped scientists accelerate research and save years of work. So there’s clearly potential for good.

In my case, I’m genuinely worried about where this technology could lead us. But I’m still hopeful. With enough awareness through voices like the author’s and continued public pressure, maybe policymakers will step up and take it seriously before things get out of hand. Thanks for your comment.

bandoti · 8h ago
Lots of nice thoughts that I agree with. But there is a lot of value creation in AI as well, beyond building things.

For example, how can doctors save time and spend more time one-on-one with patients? Automate the time-consuming, business-y tasks and that’s a win. Not job loss but potential quality of life improvement for the doctors! And there are many understaffed industries.

The balancing point will be reached. For now we are in early stages. I’m guessing it’ll take at least a decade or two for THE major breakthrough—whatever that may be. :)

fumeux_fume · 8h ago
I seriously question the premise that productivity gains from the use of AI (if they really exist) will translate into quality of life improvements. If 20 years of work experience has taught me anything, it's that higher productivity typically results in more busy work, or more work that benefits the employer rather than the customer or employee. So the doctor in your example gets more patients rather than higher quality interactions. Some people will get to see a doctor sooner, but they still get low quality interactions.
linsomniac · 6h ago
>Some people will get to see a doctor sooner but they still get low quality interactions.

Or: The AI tooling will be able to allow the lay-person to double-check the doctor's work, find their blind spots, and take their health into their own hands.

Example: I've been struggling with chronic sinus infections for ~5 years. 6 weeks ago I took all the notes about my doctor interactions and fed them into ChatGPT to do deep research on. In particular it was able to answer a particularly confusing component: my ENT said he visually saw indications of allergic reaction in my sinuses, but my allergy tests were negative. ChatGPT found an NIH study with results that 25% of patients had localized allergic reactions that did not show up on allergy tests elsewhere on their body (the skin of my shoulder in my case). My ENT said flat out that isn't how allergies work and wanted to start me on a CPAP to condition the air while I was sleeping, and a nebulizer treatment every few months. I decided to run an experiment and I started taking an allergy pill before bed, while waiting for the CPAP+nebulizer. So far, I haven't had even a twinge of sinus problems.

bandoti · 5h ago
Given this anecdote, I can imagine that giving doctors AI access to a network of the latest studies will certainly help better inform everyone.

Ultimately, doctors are the experts doing the studies, but AI being there to help will certainly add value.

Avoiding any percentage of misdiagnoses is a huge win and time saver.

orev · 5h ago
Some allergy pills (diphenhydramine) are also so good at causing drowsiness that they're sold as sleep aids. Make sure you control for that in your personal testing.
linsomniac · 3h ago
I'm using Zyrtec (Cetirizine Hydrochloride), which among the second-gen allergy pills is more likely to cause drowsiness. My primary indicator is lack of sinus headaches at night and in the morning, there could be some correlation to sleeping through a headache if I'm drowsy because of it, but I also seem to be clear during the morning and day, and before going down this path I was lucky if I could go a month without just being miserable due to a headache. It's probably worth it for me to try another of the second gen options.
bandoti · 7h ago
Only time will tell! But it’s worth a try.
leosanchez · 8h ago
> how can doctors save time and spend more time one-on-one with patients?

Do you think they will spend more time with patients or take in more patients?

From what I have seen in my country they would do the latter.

kbelder · 5h ago
>Do you think they will spend more time with patients or take in more patients?

Well, taking in more patients per doctor is what will decrease the cost for the patient (so would increasing the number of doctors). Often, I'd rather be shuffled in and out in half the time, and be charged less, than charged the same and be given more time to talk with the doctor.

orev · 4h ago
People working in specialized fields (doctors, programming, etc.) don’t get paid by the hour, they get paid for their expertise, so less time spent doesn’t mean a lower price.
actuallyrizzn · 3h ago
"Why I won't use moveable type."

shrug

greybox · 6h ago
Why was this flagged?
bko · 8h ago
> they were protesting the fact that capital owners were extracting the wealth from their labour with this new technology and weren’t reinvesting it to protect the labourers displaced by it.

This capital versus labor dynamic is very common and an interesting way to frame things. But suppose you do take the view that all the wealth accrues to capital. What are the implications?

One implication would be to skip college, take that money, and invest it in the stock market. Why invest in labor when capital grows faster? I don't think anyone with this mindset would actually offer that advice, though; rather, they dwell on the fact that they are laborers by design with no hope of ever breaking out of that. Sure enough:

> I’m a labourer. A well-compensated one with plenty of bargaining power, for now. I don’t make my living profiting from capital. I have to sell my time, body, and expertise like everyone else in order to make the profits needed to support me and my family with life’s necessities.

Another point, in regards to productivity

> Did you know there’s no conclusive evidence that static type checking has any effect on developer productivity?

We don't need "conclusive evidence" for everything. You see this a lot with ridiculous claims. I don't need some laboratory-controlled environment to prove that static type checking is more productive. How do I know? Because I've used statically typed and non-statically typed languages, and I'm more productive in statically typed ones. I remember the first time I introduced Flow, a static type checker, to my JavaScript project: the number of errors it surfaced was really mind-boggling.

A lot of people agree with me: statically typed languages are popular, and dynamically typed languages like Python are constantly adding typing tooling. It's a test of the market. People like these tools, presumably because the tools make them more productive. Even if they don't, people like using them for whatever reason, and that translates to something, right?
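To make that concrete, here's a minimal (hypothetical) sketch of the kind of silent bug Python's optional type annotations let a checker like mypy catch, where plain dynamic execution just quietly does the wrong thing:

```python
def total_price(quantity: int, unit_price: int) -> int:
    """Annotated so a static checker can verify every call site."""
    return quantity * unit_price

# Plain Python happily runs a wrong call: "3" * 2 is string
# repetition, so the bug slips through at runtime.
result = total_price("3", 2)  # returns the string "33", not 6

# A checker like mypy rejects the same call before it ever runs,
# with an error along the lines of:
#   error: Argument 1 to "total_price" has incompatible type "str";
#   expected "int"
```

The function names and the exact error wording here are illustrative, but the failure mode is the general one: without the annotations, nothing flags the mistyped argument until (or unless) something downstream breaks.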

This scientism around everything is exhausting.

JohnFen · 7h ago
> I don't need some laboratory controlled environment to prove that static type checking is more productive. How do I know? Because I've used statically typed and non-statically typed language and I'm more productive in statically typed.

So you don't know. There are a couple of devs where I work who started using LLMs heavily to support their work. They earnestly claim that they are more productive; however, their other team members disagree with their self-assessment and say that they are no more or less productive than before using LLMs.

How to know whose assessment is more accurate? You need some sort of test that eliminates, as far as possible, subjectivity.

samrus · 8h ago
> Why invest in labor when capital grows faster?

Because capital (in the sense you're referring to, which is money) isn't "real". The economy isn't based on money; it's based on goods and services, which require labour and natural resources. Money is just the lubricant that makes sure these two things can be optimally distributed.

In simpler terms, if everyone skipped college and just invested in the stock market, the market would collapse because no one would be producing the goods and services the market is trading in. And that's the rub (tragedy) of economics: it's a cooperative system. You are a part of it, not the temporarily embarrassed billionaire you might fancy yourself as. So if you have a strategy, you have to account for everyone using it, and see if it's sustainable.

This applies to cost reduction through wage suppression or tax avoidance. If only you do it, it works great for you, but if all the companies in the economy do it, then suddenly consumers have no money to buy your products or fund the public infrastructure you need.

This is why I'm kinda on the side of the Luddites, as presented in the post. They've been portrayed as Neanderthals, backwards and scared of technological progress. But if they were people who understood that capitalists (not fans of capitalism, but people who own and deploy capital for a living) will ruin the whole economy with their greed and must be brought back in line, then I support that. The original Luddite movement might have failed, but the industrial revolution did necessitate revolts later on, where people had to get violent to secure basic rights like child labour laws, the minimum wage, the weekend, and paid leave. I think the information revolution will have that as well. Probably something like the 19th- to early 20th-century labour rights protests, with a Butlerian twist.

bko · 8h ago
> In simpler terms, if everyone skipped college and just invested in the stock market, then the market will collapse because no one is producing the goods and services the market is trading in... you have to account for everyone using it, and see if it's sustainable

This is just silly. I can give my son the advice "you should go to college", and on the margin I could think more people should go to college, and then somebody like you comes in with "If everyone went to college, who would pick up the garbage??" as though it's some profound statement.

If you think capital grows faster than labor (conditionally, not for 100% of the population), then on the margin you should invest more in capital.

You're tilting at windmills here.

bloppe · 7h ago
The first part, about labor, is a bit incoherent. It bemoans the accumulation of wealth by the capital class, then the part on profitability claims that the capital class is actually losing money by investing in AI. It also seems to imply that laborers should be entitled to demand for their services. Imagine if your car mechanic wanted to charge you whether your car needed work or not. The lump of labor fallacy is alive and well.

Some good points in the rest of the post.

paulddraper · 42m ago
A Software Engineer Luddite is an interesting person.
kurtis_reed · 5h ago
Some people are just committed to the idea that using AI is bad. This article didn't succeed in presenting a good argument as to why.
josefritzishere · 8h ago
I tried using AI last week for some lazy research. The results were highly questionable. I asked it to cite sources. It did, but those sources were even more questionable. AI has no concept of objective reality. Sometimes it is glaringly obvious that it's just scraping 4chan and Facebook and barfing up trash.
uberman · 8h ago
I was of the same mindset and did a similar test that also failed spectacularly. I asked for authoritative pros and cons on a subject in my field and asked it to cite sources. The pros and cons were OK, but the sources were completely made up.

So, not so great there.

Then recently I had to modernize a python app written by someone who no longer works for the organization and was circa python 3.6. Several of the key libraries no longer supported the interfaces being called and there was no documentation at all.

On a whim I asked an LLM to help modernize the code, file by file and it cut the effort in half.

So, pretty great there.
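(The libraries involved weren't named, but a classic example of the kind of interface drift this modernization work deals with is the `collections` ABCs: importable from `collections` until Python 3.10 removed the aliases, after which only `collections.abc` works. A small compatibility shim lets old call sites keep working while files are migrated one by one:)

```python
# Code written for Python 3.6 often did:
#     from collections import Mapping
# which raises ImportError on Python 3.10+. A try/except shim
# supports both old and new interpreters during the migration:
try:
    from collections.abc import Mapping  # Python 3.3+
except ImportError:
    from collections import Mapping      # pre-3.3 fallback

def is_mapping(obj) -> bool:
    """Behaves identically on old and new interpreters."""
    return isinstance(obj, Mapping)
```

The helper function is just for illustration; the point is that many "library no longer supports this interface" failures are mechanical renames like this, which is exactly the kind of change an LLM can grind through file by file.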

actuallyrizzn · 3h ago
I hate being a broken record on this, but: skill issue.

If you're a senior researcher who understands how to research, and you send a baby junior researcher off to do the work without giving them tips or parameters to improve the research process, that's on you, not the junior.

mattgreenrocks · 8h ago
LLMs are really good for things that don’t have to be 100% correct, presuming that the artifact generated is checked by someone with expertise. We used it to suggest some places to visit in Italy the past year. Great use case for AI: it brainstorms spots for an itinerary that we then check for ourselves to see if it fits our requirements. The machine helps us think beyond the usual destinations, and we vet the results. Hallucinations are not an issue.

Hallucinations seem fundamental to how LLMs work now, so AI is probably still a ways off from being able to absorb responsibility for decisions like a human can.

I’m sure this comment will offend both pro and anti AI camps. :)

coro_1 · 8h ago
We found AI is surprisingly good at programming (when no one thought this would be the frontier), which says a lot about the specific nature of the tools.

You can, for example, give minimal input (some peppered phrases) and see a rich answer. More specificity, and research of your own, brings it out to play. The designs appear to level themselves to your own capabilities.

simonw · 8h ago
Which tool and model?
insane_dreamer · 5h ago
What's up with all the flagging these days? This shouldn't be flagged.
amitprayal · 8h ago
It's not our decision to make; we are powerless.
mattgreenrocks · 8h ago
Individually, yes. Collectively, no.

Belief that we are powerless plays right into their hands. And it is too psychologically damaging to hold over the long term. Better to acknowledge the reality that the propaganda machine is turned to 11, things are quite uncertain, but the game isn’t over yet.

JohnFen · 7h ago
The most common way that people give up their power is by believing that they don't have any.
insane_dreamer · 3h ago
This reminds me of a convo I had a few days ago with my mid-20s daughter, who is a dev at a large successful tech company. When I asked her how the AI rollout was going, she said she hated it, for three reasons:

1) it's being crammed down their throats from "up high" without real thought being put into it, more like "AI everything" is some kind of executive mantra; that is a common refrain in companies

2) AI is taking away the aspect of the job she enjoys -- the reason she switched to dev in the first place (her degree is in ChemE) -- which is writing code, and replacing it with the aspect of the job she dislikes, which is PQA. So now, instead of being a developer -- which she worked hard to become and is quite good at -- she's being reduced to a QA person, going over agent-generated code (generated by her or, more likely, others on her team; she's one of the senior devs). It's sapping her creativity and inspiration, and pretty soon she's just going to be phoning it in. It's a shame. It saddened me to hear this, and it makes me think about how this might affect society in a negative way. It's not that AI itself is the core problem, but the way that companies are "implementing" it _is_ a problem.

3) AI doesn't actually do the mundane, time-consuming, soul-sucking tasks that she would like to offload to it. This has more to do with how it's integrated into the company, their code base, etc. It's like people who say "I want AI to do my dishes so I can write letters, instead of writing my letters so I can do the dishes."

Some people just want to see results and don't care about the process. So writing an LLM prompt or figuring out the code, is the same to them. But for others, the journey is the goal. It's like how some people still want to craft furniture by hand when they could just get a machine to spit it out.

jtrn · 2h ago
Muddled thinking galore. This write-up is just a scream into the void out of frustration... On Luddites: "They were protesting the fact that capital owners were extracting the wealth from their labour." This makes no sense; it would have been equally true before and after the mechanized looms. They were protesting because they were out of a job.

"That wealth is going into the hands of the ultra-wealthy." It's going to the people that others choose to give their money to. I hate Microsoft with a passion, but I don't think Bill Gates went around stealing money from poor people to become rich; he got it from businesses and other high-wealth individuals. And if you really cared that much about large corporations getting rich, you should use the hell out of their services, since they are reported to actually be losing money on heavy users.

And if you don't want to give anybody any money, just don't give them any. Use a free open-weight/source model.

On Productivity: the author claims there is no increase in productivity. I am strongly starting to believe that the people who are unable to increase their productivity with AI are those who are EXTREMELY rigid and score low on creativity. Even if all you ever wanted to do was fiddle around with some extremely esoteric, complex area of software development, like microkernel optimization, and even if you were in a position where an AI was completely and truly useless in assisting you in that work due to its cutting-edge and esoteric nature, a person's inability to use AI to be more productive IS STILL baffling. YES, it might not be helpful for the thing you spent 25 years becoming an expert on. But are there truly NO other frameworks, tools, software patterns, or utility functions in the world that you could EVER use FOR ANYTHING to assist you? What about having the AI throw together a tool that scans an old PDF with OCR and extracts the specific information you need? Or a simple webpage to host a sign-up form for an event you are organizing at your workplace? Never mind the specific examples: you DON'T know even a fraction of what is out there, and not even having the interest and ability to use AI to make something simple but useful, something just beyond your capabilities but easy for an AI agent, shows a massive lack of creativity.

With regards to productivity, I am the CTO of a small firm with about 8 people in our department. And holy hell, it's obvious that AI-assisted coding is helpful (if you are allowed to give anecdotes, so am I).

On Enjoyment: the argument seems to be that programming challenges drive the author's growth, with a focus on refactoring and simplifying code. They criticize AI tools for lacking human understanding, and they find rote coding unengaging but essential for learning patterns and improving skills. Fine. Then do that, but admit that you are self-centered and not primarily focused on creating value for others. Find a job where you can do what you want, or work for yourself. But don't whine at others if they expect to get usefulness out of you when you want money from them.

On Ethical Training: do you apply the same level of ethical demand to all areas of your life, or do you only focus on it when it regards "your thing"? Pharmaceutical companies jack up drug prices, hoard patents to block generics, and let people die for profit, literally life-and-death stakes. Oil giants lobby to keep pumping carbon while the planet chokes. Private equity firms gut healthcare systems, firing nurses to pad their margins. Every car uses cobalt from the DRC, mined with child labor. I am willing to bet that most people railing against AI are happily buying modern electronics, driving a car, buying Viagra instead of losing weight, and overconsuming at a rate that dwarfs most of the people on earth. Those are direct, tangible harms, way worse than some AI bot cribbing your code snippets. But nah, you're here writing manifestos about LLMs. So why the selective outrage? Because it annoys you that people are using modern tools; because you'd rather the world just stick with an axe for wood chopping, because that's what you know and enjoy.

On the Environmental Impact: see above. And: are you this worried about the environment when you shop for houses, cars, and other stuff you don't need? Or only when the environment suffers AND it's related to something you don't like?

There is no singularity: So what? Should I not have used an AI agent to help me create an XML-to-JSON parser for a custom application just 1 hour ago because there is no singularity?
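(For what it's worth, the actual parser isn't shown; as a hedged illustration of the kind of one-hour tool being described, a minimal XML-to-JSON converter using only Python's standard library might look like this. The function names and the `_text` key convention are my own choices, not anything from the original.)

```python
import json
import xml.etree.ElementTree as ET

def xml_to_dict(elem):
    """Recursively convert an ElementTree Element into a plain dict."""
    node = dict(elem.attrib)  # XML attributes become dict keys
    for child in elem:
        # Group repeated child tags into lists
        node.setdefault(child.tag, []).append(xml_to_dict(child))
    if elem.text and elem.text.strip():
        node["_text"] = elem.text.strip()  # keep element text, if any
    return node

def xml_to_json(xml_string: str) -> str:
    """Parse an XML document and serialize it as a JSON string."""
    root = ET.fromstring(xml_string)
    return json.dumps({root.tag: xml_to_dict(root)})
```

For example, `xml_to_json('<a id="1"><b>hi</b></a>')` produces JSON whose `a` object carries the `id` attribute and a `b` list containing the child's text. Edge cases (mixed content, namespaces) would need more care, which is exactly the kind of detail you still have to check when an agent writes it for you.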

What Makes Us Better Programmers? What if you become a worse programmer because you are not branching out with AI into new tools and functionality? What if you are prevented from deeper learning because you spend too much time on small details? You could spend your whole life studying and asymptotically improving at chess. But don't DEMAND that people automatically recruit you for it.

It's easy to demonize firms. But I am the CTO of a small health software company. We genuinely try to make psychologists' lives easier and more productive with the stuff we make. We don't make much money, and it's really hard sometimes. But I think we are making a difference to some people. And being judged by people when I say that we should expect productivity and usefulness from the people we hire pisses me off to no end. We don't have enough money to pay you to fiddle around with what YOU WANT. People, use all the tools we have to create something useful!

And by the way, I have also been a clinical psychologist for many years, so I can say with some experience: the whole post by the author can be summarized as a tour de force of motivated reasoning.

/rant