Ask HN: To what extent have you stopped or limited your use of AI?
20 dosco189 34 7/12/2025, 2:41:35 AM
Hi HN, I'm a researcher trying to understand the ways in which you have limited or stopped using AI tools.
Knowledge work is evolving, and I'm trying to understand the lived experience of how you are actually working. There's plenty of content out there in the "AI for X" genre, tool guides, etc., but I'm curious to learn whether you adopted AI in some area of your work and have since chosen to stop. What was the context? What did or did not work?
I believe we are heading toward a world where AI offers easy mental shortcuts for nearly everything, similar to how cheap carbs became widespread in our diets. I do not yet know how I will deal with that. For now, I am just a kid in a candy store, enjoying the novelty.
I don’t want to pay for top-notch AI, just like I don’t pay for a top-notch kernel (i.e., Linux), a top-notch version control system (i.e., Git), and so on.
I'd also disagree that Linux is the top-notch kernel. It might be the most universal one because of drivers and licensing, and that makes it my personal favorite because it "just works" with pretty much no fuss (technical or social), but there are a number of kernels out there with better features.
I've tried for years to build writing tools with AI. I think that, for the most part, it doesn't work well, and the models have become worse (more unnatural) since GPT-3, with the exceptions of GPT-4.5 and Gemini 1.5 Flash.
There are bits you can delegate to AI: writing punchy intro paragraphs, brainstorming titles, starting off dialogue in a certain style (though it can't sustain it for very long), or writing dialogue as another person, since you often don't want two characters using similar language.
Writing is thinking. You can rubber duck it for ideas. And it does bounce back some good ones. But you can't expect it to do the heavy work.
Lately, I've been reversing the dynamic: getting AI to generate the bullet points while I write the document. The last straw was when I had it summarize a doc and then asked it to do work based on the doc it had written. It got half the work wrong.
I also have no interest in technology that impedes my skill development. I do not want to use anything that makes me a worse writer over time.
YMMV, I am answering the OP not evangelizing. Counter arguments will be ignored.
Reminds me of the Monty Python Argument Clinic sketch.
Many things appear to work at first, right? Most of the time, using AI seems great, until one spends a lot of time working out the important details. A bunch of prompts later...
Yeah.
Sometimes it is nice to begin with something, even if it is wrong. AI is great for that.
Funny how readily we can write in response to errors! Out it comes, like that fire hose trope.
In that vein:
Proposal templates and other basic creation tasks can start with a boost.
Oh, a surprising one was distilling complex ideas into simple, direct language!
And for code, I like getting a fragment, function, whatever all populated, ready for me to just start working with.
Bonus for languages I want to learn more about, or just learn. There are traps here. You have to run it with that in mind.
Trust, but verify.
What did not work:
Really counting on the things. And like most everyone I suppose, I will easily say I know better. Really, I do, but... [Insert in here.]
Filtering of various kinds.
I may add to this later.
Code completions are fine. Driving code through chat is a complete waste of time (never saves time for me; always ends up taking longer). Agentic coding (where the LLM works autonomously for half an hour) still holds some promise, but my employer isn't ready for that.
Research/queries only for very low stakes/established things (e.g., how do I achieve X in git).
This has largely taken me out of the loop. I give it detailed tasks like I would a junior engineer; we discuss approaches and align on direction, priorities, and goals, and then it goes off for literally hours iterating on the code.
I have done about three months' worth of extremely complex engineering work in about a week since I started doing this.
It is a step change from trying to use the chat interface and copy/pasting snippets.
Once it’s going, it writes code like a staff engineer.
There are some obscure bugs it can run into, and when it goes down a rabbit hole it needs my 20 years of experience to unblock or unwind it.
But it has accelerated my development 500x, and while it’s iterating I’m not filling my mind with code and syntax and so on; I’m more like a tech lead or manager now. I’m in another room playing with my dog.
I have started testing Copilot for fun; my wife needs a web-based project that hasn't been maintained for a while now and is written in PHP.
I asked Copilot (Agent mode) to translate it to Rust just for the fun of it, to see how far it would get; I expected nothing out of it. I broke the task down into manageable chunks, directed it in design choices, and asked it to use some specific frameworks.
So far it has written 40k lines of Rust on its own and keeps track of what functionality is still missing compared to the original project. It was impressive watching it iterate alone for 30+ minutes at a time.
I'm no programmer, more of a systems/cloud engineer, so a rewrite like this would likely have cost me more than two years of work and still ended up useless for all intents and purposes. I'm pretty sure the end result won't work on the first try, and I'll need to fix things manually or direct Copilot to fix them, but after two weeks of iterating 1-2 hours at night, I have 90% of something that would have required someone full time for at least a couple of years.
The two things I found most valuable (also in other contexts, like shorter bash and Python scripts):
1. Syntax boilerplate: if your task is specific enough, it normally gets it right 99.99% of the time, and my brain can look at the actual logic rather than the commas, brackets, and (in Python's case) whitespace (see the sketch after this list).
2. Documentation: I spend 95% less time reading documentation. I don't need to comb through an entire language/package/class for the specific thing; it normally gets it right, and worst case I can ask it to refactor to the most modern standards for a specific library version X.
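To make the first point concrete, here is a minimal sketch of the kind of boilerplate I mean (the file names, flags, and filtering logic are made-up placeholders, not from a real project): the model fills in the argparse and file-handling plumbing, and the only part I write by hand is the body of process().

    # Hypothetical example: CLI plumbing the model scaffolds for me.
    import argparse
    import json

    def process(records):
        # The actual logic; this is the part I still write myself.
        return [r for r in records if r.get("active")]

    def main():
        parser = argparse.ArgumentParser(description="Filter active records")
        parser.add_argument("input", help="path to a JSON file")
        parser.add_argument("-o", "--output", default="out.json")
        args = parser.parse_args()

        with open(args.input) as f:
            records = json.load(f)
        with open(args.output, "w") as f:
            json.dump(process(records), f, indent=2)

    if __name__ == "__main__":
        main()

Everything outside process() is exactly the kind of plumbing I no longer want to type out by hand.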
I'm not delegating my thinking to a machine that can't think.
Learned helplessness as a service isn't a thing I want, and I worry that long term it will make me think less deeply in ways I can't predict.
I do limit my use today, compared to a few months ago.
Most of that is because, having successfully mapped out the use cases that make sense, I find myself doing less seeking. Where it is a net gain, go; otherwise, why bother?
Given the above, it's useful as hell for generating templates and usable starters for creating your own work when you're feeling stuck, and that's mainly it for me.
I’m also type 1 diabetic and this is like asking me to what extent I have stopped or limited my use of insulin.
AI and insulin (to different extents) make my life better in significant ways. Why would I stop or limit that?
I've observed colleagues who have used it extensively. I've often been a late adopter of things that carry unspecified risk, and AI was already on par with Pandora's box in my estimation when the weights were first released; I am usually perceptually pretty far ahead of the curve (and accurately so).
Objectively, I've found these colleagues' attitude, mental alacrity, work product, and abstract reasoning skills have degraded significantly relative to their pre-AI work. They tried harder, got more actual work done, and were able to converse easily and quickly before. Now it's "let me get back to you," and you get emails that have quite clearly been put through an LLM, with no real reasoning happening.
What is worse is that it has happened in ways they largely do not notice, and when objective observations are pointed out, they don't take kindly to the feedback, even though the issue is not with them but with their AI use and the perceptual blind spots it exploits. Many seem to be adopting destructive behaviors common to junkies with addiction problems.
I think, given sufficient time, this trend will be recognized, but not before it causes significant adverse effects.
Researchers have called much less intelligent things AI since 1956.
Before there were GPTs, there were RNNs and CNNs. AI is the field of study.
Good logic.