Ask HN: To what extend have you stopped or limited your use of AI?

19 points by dosco189 | 7/12/2025, 2:41:35 AM | 29 comments
Hi HN, I'm a researcher trying to understand the ways in which you have limited or stopped using AI tools.

Knowledge work is evolving, and I'm trying to understand the lived experiences of how you are actually working. There's plenty of content out there in the genre of AI for "X", using tools etc - but I'm curious to learn if you adopted AI as part of some area of work - but have now chosen to stop it. What was the context? What did or did not work?

Comments (29)

sandwichsphinx · 8h ago
I stopped using AI for code; everything it gives me requires manual checking and editing to work, and now I have the overhead of not having the deep understanding I would otherwise have had if I had figured things out myself from the start. I also stopped using AI as a sort of replacement for quick Google searches in general, but I didn't go back to using Google because now I feel those results are too shallow. I've been trying to figure things out from primary sources as much as possible and thinking for myself, since it seems to me the value going forward is having depth and breadth over the competition, which you only get by doing things the very old and slow way.
dakiol · 18m ago
The main reason I’m slowly using less and less AI is price. I’m relying on the free tiers but I know that’s not gonna last forever.

I don’t want to pay for top-notch AI, just like I don’t pay for a top-notch kernel (i.e., Linux), a top-notch version control system (i.e., Git), and so on.

mleroy · 5h ago
I haven't limited my AI use. In fact, it has increased. It still feels experimental, and I often use it even when it does not save time, simply to avoid effortful thinking. That concerns me.

I believe we are heading toward a world where AI offers easy mental shortcuts for nearly everything, similar to how cheap carbs became widespread in our diets. I do not yet know how I will deal with that. For now, I am just a kid in a candy store, enjoying the novelty.

muzani · 1h ago
Writing.

I've tried for years to build writing tools with AI. For the most part, I think they don't work well, and the models have become worse (more unnatural) since GPT-3, with the exceptions of GPT-4.5 and Gemini 1.5 Flash.

There are bits you can delegate to AI: Writing punchy intro paragraphs. Brainstorming titles. Starting off dialogue in a certain style, but it can't sustain it for very long. Or dialogue as another person - you often don't want two characters with similar language.

Writing is thinking. You can rubber duck it for ideas. And it does bounce back some good ones. But you can't expect it to do the heavy work.

Lately, I've been reversing the dynamic - getting AI to generate the bullet points while I write the document. The last straw was when I got it to summarize a doc, and then got it to do work based off the doc it wrote. It would get half the work wrong.

rzz3 · 3h ago
For me, it’s increasing by the day. I have no interest in limiting or stopping it, and every day I’m working to improve my workflows and interactions with AI. And I say that as a software engineer with 20-something years of experience; I’m not a new kid on the block.
iainctduncan · 8h ago
I personally do not use LLMs (knowingly) at all, largely for environmental reasons. The improvements are, to me, not worth the atrocious energy use. I will happily pay 20% or whatever more in both money and time.

I also have no interest in technology that impedes my skill development. I do not want to use anything that makes me a worse writer over time.

YMMV, I am answering the OP not evangelizing. Counter arguments will be ignored.

harvey9 · 40m ago
You don't owe anyone a back and forth discussion anyway, so the last line comes off as redundant.
throwaway3b03 · 7h ago
> YMMV, I am answering the OP not evangelizing. Counter arguments will be ignored.

Reminds me of the Monty Python Arguing sketch.

edg5000 · 3h ago
My only complaint about AI is that it's currently hard to run locally; you rely on a third party who suddenly has access to a large portion of your code. That is a real downgrade from before AI, when I had absolutely everything locally except Google search. But other than that, the bottleneck is with me, not with the AI. I think I can get a lot more out of it by getting more experienced at offloading work to AI. It's just so good how I can let it run bash commands on my system and figure out my problems. Too good of a utility to pass up.
ddingus · 9h ago
What did not work?

Many things appear to work at first, right? Most of the time, using AI seems great, until one spends a lot of time working out all the important details. A bunch of prompts later...

Yeah.

Sometimes it is nice to begin with something, even if it is wrong. AI is great for that.

Funny how often we end up writing in response to errors! Out it comes, like that fire hose trope.

In that vein:

Proposal templates and other basic creation tasks can start with a boost.

Oh, a surprising one was distilling complex ideas into simple, direct language!

And for code, I like getting a fragment, function, whatever all populated, ready for me to just start working with.

Bonus for languages I want to learn more about, or just learn. There are traps here. You have to run it with that in mind.

Trust, but verify.

What did not work:

Really counting on the things. And like almost everyone, I suppose, I will easily say I know better. Really, I do, but... [Insert in here.]

Filtering of various kinds.

I may add to this later.

klauserc · 7h ago
Using gen AI for anything artistic (illustrations, music, video, creative writing) is a dead end. The results are soulless and bland. People notice immediately.

Code completions are fine. Driving code through chat is a complete waste of time (never saves time for me; always ends up taking longer). Agentic coding (where the LLM works autonomously for half an hour) still holds some promise, but my employer isn't ready for that.

Research/queries only for very low stakes/established things (e.g., how do I achieve X in git).

noir_lord · 3h ago
I don't use it at all.

I'm not delegating my thinking to a machine that can't think.

Learned helplessness as a service isn't a thing I want, and I worry that long term it will make me think less deeply in ways I can't predict.

austin-cheney · 2h ago
I use AI in Google search results because I have not bothered to turn it off and it’s just there. Otherwise I have avoided AI.
nottorp · 3h ago
I neither stopped nor limited it. That's because I use it in moderation when it makes sense (at least to me) and not with religious fervor.
ddingus · 9h ago
I did not stop.

I do limit my use today, compared to a few months ago.

Most of that is having successfully mapped out the use cases that make sense; I find myself doing less seeking. Where it is a net gain, go; otherwise, why bother?

Kiyo-Lynn · 5h ago
I haven’t stopped using AI, but I use it less than I did a few months ago. Now I mostly turn to it when I’m stuck or need inspiration. Using it less actually made me more efficient.
kadushka · 4h ago
I’m thinking about limiting the extent I use it for coding. Just to stay sharp. Need to exercise my brain more.
edbaskerville · 6h ago
Haven't tried it yet. I hear it's having some impact!
paulcole · 1h ago
None. I try to use AI more every day.

I’m also type 1 diabetic and this is like asking me to what extent I have stopped or limited my use of insulin.

AI and insulin (to different extents) make my life better in significant ways. Why would I stop or limit that?

southernplaces7 · 3h ago
I tried it for writing, and while the main LLMs do a decent job of vomiting out somewhat wordy but essentially okay text if you want generic content on a specific subject, there's always a distinctly generative feel to it, at least in my impression. The real problems emerge when you ask for technical or data-rich writing. The little invented or "mistaken" details are just too frequent for it to be useful, unless you do so much editing that you might almost as well have written what you wanted yourself.

Given the above, it's useful as hell for generating templates and usable starters for creating your own work when you're feeling stuck, and that's mainly it for me.

dahuangf · 8h ago
"I've been using it continuously without restrictions, trying to find ways to make AI smarter because it really helps improve efficiency!"
trod1234 · 4h ago
I don't use AI at all, primarily because I believe it's harmful, and I am quite mindful of things.

I've observed colleagues who have used it extensively. I've often been a late adopter of things that carry unspecified risk, and AI was already on par with Pandora's box in my estimation when the weights were first released; I am usually perceptually pretty far ahead of the curve naturally (and accurately so).

Objectively, I've found these colleagues' attitude, mental alacrity, work product, and abstract reasoning skills have degraded significantly relative to their pre-AI work. They tried harder, got more actual work done, and were able to converse easily and quickly before. Now it's "let me get back to you," and you get emails which have quite clearly been put through an LLM, with no real reasoning happening.

What is worse, it has happened in ways they largely do not notice, and when objective observations are pointed out, they don't take kindly to the feedback, even though the issue is not with them but with their AI use and the perceptual blind spots it takes advantage of. Many seem to be adopting destructive behaviors common to junkies with addiction problems.

I think given sufficient time, this trend will be recognized; but not before it causes significant adverse effects.

Webstir · 8h ago
Not at all. I'm an attorney. Fast law is bad law. If you're using LLMs (stop calling it AI ffs -- only morons parrot the marketing hype) to do law, you're just asking to get slapped sooner or later. Like the moron that is My Pillow Mike Lindell's lawyer. See here: https://ia801706.us.archive.org/34/items/gov.uscourts.cod.21...
sshine · 4h ago
> LLMs (stop calling it AI ffs -- only morons parrot the marketing hype)

Researchers have called much less intelligent things AI since 1956.

Before there were GPTs, there were RNNs and CNNs. AI is the field of study.

Webstir · 8h ago
Extent. The word you're looking for is extent. Sorry. Word nerd.
beardyw · 4h ago
I've been reading John Evelyn's diary, among others from the 17th century. There is absolutely no sense there is a correct spelling of a word, as writing is correctly seen as speaking written down. Correct spelling is a modern invention of limited value (but helpful for AI).
topato · 7h ago
That was mean. I'm sorry... But the point stands.
topato · 7h ago
Not really a "word nerd" situation. It's a really obvious typo. Might as well have said, "tee hee, I'm, like, autistic for words".