I’m a dev working with AI to build tools for others, but I don’t use them personally. Why? Because they make your writing sound like everyone else, they produce shoddy and broken code (unless you’re doing something really commonplace), and they dull your own creativity. If you’re relying on someone else to do your work, you’re going to lose the ability to think for yourself.
AI is built essentially on averages. It’s the summary of the most common approach to everything. All the art, writing, podcasts, and code look the same. Is that the bland, unimaginative world we’re looking for?
I love the bit in the study about the “fear” of AI. I’m not “afraid” it’ll produce bad code. I know it will; I’ve seen it do it 100 times. AI is fine as one tool to help you learn and think about things, but don’t use it as a replacement for thinking and learning in the first place.
standardUser · 2h ago
> they produce shoddy and broken code
We must have dramatically different approaches to writing code with LLMs. I would never implement AI-written code that I can't understand or prove works immediately. Are people letting LLMs write entire controllers or modules and then just crossing their fingers?
stackskipton · 1h ago
Yes. The VAST majority of developers are working in feature factories where they pick a Jira ticket off the top, probably with a timeboxed amount of work. Their goal is to close the Jira ticket within that timebox by getting the build to go green and the PM to accept that "Yep, that feature was implemented." If badly written LLM code gets the build to go green and the feature accepted, whatever: Jira ticket closed, paycheck collected. Any downstream problems are tomorrow's problem, once the tech debt piles up high enough and Jira tickets to fix it get written.
ofjcihen · 2h ago
In my experience: Yes.
Doing security reviews for this content can be a real nightmare.
To be fair though, I have no issue with using LLM-created code, with the caveat being YOU MUST UNDERSTAND IT. If you don't understand it well enough to review it, you're effectively copying and pasting from Stack Overflow.
standardUser · 1h ago
At least with Stack Overflow there's upvotes and comments to give me some confidence (sometimes too much confidence). With LLMs I start hyper-skeptical and remain hyper-skeptical - there's really no way to develop confidence in it because the mistakes can be so random and dissimilar to the errors we're used to parsing in human-generated content.
Having said that, LLMs have saved me a ton of time, caught my dumb errors and typos, helped me improve code performance (especially database queries), and even clued me in to some better code-writing conventions/updated syntax that I hadn't been using.
stackskipton · 1h ago
Also, with most copy-and-pasted Stack Overflow code, you can Google the suspicious code, find the link to it, read over the question/comments, somewhat grok the decision, and maybe even find a fix in the comments.
Most AI code does not come with its prompts, and even if it did, there is no guarantee that the same prompt will produce the same output. So it's like reading human code, except the human can't explain themselves even if you have access to them.
ofjcihen · 2h ago
I need to second all of these points and add in the additional reason I don’t use it often: unless it’s a very common use case and I just need some boilerplate starting code I already know I’m going to spend more time fixing the issues it creates than if I just write it myself.
BoiledCabbage · 2h ago
> AI is built essentially on averages.
It is, but that also means if you prompt it correctly it will give you the answer of the average graduate student working on theoretical physics, or the average expert on the historical inter-cultural conflict of the country you are researching. Averages can be very powerful as well.
andy99 · 1h ago
Research has no average; there's opinion and experience and nuance. This whole "graduate level" thing (no idea if that's what the parent comment refers to) is so stupid, and it's marketing aimed at people who have never done research or advanced studies.
Getting an average response by necessity gives you something dumbed down and smoothed over that nobody in the field would actually write (except maybe to train an LLM or contribute an encyclopedia entry).
Not that having general knowledge is a bad thing, but LLM output is not representative of what a researcher would do or write.
Jach · 54m ago
One thing the "graduate level" concept reminds me of is Terence Tao's semi endorsement almost a year ago: https://mathstodon.xyz/@tao/113132502735585408 People quote the "The experience seemed roughly on par with trying to advise a mediocre, but not completely incompetent, (static simulation of a) graduate student." part but ignore all the rest of the nuance in the thread like "It may only take one or two further iterations of improved capability (and integration with other tools, such as computer algebra packages and proof assistants) until the level of "(static simulation of a) competent graduate student" is reached, at which point I could see this tool being of significant use in research level tasks." or "I inadvertently gave the incorrect (and potentially harmful) impression that human graduate students could be reductively classified according to a static, one dimensional level of “competence”."
ofjcihen · 2h ago
I see this argument all the time. That the user must not be prompting correctly.
In my experience the way you prompt is less important than the “averageness” of the answer you’re looking for.
Jach · 1h ago
Talking about averages is really misleading. Talk about capabilities instead, framed in tool language if you must.
Quoting https://buttondown.com/hillelwayne/archive/ai-is-a-gamechang... about https://zfhuang99.github.io/github%20copilot/formal%20verifi... : "In the post, Cheng Huang claims that Azure successfully used LLMs to examine an existing codebase, derive a TLA+ spec, and find a production bug in that spec." This is not the behavior of the "average" anything.
Take it from someone in the business of exploiting race conditions for money: that's about as average as you can get. Additionally, whatever Azure considers "traditional" methods may just be bare-bones, poorly optimized automated code reviews, given the egregious issues they've had in the past.
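To make "about as average as you can get" concrete, here is a minimal, hypothetical sketch of the classic check-then-use (TOCTOU) file race (textbook stuff, not code from any real review):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* Classic TOCTOU bug: the path checked by access() can be swapped
       (e.g. for a symlink to a sensitive file) before open() runs. */
    int append_msg_racy(const char *path, const char *msg) {
        if (access(path, W_OK) != 0)              /* time of check */
            return -1;
        /* ...window where another process can replace `path`... */
        int fd = open(path, O_WRONLY | O_APPEND); /* time of use */
        if (fd < 0)
            return -1;
        ssize_t n = write(fd, msg, strlen(msg));
        close(fd);
        return n < 0 ? -1 : 0;
    }
    /* The usual fix: drop the access() check, open with O_NOFOLLOW,
       and validate the opened fd (fstat) instead of the path. */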
As a side note: LLMs by definition do not demonstrate "understanding" of anything.
bawolff · 2h ago
The weirdest thing about AI is how shocked people who like AI are that some people don't use it.
While I'm sure it's a useful tool in some situations, and I don't begrudge anyone who finds value in it, it simply doesn't fit into my life as something that would be useful on a regular basis.
analog31 · 2h ago
There's an element of political correctness here. It's hard to look your friend in the eye and tell them that the stuff they're writing isn't worth reading.
In a similar vein, when people find out that I ride a bicycle, their first question is why I don't ride an e-bike.
ofjcihen · 1h ago
Any time I’ve had someone express this kind of thinking they’ve either been a non-dev or someone who writes CRUD exclusively.
Devs and others recognize that the tech is very useful but not "magic".
tforcram · 2h ago
I just had that conversation with a coworker last week; they started with 'I wonder if there is anyone left who isn't using AI daily?', and I had to reply with... 'um, well, me actually'.
I only occasionally try it out for specific tasks and have never felt the inclination to make it part of any daily process, but his mindset was such that he couldn't conceive of anyone not wanting to fully dive in every day, and that those who didn't were missing out on significant value to their lives.
mdaniel · 2h ago
I wish OP had linked to the actual study, because that blurb is a press release: Resistance to Generative AI: Investigating the Drivers of Non-Use - https://scholarspace.manoa.hawaii.edu/server/api/core/bitstr...
All the reasons given are fears.
My disuse is all about flow and value, not fear. The way I use it is in refining ideas at a higher level, not outputting code/content/etc. (except for rote work).
lucas_membrane · 22m ago
Fear? I suppose that any negative evaluation can be stereotyped as fear or lack of intrepidity, but perhaps the repeated use of that label is projection -- that the article was written by an AI believer who fears that AI might have to recognize some realities beyond its purview. Or maybe the article was written by an AI that has learned that fear is what we fear.
Human thought is implemented by a system that has adapted for hundreds of millions of years in diverse environments. We are adapted to huge variations in resources, threats of innumerable kinds, climates, opportunities, social and ecological relationships, etc, and many of its adaptations may be adaptations to control, balance or modify its other adaptations. It would be crazy to expect human intelligence to be what we could describe as optimized for something, and it would be crazy to expect humans to be able to figure out what that something is even if that were true. Perhaps our minds have gotten us here, and they cannot get us out of here, but they maintain some pretty strong links to our natural environment, which is still our landlord.
AI, OTOH, is a new kind of creature of a single time and a monoculture -- the internet. I don't talk to AI; perhaps someone has asked AI how much fear we should have of AI, and what the odds are.
Jach · 1h ago
Programmers are usually the minority. The introduction mentions that ChatGPT reached 100 million users faster than any other consumer technology in history. There aren't even that many programmers worldwide. In their table 3 of non-use scenarios, programming isn't an explicit one while "creating poetry" is. (Despite mentioning CoPilot use as one of their pre-screen options. Perhaps in the 24 situation codes they came up with, one of the 4 they removed for table 3 due to having the greatest reported AI usage was programming, as this study is more about non-usage.) To put yourself in the mindset of a study participant, go through each of those scenarios and ask yourself if you've used the AI for that (and would use it again) or not, and why.
They also only surveyed a few hundred people via Prolific.
The product success (millions of users) implies that for most people, concerns over "ease of use" (which is what I'd code your reason of "flow" as) aren't common, because it's quite easy to use for many scenarios. But I'd still expect the concern to come up for those talking about using it for artwork, because even with things like inpainting in a graphics editor it's still not exactly easy to get exactly what you want... The study mentions they consolidated 29 codes into the 8 in table 2 (you missed the two general concerns, Societal and Barrier). Perhaps "ease of use" slides into "Barrier", as they highlight "lack of skill" as fitting there and that's similar. It would be nice to see a sample of the actual survey questions and answers and coding decisions, but hey, what is open data, am I right.
Anyway, the table headings are "General Concerns" and "Specific Concerns". I wouldn't get too hung up on the use of the term "fear" as the authors seem to use it interchangeably with "concern". I'd also read something like "Output Quality: fears that GenAI output is inaccurate..." synonymously as "has low confidence in the output quality of GenAI". (I'd code your "value" issue as one about output quality.) All of these fears/concerns/confidence questions can be justifiable, too, the term is independent of that.
fpoling · 1h ago
When ChatGPT appeared, I had been working for a couple of months at a small startup that was planning to hire another programmer.
The CTO became extremely enthusiastic about ChatGPT, said that programming would be a dying job, and tried to show during a presentation how good ChatGPT was by asking it to write some basic code related to our tasks. It produced total garbage that could not be used even as a starting point. The CTO tried to prompt it in the needed direction, but that made things worse.
After the presentation I tried to search for the task from the presentation. It turned out there were very few Stack Overflow or GitHub entries about it, as the topic was rather specialized, and ChatGPT had tried to average those few into a solution.
Within a month, I and another recent hire departed from the company. And a year later the company was hiring programmers again.
Out of curiosity I have repeated the task a few times with different models, every time resulting in the same garbage.
So my rule of thumb is that if a task generates a lot of search hits, then perhaps an LLM can average the knowledge into something reasonable. If not, averaging is not possible.
diggan · 1h ago
Re the ethics part, something I haven't quite worked out for myself yet:
On one hand, training isn't "copying" per se, but "learning", so maybe it isn't straight-up copyright infringement, unless the model can reproduce large parts identically. It could also allow small teams/individuals to have a much larger impact in the world and could lower the barrier to entry for research and experimentation, maybe even other endeavors. It certainly could help with knowledge sharing and accessibility, where downstream creativity and usefulness can outweigh diffuse individual harm. Maybe it expands the creative field rather than shrinks it; that'd be a good thing.
But then on the other hand, many models (datasets) are built from copyrighted works without permission or royalties, with the effect that LLM availability could reduce demand for human work, eroding fields instead of expanding them. Most releases today are rather opaque about their training datasets, most are undisclosed, and it's hard if not impossible for authors to have any agency over whether their work is included or not. Maybe if LLMs remain, it'll become hard to sustain cultural production; that'd be good for no one.
So what is the best approach for someone who doesn't want to forfeit the usefulness they themselves experience, but also doesn't want to go directly against what the ethical considerations bring up? In the end I don't know if there is an easy or right side to take; I guess the optimum usually sits somewhere around the middle, not at the extremes at least.
johncole · 2h ago
> While some people might worry about an AI apocalypse,
Wait, so if you’re worried about an ai apocalypse, you’re not using it. What does that solve?
> Steffen and Wells found that most non-users are more concerned with issues like trusting the results, missing the human touch or feeling unsure if GenAI is ethical to use.
Are the ethical users avoiding LLMs entirely? Or only for certain use cases?
add-sub-mul-div · 2h ago
> Wait, so if you’re worried about an ai apocalypse, you’re not using it. What does that solve?
It solves acting according to one's principles and against what one perceives as harmful? I don't understand the question.
analog31 · 2h ago
For me what matters is how I ration my attention. Time spent reacting to the AI could be spent thinking or working.
With that said, getting it to create boilerplate code is pretty useful, but not all that important a part of my job.
ksynwa · 2h ago
> A photo illustration created by AI depicting someone skeptical of using AI.
> Photo by Nate Edwards/BYU Photo
So is it a photograph or AI generated?
Retr0id · 2h ago
Just looking at it, I don't think I've ever been more on-the-fence about whether an image is AI or not. It has AI vibes, but no especially obvious AI artefacts. Nate Edwards is definitely a real photographer, too.
mixmastamyk · 2h ago
I don’t use them, yet. Am open to it, but no longer trust most tech companies. Probably an open model in several years on a beefy yet affordable machine I’ve yet to purchase.
Also am at the peak of my game, and automated templates, snippets, and Stack Overflow lookup a decade+ ago. I prefer reading a discussion of the tradeoffs between approaches before picking one. It may take up to ten more minutes up front but save hours later.
So waiting for the dust to settle.
TimorousBestie · 2h ago
Work kind of guilt-tripped me into giving it a shot and my experience with Claude was such a time-waster that I’ve kinda been ignoring it and continuing to do my own thing.
Maybe other people are better at prompting or designing VS Code integrations, but what I've experienced so far has been a mess. Utterly nonsensical design decisions; it doesn't seem to understand basic linear algebra or the LAPACK API. (I tried adding the Fortran or C source to its context to no avail.) I asked it to rewrite a well-documented scalar function using AVX intrinsics and... woof. No good.
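For anyone curious what kind of rewrite I mean, here's a rough, hypothetical sketch (not the actual function in question) of a scalar loop and the AVX version one would hope to get back:

    #include <immintrin.h>   /* AVX intrinsics (compile with -mavx) */
    #include <stddef.h>

    /* Scalar reference: sum of squared differences of two float arrays. */
    float ssd_scalar(const float *a, const float *b, size_t n) {
        float acc = 0.0f;
        for (size_t i = 0; i < n; ++i) {
            float d = a[i] - b[i];
            acc += d * d;
        }
        return acc;
    }

    /* AVX version: 8 floats per iteration plus a scalar tail.
       (Summation order changes, so results may differ slightly.) */
    float ssd_avx(const float *a, const float *b, size_t n) {
        __m256 vacc = _mm256_setzero_ps();
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 vd = _mm256_sub_ps(_mm256_loadu_ps(a + i),
                                      _mm256_loadu_ps(b + i));
            vacc = _mm256_add_ps(vacc, _mm256_mul_ps(vd, vd));
        }
        float lanes[8];
        _mm256_storeu_ps(lanes, vacc);
        float acc = lanes[0] + lanes[1] + lanes[2] + lanes[3]
                  + lanes[4] + lanes[5] + lanes[6] + lanes[7];
        for (; i < n; ++i) {          /* leftover elements */
            float d = a[i] - b[i];
            acc += d * d;
        }
        return acc;
    }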
Hopefully the field either improves dramatically in a couple years or goes back into hibernation for the next AI winter.
TheCleric · 2h ago
Same. I gave up after 30 minutes of correcting it when it kept suggesting code to use that was completely invalid.
herbst · 2h ago
There are several companies with proper privacy terms offering available models as pay-per-use at a fair price.
gunnarmorling · 2h ago
The other day, I came across a blog post by someone I really value and it sounded very much like it was written by AI. So I decided to be very explicit and transparent about my own ways of using (and not using) AI for my blog: https://www.morling.dev/ai/.
TL;DR: I don't use it for writing (I want to say something original in my own voice), but I do use it for copy editing (improving wording, helping with title ideas, etc.).
Cornbilly · 2h ago
For me, it's only really useful as an enhanced search engine and using it to do any real work usually just leads to more issues than if I just did it manually.
The vast majority of uses are your typical Silicon Valley hype: jargon-filled bullshit that sells half-baked products to the tech-illiterate folks in the C-suite.
drivingmenuts · 1h ago
Of the four top concerns (Output Quality, Ethical Implications, Risk and Human Connection), I agree with the first three and am ambivalent about the fourth. I think the first three are also inseparably interlinked. The Human Connection issue is a bit different - that's more about the individual than the technology, to my mind. As long as no one is forced to use an AI and as long as final decisions are made by responsible humans, we might be OK.
ost-ing · 2h ago
I'm pretty close to not using it, mainly because I rage trying to explain something to what is akin to a 6-year-old while it's smearing shit on the walls.
True engineering requires discipline; anything short of this philosophy is brain rot, and you will pay the price in the long term.