I've never been so conflicted about a technology

35 points by speckx | 38 comments | 5/15/2025, 3:45:32 PM | marcjenkins.co.uk ↗

Comments (38)

JKCalhoun · 9m ago
> Firstly, there’s the environmental impact.

Surely, like everything else in tech, this too shall pass. I expect power requirements to fall away, since there are no doubt strong incentives to reduce them.

> My naive optimism led me to believe that technology would help us fight climate change.

Yeah, proceeding full steam ahead with planet destruction and praying tech will save us is kind of naive. You're not alone though.

> There are also ethical concerns regarding the methods used to obtain data for training AI models.

This one hasn't registered as problematic for me. Maybe I'm unethical?

> Content creators can't even determine which parts of their work were used to train the model…

"Content creators" don't develop their style in a vacuum either. I'm not conflicted about this one either.

> LLMs are also contaminating the web with generic content produced without care or thought.

The web has been contaminated ever since SEO. Maybe AI will kill the web. So it goes.

BugsJustFindMe · 5m ago
> I expect power requirements to fall away

This sounds extremely naive to my ears. Performance demands currently outpace resource utilization reductions and it's not clear that we're at the point where that will change soon. Also: https://en.wikipedia.org/wiki/Jevons_paradox

> > There are also ethical concerns regarding the methods used to obtain data for training AI models.

> This one hasn't registered as problematic for me. Maybe I'm unethical?

Yeah, the real ethical concern comes from the explosion of pernicious slop output that is utterly destroying everything good about the internet and media, not the training.

roywiggins · 22m ago
> Models require massive amounts of electricity and water (to cool servers).

Do they?

https://prospect.org/environment/2024-09-27-water-not-the-pr...

> training GPT-3 used as much water as just under twice what is required for the average American’s beef consumption. In other words, just two people swearing off beef would more than compensate.

https://andymasley.substack.com/p/a-cheat-sheet-for-conversa...

> It would be a sad meaningless distraction for people who care about the climate to freak out about how often they use Google search. Imagine what your reaction would be to someone telling you they did ten Google searches. You should have the same reaction to someone telling you they prompted ChatGPT.

Aurornis · 49s ago
I agree with your general point, but water usage numbers for farming animals are notorious for being exaggerated. There are studies that take all of the rainfall multiplied by all of the grazing land the cows are allowed on and include that as “water used”.

Water usage is also a nuanced concept because water isn’t destroyed when we use it. You have to consider the source and incidentals of processing it, as well as what alternatives it’s taking away from. None of this fits into convenient quotes though.

I think energy usage is a much better metric because we can quantify the impacts of energy usage by considering the blended sum of the sources going into the grid.

perrygeo · 3m ago
I don't understand the "whataboutism" angle. Running LLMs requires massive amounts of electricity and water. Period.

That beef consumption and other activities also require massive amounts of resources is independent. Both can be true. And both need to be addressed. We no longer have the luxury to pick and choose which climate interventions we find most convenient; we need to succeed on all fronts. If the patient is dehydrated and bleeding, do doctors sit around debating if they should give them water or gauze? No, they do both immediately.

jebarker · 13m ago
> “If you’re going to use generative tools powered by large language models, don’t pretend you don’t know how your sausage is made.”

This isn't a very hopeful quote given how many people continue to eat sausages even though we all know how sausages are made.

bdangubic · 1m ago
and why focus on LLMs... there is a shitton of other things that use power/resources/... if we are going to start worrying here we should consistently apply this across everything...
staunton · 4m ago
> we all know how sausages are made.

I'm doubtful. There are a few documentaries showing the process, and I've had people tell me multiple times that they were genuinely shocked by it. I'm assuming those people "knew" and it's just that knowing "it's all the waste at the butcher getting stuffed into guts" is not quite the same as seeing it first hand.

spyckie2 · 25m ago
This author could live in the Bronze Age and not a single thing would change about this article.

“I’ve never been so conflicted about a technology. Of course we are talking about iron smelting and its effects on the environment. Look I used an iron hoe and was impressed but have you seen how it was made? Look at all the waste and smoke and wood burned.

If you use an iron tool yourself, at least know how it was made.”

frereubu · 18m ago
I see where you're coming from, but the planet was not on the verge of a climate breakdown at the time. I think that's the implicit context in the blog post that your reply misses.
roywiggins · 15m ago
The marginal impact of AI on global climate probably rounds down to zero, or at least is not the thing driving us over the cliff:

> Global energy consumption in 2023 was around 500,000 GWh per day. That means that ChatGPT’s global 3GWh per day is using 0.0006% of Earth’s energy demand.

https://andymasley.substack.com/p/a-cheat-sheet-for-conversa...
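The quoted percentage checks out arithmetically. A quick sketch, using only the two figures from the quote above (500,000 GWh/day global consumption, 3 GWh/day for ChatGPT; both are the cheat sheet's estimates, not independently verified):

```python
# Sanity check on the quoted figures (taken from the linked cheat sheet).
global_daily_energy_gwh = 500_000  # estimated global energy consumption, GWh/day
chatgpt_daily_energy_gwh = 3       # estimated ChatGPT usage, GWh/day

share = chatgpt_daily_energy_gwh / global_daily_energy_gwh
print(f"{share:.4%}")  # prints 0.0006%
```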

lucianbr · 7m ago
How much of that 500 TWh would disappear if you rounded to zero everything that consumes 3 GWh per day or less? And keep in mind you are calculating for a single app (ChatGPT), not for an entire industry or technology. Don't look at "civil aviation" but at each airline in particular.

I suspect nearly all of it. You framed it such that it appears insignificant, but with this framing nothing is significant.

frereubu · 10m ago
That may be true, but I guess my approach to it would be that as a general principle we shouldn't be using things that increase energy usage for little real gain, and this is something that is in my professional field that I can do something about personally. It does come somewhat from a place of feeling like the world as a whole is absolutely failing to get to grips with the significant large-scale changes that are needed.
roywiggins · 3m ago
A single hot shower uses more watt-hours than most people spend on ChatGPT in a week, or a month. If people really want to cut down on how many watt-hours they're using, there are dozens of things they should do first. If people spend more time worrying about their ChatGPT energy use than they spend going vegan, they have been grossly misinformed about the relative impact.

(To be clear: I'm not vegan. But most people, incl. myself, could be quite easily, and it would save several orders of magnitude more energy and water.)

casey2 · 10m ago
The reason we have any environmental regulations at all was mass death and disease caused by technology. Far more people died due to the agricultural revolution than will ever die due to climate change in the next few decades, even in the worst predictions.

The Black Death alone killed 25-50 million people in 7 years.

acomjean · 16m ago
I think AI is interesting, but I wonder what it will do to progress when used very extensively.

Trained on all the things/ideas of the past, and creating a larger barrier to entry for new ideas.

Thinking about new computer languages, who would use one that AI couldn’t help you code in?

blizdiddy · 7m ago
When ChatGPT came out, I thought we would stagnate on new languages, but not anymore. Now we have small models that can be fine-tuned for less than $20, and we have huge models that can learn a new programming language in-context if you provide the docs.
lordnacho · 29m ago
I think it's harsh to start the accounting so soon after the breakthrough.

Yes, of course AI uses a lot of energy. But we have to give it a bit of time to see if there are benefits that come with this cost. I think there will be. Whether the tradeoff was worthwhile, I think we are not even close to being able to conclude.

For something like social media, which has a good long while behind it, I could accept starting to close the book on the plus-minus.

jebarker · 16m ago
You can't have it both ways, i.e. deploy the technology as rapidly and widely as possible but then say you have to wait to do any accounting of the pros/cons.
_rpxpx · 16m ago
"I've never been so conflicted"? Err, where's the sense of conflict? All I see is: "it sucks, it's helping kill the planet, but I need a job."
codydkdc · 32m ago
> Firstly, there’s the environmental impact. Models require massive amounts of electricity and water (to cool servers). An AI chatbot contributes more to the climate crisis than a Google search.

I really dislike the "people should be better and use less energy" argument for solving macro-problems like this

> My naive optimism led me to believe that technology would help us fight climate change. I was wrong: AI and Crypto are net negatives in this regard.

...why? why would technology that specifically requires a lot of energy help "fight climate change"?

this entire article misses the point for me -- it's very vague and speaks in generalities.

> “If you’re going to use generative tools powered by large language models, don’t pretend you don’t know how your sausage is made.”

why? if you generate code with an LLM, then read and deeply understand the code, what's wrong?

> I can’t help but feel the web would be a better place if the technology had never existed in the first place.

if we didn't invent the wheel, we wouldn't have so many cars/trucks polluting the planet

lucianbr · 4m ago
> ...why? why would technology that specifically requires a lot of energy help "fight climate change"?

There's no good reason it would. Nevertheless, proponents of both AI and crypto have claimed multiple times it would. So I think it is quite fair to bring it up.

https://ia.samaltman.com/

bitpush · 34m ago
Disappointed with the article, especially because it uses "think of the planet" as a weak argument against this technology.

> Firstly, there’s the environmental impact

Their own blog contributes to the climate crisis they are now crying about. Someone in a developing country could write a similar article saying "all these self-publishing technologists are making the climate crisis worse", and it would have a stronger point.

I say this without discounting the real environmental costs associated with technology, but LLMs / AI aren't uniquely problematic.

Your latest macbook, iphone, datacenter, ssds.. all have impact.

frereubu · 30m ago
Your argument seems to flatten everything so it's all the same - "everything has an impact" - but different things have different impacts, and each needs to be measured against its utility. Me ordering a coffee at my local cafe has an impact, but it's a good deal less than me driving from London to Edinburgh. The author's argument, as I read it (and generally agree), is that we don't really need LLMs to get things done, but their use comes with a large environmental cost. I don't think that will stop their use, unfortunately. There's just too much capital behind them at the moment.
Uehreka · 1m ago
Their use does not come with a large environmental cost. The average American lifestyle has a “water footprint” of 1200 “bottles of water” per day. 10-50 ChatGPT queries == 1 bottle of water. If you decide to use ChatGPT but shorten your daily shower by a second or two you will more than offset your total water usage increase.

Thus LLMs don’t have to be that useful to be worth it. And if used in certain ways they can be very useful.

Source (with links to further sources): https://andymasley.substack.com/p/a-cheat-sheet-for-conversa...
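Taking the quoted figures at face value (1,200 bottles/day lifestyle footprint, 10-50 queries per bottle; both come from the linked cheat sheet, not independently verified), the relative increase is easy to work out. The 20-queries-per-day user below is a hypothetical for illustration:

```python
# Rough comparison using the figures quoted above (assumptions, not measurements).
bottles_per_day_lifestyle = 1200  # average American "water footprint", bottles/day
queries_per_bottle_worst = 10     # pessimistic end of the 10-50 queries/bottle range

daily_queries = 20                # hypothetical heavy ChatGPT user

extra_bottles = daily_queries / queries_per_bottle_worst          # 2.0 bottles/day
increase_pct = extra_bottles / bottles_per_day_lifestyle * 100
print(f"{extra_bottles} extra bottles/day, a {increase_pct:.2f}% increase")
```

Even at the pessimistic end, the heavy user adds a fraction of a percent to their daily footprint, which is the comment's point about offsetting it with a slightly shorter shower.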

zdragnar · 16m ago
We don't really need LLMs like we don't really need the internet. We don't really need that blog, nor do we really need Netflix, porn or Facebook.

Individuals value things differently, so attempting to do society-wide prioritization is always going to be a reductive exercise.

For example: Your local cafe doesn't need to exist at all. You could still drink coffee, you'd just have to make it yourself. That cafe is taking up space, running expensive commercial equipment, keeping things warm even when there aren't customers ordering, keeping food items cool that aren't going to be eaten, using harsh commercial chemicals for regular sanitization, possibly inefficient cooling or heating due to heavy traffic going in and out the door, so on and so forth.

Imagine the environmental impact of turning all cafes into housing and nobody driving to go get a coffee.

frereubu · 9m ago
Again, this just feels like throwing your hands up in the air and saying "it's too hard to decide!" But we have to take decisions somehow if we're going to do anything.
roywiggins · 7m ago
Take one fewer hot shower a week and you've saved enough energy to power a lot of ChatGPT queries. Play one fewer hour of Minecraft. Turn off raytracing. Eat one fewer burger per month. All of those things would save more energy than forgoing a few ChatGPT conversations.
roywiggins · 8m ago
It's not a large environmental cost, though.
kelseyfrog · 31m ago
Correct. We have to order by impact and refrain from spending effort on all but the issue at the top of the list.

If we choose to address any lower-priority issue, it is an example of hypocrisy that de-legitimizes the whole project.

nicce · 28m ago
If we had stayed farmers, we wouldn't have these issues. Most people would be still happy. Intelligence is both destruction and savior.
kirubakaran · 4m ago
"Many were increasingly of the opinion that they’d all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans." - Douglas Adams
criddell · 18m ago
> farmers are 3.5 times more likely to die by suicide than the general population

https://www.fb.org/in-the-news/modern-farmer-farmers-face-a-...

nicce · 9m ago
Do you think that study would apply if we went 2,000 years back? The world was kinda different.
vaylian · 21m ago
You use whataboutism[1] to make your point. The web site is certainly not perfect, but in terms of resource usage it is many orders of magnitude less wasteful than current AI models.

[1] https://en.wikipedia.org/wiki/Whataboutism

jll29 · 6m ago
Bias Scanner (https://biasscanner.org) analysis report:

1. There’s no doubt in my mind that AI can boost a developer’s productivity. It’s not perfect, and letting it write code for you has its pitfalls, but if you know your stuff, it can really speed things up. (Opinionated Bias: 0.6) - The sentence presents a subjective opinion on the benefits of AI for developers' productivity as a fact, without acknowledging differing perspectives or the need for a balanced view.

2. It’s easy to see how AI is going to make game-changing advances in all kind of industries. It really is an impressive technology. (Opinionated Bias: 0.6) - This sentence uses subjective language to express a positive opinion about AI's future impact, implying a level of certainty and inevitability that may not be universally agreed upon.

3. But there are huge downsides. (Word Choice Bias: 0.6) - The use of the word 'huge' to describe the downsides of AI introduces a negative connotation, suggesting a significant imbalance in the discussion without providing evidence for the magnitude of these downsides.

4. My naive optimism led me to believe that technology would help us fight climate change. I was wrong: AI and Crypto are net negatives in this regard. (Opinionated Bias: 0.9) - This sentence not only expresses a strong negative opinion about the environmental impact of AI and Crypto but also labels them as 'net negatives' without a comprehensive analysis, presenting a subjective judgment as an established fact.

5. LLMs are also contaminating the web with generic content produced without care or thought. This has been coined “AI slop”: (Emotional Sensationalism Bias: 0.6) - The use of emotionally charged language such as 'contaminating' and 'AI slop' to describe the impact of LLMs on web content quality introduces a sensationalist perspective, aiming to evoke a strong negative emotional response rather than a balanced critique.

6. This mass-produced slop is flooding blogs and social media, often replacing original and helpful content in search results. It’s actively making the web worse. (Emotional Sensationalism Bias: 0.6) - The assertion that AI-generated content is 'mass-produced slop' and 'actively making the web worse' employs hyperbolic language to sensationalize the negative impact, neglecting a nuanced discussion on the coexistence of AI-generated and human-generated content.

7. LLMs are well known for making shit up. The inconsistent or made-up things that LLMs return are known as “hallucinations” and are a fundamental flaw in the underlying architecture. It’s easy to forget that LLMs aren’t intelligent and don’t understand what is right or wrong. Misinformation and fake news have proliferated over the past decade, primarily thanks to social media, and sadly LLMs are only contributing to this trend. (Emotional Sensationalism Bias: 1) - This paragraph employs highly charged and provocative language ('making shit up', 'hallucinations', 'fundamental flaw', 'easy to forget', 'sadly') to emphasize the negative impact of LLMs on information integrity, aiming to evoke strong emotional reactions and oversimplify complex issues.

8. I’ve spoken to dozens of people who use AI and appear oblivious to these issues or bury their heads in the sand and use it anyway (and I count myself in this latter group). (Opinionated Bias: 0.6) - The sentence conveys a judgmental perspective towards individuals' awareness or acceptance of the negative aspects of AI usage, suggesting a lack of critical thinking or willful ignorance without providing a balanced exploration of the complexities involved.

9. There’s a fear that you’ll get left behind if you don’t use AI. Or that AI is coming to replace your job. (Speculation Bias: 0.6) - This statement introduces speculative concerns about the future impact of AI on job security and individual competitiveness, framing these possibilities as immediate and significant threats without a thorough examination of the broader socioeconomic context or the diverse range of potential outcomes.

10. It’s easy to rely on AI because of its utility while ignoring the damage it causes. (Opinionated Bias: 0.6) - This sentence implies a moral judgment on the use of AI, suggesting that individuals overlook or dismiss the negative consequences of AI for the sake of convenience, without acknowledging the complexities of decision-making in adopting technological tools.

11. But the more I think about AI and the companies behind it, the less comfortable I feel using it, and the more gross it seems. I can’t help but feel the web would be a better place if the technology had never existed in the first place. (Opinionated Bias: 0.9) - The concluding statement expresses a strong personal aversion towards AI and its industry proponents, using emotionally charged language ('the more gross it seems', 'the web would be a better place if the technology had never existed') to convey a highly subjective and negative perspective without a comprehensive examination of the technology's multifaceted impact.

The article is biased regarding the topic of AI, particularly in its discussion of AI's environmental impact, ethical concerns, and content generation capabilities. It exhibits a general bias tendency towards portraying AI in a negative light, emphasizing its drawbacks and speculative harms while downplaying or omitting potential benefits and counterarguments.

Percentage of biased sentences: 40%

Most frequent bias: Opinionated Bias (6 occurrences). Average bias strength: 0.69. Overall rating: 0.545.

jll29 · 5m ago
(I thought it would be funny - in a self-referential way - to use our especially fine-tuned LLM to analyze bias in the OP in order to prove that LLMs can be useful by showing how they may be usefully applied to the OP itself.)