I find it rich that this article seems to glorify the proliferation of the Internet itself, completely ignoring that social media has been driving mass addiction since the late '00s and that the rich have been growing exponentially richer long before LLMs and the like made their splash. Maybe this is because WIRED's business model requires the existing 21st century media landscape to function.
I'm not exactly an "AI optimist", but this is not constructive journalism by any means. There are countless unaddressed, explicitly tech-related issues that would only further metastasise if we arbitrarily reverted and then halted progress to 2021.
UncleEntity · 4h ago
I suppose it could be argued that the proliferation of the internet led to a massive increase in job growth.
AI, on the other hand, seems to (or at least promises to) lead to the polar opposite.
amanaplanacanal · 4h ago
Computers originally felt like they were going to obsolete a lot of jobs too. The Internet is just really good at delivering entertainment to people. Maybe the same thing will happen with AI.
123yawaworht456 · 3h ago
so?
asdf6969 · 3h ago
so most people expect to lose more from AI than they will gain. No amount of AI is going to make up for downward mobility and lost income
Growing backlash, yet it's used by hundreds of millions of people and growing.
DonsDiscountGas · 1h ago
Yes millions of people like it and millions of other people hate it. This isn't a contradiction
Incipient · 4h ago
Yeah this feels like a lot of "backlash" scenarios. The media makes a song and dance about a few loud voices, but even if everyone is thinking it, little will really change - rightly or wrongly.
SkyeCA · 4h ago
By this logic we could deduce that alcohol is also a net-positive for society. A lot of people doing something doesn't necessarily make that thing right, good, or beneficial.
jusssi · 3m ago
How do you know alcohol is not a net positive?
rpdillon · 3h ago
You're conflating whether or not AI is a net good with whether or not there's a backlash against it. Those are different.
kylebenzle · 4h ago
And same with opium, literally MILLIONS of contented users yet people STILL say there is a backlash?
arctics · 3h ago
This article doesn't talk much about the mass hiring during the COVID period due to high demand; what we see now is the unwinding of that trend. It feels like the people behind this type of narrative are interested in regulating what goes into these models.
yahoozoo · 3h ago
LLMs are plateauing. At this point, anyone who has cared enough knows their fundamental limitations. Don’t get me wrong, they do provide immense value and have grown quickly in just a few years. But they will probably never get past the “we must treat it like a junior engineer” phase, especially as LLM outputs inevitably leak back into the training data since everyone is using it (or being forced to by employers). Notice that everything major from these companies lately hasn’t been a vastly improved model over the previous iteration, but products and tooling: see Anthropic allowing you to create in-Claude apps, every company’s own version of an agentic CLI, and so on.
atemerev · 3h ago
I don't know. Like, only eight months ago or so, I was explaining to my colleagues how using AI for coding can actually work, in some cases. Now, I don't know any engineers who don't use AI tools, at least at work (our own pet projects are still fun to do without AI, but at work, AI-assisted productivity is now expected).
If that's "plateauing"...
nicksergeant · 3h ago
It doesn't feel like LLM-assisted coding has gotten materially better in the last 6 months, though. The tooling, sure, but I'm still wrestling with the exact same problems I was at the beginning of the year. That doesn't feel like the exponential improvement people have been prophesying about.
atemerev · 2h ago
I am fine with constant linear improvement too. What model do you use? SOTA is Gemini 2.5-pro. We definitely didn't have anything comparable in 2024.
yahoozoo · 2h ago
I wasn’t talking about their adoption. I’m talking about their abilities.
atemerev · 2h ago
In eight months, we got SOTA from 50% to 83% on Aider's Polyglot benchmark (225 Exercism problems). That increase matches my own experience.
johnea · 1h ago
The article seemed a little fluffy, and I wondered if at least part of it was LLM generated.
However I do agree with the general sentiment.
I find the current hype cycle of LLMs to be similar to the petro industry: there are many useful applications of petroleum without having to set it on fire. And yet the industry is loath to give up on a use that consumes copious quantities of its product.
LLMs have many beneficial applications, in deep data analysis and pattern matching. Yet industry is intent on applying the tech to problems where its results are dubious, because of the mass market of those applications.
So much for the magical black box of the market 8-/ As in every documented case ever: product vendors will take every dollar possible, to hell with consequences.
Think of the radioactive skin cream of the '50s 8-/ Sure, it's "good" for you, and "more doctors choose Camel"... And never mind that whole libtard fake-news that the planet's ecosystem is going to shit... Every patriot knows the little bebe jesus put it all here for us to trash, obviously...
It's in this context that LLMs are definitely the best way to decide whether or not your insurance company pays for you to get a kidney transplant 8-/
As long as I'm on a roll, and related to my first sentence, I _really_ dislike the em dashes 8-/ and their use has really spread with the prevalence of LLM generated text. Not just in the LLM text, but in the text of humans that are influenced by the LLM text.
If people would just use them "incorrectly", that is, with spaces around them, then they would follow the general rule of English writing: to delimit words and phrases with a space.
bpodgursky · 4h ago
What do people think is a realistic outcome here?
In a global economy, no country can stop deployment of consumer AI for digital goods unless you go full North Korea.
If you want some kind of international moratorium I'm all ears, but whining at people for buying AI digital art for $0 instead of graphic designers for $10,000 is utterly pointless. At best, you'll get them to buy art from Philippine designers... who are using AI... for $.50.
I have reservations about AI, but what do you gain with this approach? Guilt and national protectionism are utterly pointless.
old_man_cato · 4h ago
What approach is that?
Some people like AI. They should use it, talk about why they like it and use products that leverage it.
Other people don't like it. They should avoid using it, talk about why they don't like it and not use products that leverage it.
Each side's enthusiasm for their perspective can be shared for the purposes of convincing others that theirs is the correct perspective.
That all sounds pretty fine to me?
milofeynman · 4h ago
I think a reasonable thing folks want is to not have their art or design used to train these models without permission.
123yawaworht456 · 3h ago
did the author of the post you're replying to give you permission to read his comment?
kylebenzle · 4h ago
But that is utterly UNreasonable, of course; that is obviously why the courts did not side with that opinion (also because it's a stupid and naive opinion).
Unless they are saying they don't want anyone "trained" on "their data", it's a phrase that simply makes no sense and is only expressed by people who don't know how the real world works at all.
spacemadness · 3h ago
You can tell who is trying to make money off AI by who calls any criticism of scraping creators' works naive and stupid.
amanaplanacanal · 4h ago
The courts interpret the laws as they exist. I think GP is suggesting a change to the law.
ringeryless · 3h ago
As the rights holder and creator of the content, of COURSE it is reasonable for me to assign these rights as I see fit, and scraping my works without my consent is brazen theft.
I am free to not grant usage to pattern rip-off machines, which explicitly are NOT humans consuming my work in the ways I have in mind.
Spare me the false equivalence between pattern theft and human beings learning or reacting.
The express purpose of said pattern rip-off machines is to obviate me and regurgitate likenesses of my work, so, no, I shall not participate in my own murder.
votepaunchy · 2h ago
The courts have ruled that you do not have these rights to gatekeep, as with the right of first sale.
EA-3167 · 4h ago
> What do people think is a realistic outcome here?
Same as with Crypto: the bubble pops, and while the tech doesn't magically vanish, it stops the seemingly geometric growth driven by speculative investment. The assumption built into all of these debates is so... uncritical, and can be summed up as, "The issues we see with 'AI' (scaling, energy/water demand, hallucinations, running out of data and having to cannibalize itself, etc.) can be solved within the existing frameworks." There is no indication that this is true, and frankly a lot of indications that it isn't.
Buuuuut a chunk of the US economy in the form of NVIDIA is tied up in pretending that it's true, a lot of companies are going whole hog into this, and of course a ton of people here are drawing their salaries based on that assumption. For those of us who aren't in that position, however: much like Crypto was a real tech with hugely overblown marketing, 'AI' is the same.
Winter is coming.
atemerev · 3h ago
Of course there will be displacement of workers. That's entirely the point. "We are in the business of unemploying people", like software engineering in general, but on steroids.
RcouF1uZ4gsC · 4h ago
This drivel is worse than ChatGPT.
There is no deep insight.
It is highly formulaic and mechanical:
1. Find a controversy
2. Invent a trend
3. Get quotes from some people who agree with you
4. Mention “ethics” (all at a very superficial level)
5. Publish
You would be much better served by having a 15 minute conversation with ChatGPT about this topic than reading this article.
ringeryless · 3h ago
I think there is a fundamental, rather simple insight you should probably grasp, instead of shooting the messenger:
The humans, writ large, do not want this LLM revolution.
A handful of well-connected folks overruling this and ramming AI-everything down everyone's throats will not turn this sentiment into acceptance.
Polling is consistent with the article's simple conclusion, so your claim of manufactured outrage rings hollow.
Tagbert · 4h ago
It’s Wired, so there.
jaggs · 4h ago
Clickbait
elif · 4h ago
All the negativity really is an understandable response to so many negative changes to the status quo way of life, but I can't help but take a step back as a realist and say that Pandora's box is open.
That is not to say we are powerless to regulate aspects of how this technology is legally utilized, but this sort of quasi-luddite "turn it off" response is totally infeasible.
The best bet we have for making the AI future suck less is by building things which make life more compatible with AI. That means a revamp of how we handle intellectual property, how we handle education, how we manage an economy humanely in a world where human labor potential is comparatively diminished.
We need to focus upward toward the things that can make humanity able to handle the transition. Not downward toward the sand where we can pretend it isn't happening.
EDIT: please respond if you're going to take away my karma for my honest opinion. I use AI almost every waking hour, but we have to be realists about the impacts on society.
spacemadness · 3h ago
Yeah I’m not sure why you’re getting downvoted so heavily. I don’t fully agree but you’re not trolling in any sense.