Hinton is too speculative and inconsistent for me. A reporter outside the AI field even called him out: he now says with confidence that only blue-collar work will survive AI, yet a few years ago he said with the same confidence that only creative work would survive.
I can't help but compare his takes with Stuart Russell's, which are so well grounded, coherent, and clearly presented. I often revisit Stuart Russell's discussion with Steven Pinker on AI for the clarity he brings to the topic.
theologic · 23m ago
Hinton's citation count is 5x Russell's. There's a good reason why he's won both the Turing Award and the Nobel Prize. He is just an incredible researcher. I would make the argument that sometimes, when you're an incredibly bright, talented person who understands problems many other people are simply incapable of following, you're still not the right person to be setting expectations for how fast a product may ramp into mainstream society.
Russell is much more measured in his statements and much more policy driven.
In my mind you need to listen to both and try to figure out where they're coming from.
chubot · 38m ago
I guess it's worth reminding people that in 2016, Geoff Hinton said some pretty arrogant things that turned out to be totally wrong:

"Let me start by saying a few things that seem obvious. I think if you work as a radiologist, you're like the coyote that's already over the edge of the cliff but hasn't yet looked down. It's just completely obvious that within five years deep learning is going to do better than radiologists. ... It might be 10 years, but we've got plenty of radiologists already."

https://www.youtube.com/watch?v=2HMPRXstSvQ

This article has some good perspective:

https://newrepublic.com/article/187203/ai-radiology-geoffrey...

His words were consequential. The late 2010s were filled with articles that professed the end of radiology; I know at least a few people who chose alternative careers because of these predictions.

---

According to US News, radiology is the 7th best-paying job in 2025, and demand is rising:

https://money.usnews.com/careers/best-jobs/rankings/best-pay...

https://radiologybusiness.com/topics/healthcare-management/h...

I asked AI about radiologists in 2025, and it came up with this article:

https://medicushcs.com/resources/the-radiologist-shortage-ad...

The Radiologist Shortage: Rising Demand, Limited Supply, Strategic Response

(Ironically, this article feels spammy to me -- AI is probably being too credulous about what's written on the web!)

---

I read Cade Metz's book about Hinton and the tech transfer from universities to big tech ... I can respect him for persisting in his line of research for 20-30 years while others said he was barking up the wrong tree.

But maybe this late-life vindication left a chip on his shoulder.

The way he phrased this is remarkably confident and arrogant, and not like the behavior of a respected scientist (now with a Nobel Prize) ... It's almost like Twitter-speak that made its way into real life, and he's obviously not from the generation that grew up with Twitter.
gobdovan · 14m ago
Yeah, I'd even forgotten about that... I suppose the same kind of confidence is what made him stick with neural nets for so long, despite mainstream AI considering them a dead end. But that's the thing in academia: bold claims get encouraged, since ideas still earn you the credit even if they prove useful decades later, and not in the way you imagined.
giardini · 22m ago
I wouldn't be too hard on Hinton. Researchers in image processing, geophysics, and medicine have been saying the same thing since at least the early 1980s. There was always something coming, just over the next hill, that would take the human out of the loop. That special something always evaporated with time. I suppose it did keep funding coming in.
treyfitty · 52m ago
Eh, idk who Hinton is, but I'd cut him some slack for making both statements: I could imagine a case where "creatives" can semantically be understood as the "new blue collar." Musicians, dancers, photographers... are not blue-collar manufacturing employees, but they are fiscally more similar to them than to their white-collar counterparts. It's possible he used inconsistent terms because he really means "low-wage employees who are far away from the decisions about where the monetary benefit gets created," but that's a mouthful.
gobdovan · 40m ago
Hinton is the guy from the article. He is a big figure in AI research.
For context: he once argued AI could handle complex tasks but not drawing or music. Then when Stable Diffusion appeared, he flipped to "AI is creative." Now he's saying carpentry will be the last job to be automated, so people should learn that.
The pattern is sweeping, premature claims about what AI can or can't do, which don't age well. His economic framing is similarly simplified, to the point of being either trivial or misleading.
glitchc · 49m ago
If you don't know who Geoffrey Hinton is, I suggest you make a trip to Wikipedia post haste. Our modern LLM renaissance wouldn't exist without him.
nurettin · 22m ago
Ehhh, it sounds like he's a poster boy who rode on the success of others (LeCun, DeepMind), says whatever the current popular opinion is until proven wrong, and shows no hint of predictive capability.
Freedom2 · 22m ago
> too speculative and inconsistent for me.
I wonder if he's a HN commenter as well, in that case.
I do appreciate your mention of Stuart Russell however. I've recently been watching a few of his talks and have found them very insightful.
protocolture · 1h ago
Considering that the general trend has been that a few people get rich and everyone else gets poorer, this is a nothing statement.
Gimpei · 30m ago
This is not true. Look at median household income in FRED. It is unequivocally up for everyone across the board.
Inequality has increased, but it's no longer clear the increase is as severe as Piketty and Saez once argued. So, yes, things could certainly be much better. The US could, for example, benefit from more progressive taxation and a stronger social safety net. But at the same time, we aren't all headed to hell.
pizza · 1h ago
Isn't the difference between "capital -> labor -> capital" and "capital -> AI -> capital" (which is basically just "capital -> capital -> capital") that it's about the elimination of labor through its financialization? Not just that the poor get poorer, but from the POV of the rich that they don't even exist. (Conditional of course upon AI actually really being that much more productive than people and not dependent on them.)
bitmasher9 · 53m ago
AI is just capital, you said it yourself. AI just adds more capital to the capital + labor = more capital equation. The more value capital brings in comparison to labor, the more value capital takes from the result compared to labor.
The issue is the increasing imbalance of capital being overvalued compared to labor, and how that has a negative impact on most individuals.
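The "capital + labor = more capital" framing above can be made concrete with a toy Cobb-Douglas production model. This is my own illustration, not anything the commenter wrote, and the alpha values are arbitrary; under competitive-market assumptions, each factor is paid its output elasticity, so capital's income share is just the exponent alpha.

```python
# Toy Cobb-Douglas model: Y = A * K^alpha * L^(1 - alpha).
# Under competitive markets, each factor earns its output elasticity,
# so capital's share of income is alpha and labor's is 1 - alpha.
def income_shares(alpha):
    """Return (capital_share, labor_share) for capital elasticity alpha."""
    return alpha, 1.0 - alpha

# As automation raises capital's effective contribution (higher alpha),
# labor's slice of the same output shrinks -- the imbalance described above.
for alpha in (0.3, 0.5, 0.7):
    cap, lab = income_shares(alpha)
    print(f"alpha={alpha}: capital takes {cap:.0%}, labor takes {lab:.0%}")
```

The point of the sketch is only that the split is zero-sum within a given output: anything that raises capital's effective elasticity mechanically lowers labor's share, even if total output grows.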
SubiculumCode · 1h ago
Well, when AI replaces us as consumers, the economy will truly leave us behind
some_guy_nobel · 47m ago
Really makes you take a step back and wonder why Luigi was so celebrated. /s
pkrecker · 40m ago
At least for the US (and, I suspect, for most other countries), that is incorrect. When adjusted for inflation, both median wealth and median wages have increased over the last 30 years.
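The "adjusted for inflation" step is the crux of claims like this, so here is a minimal sketch of how the adjustment works. All the numbers below are made up for illustration (they are not actual FRED or CPI figures); the real comparison would use a published price index.

```python
# Hypothetical illustration of inflation adjustment -- the wage and CPI
# numbers below are invented, not real data.
# real value = nominal value * (base-year CPI / that year's CPI)
def to_real(nominal, cpi, cpi_base):
    """Convert a nominal dollar amount into base-year dollars."""
    return nominal * cpi_base / cpi

# Invented nominal median wages and CPI levels for two years:
wage_1995, cpi_1995 = 25_000, 152.0
wage_2025, cpi_2025 = 60_000, 320.0

# Express the 1995 wage in 2025 dollars, then compare like with like.
real_1995 = to_real(wage_1995, cpi_1995, cpi_2025)
print(real_1995 < wage_2025)  # True here: real growth for these made-up numbers
```

Comparing raw nominal wages across decades overstates growth; the deflation step is what makes "median wages have increased" a meaningful statement.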
arduanika · 1h ago
Fake news, based on confounding intra-country data with international data.
imiric · 35m ago
It's not. It directly contradicts the false utopia that tech bros have been selling us of AI solving world hunger, curing all diseases, bringing prosperity to everyone, and similar nonsense.
A statement like this from someone influential is important to break that narrative, despite the HN crowd finding it obvious.
ggm · 3h ago
I am surprised by the direct quotes where Hinton says "by any measure AI is intelligent" because I think these words will come back to haunt him, much as prognostications from Chomsky dogged his heels.
By narrow measures of outcome, AI synthesises answers that meet the needs of questioners. I think intelligence includes aspects of the behaviour (such a word) of a system that go beyond simply providing answers. I don't think AI can do this yet, if ever.
Uehreka · 3h ago
> I think intelligence
That very phrasing betrays the problem with the word: there is no consensus on what intelligence is, what a clear test for it would be, or whether such a test could even exist. There are only people on the internet with personal theories and opinions.
So when people say AI is not intelligent, my next questions are whether rocks, trees, flies, dogs, dolphins, humans and “all humans” are intelligent. The person will answer yes/no immediately in a tone that makes it sound like what they’re saying must be obvious, and yet their answers frequently do not agree with each other. We do not have a consensus definition of intelligence that can be used to include some things and exclude others.
peterashford · 1h ago
We have a model for what intelligence is - what humans do. If we produce a human-like AI I think we'll agree it's intelligent.
The fact that there are degrees of intelligence (dogs > flies) isn't that big of an issue, imo. It's like the night-versus-day argument: just because we can't point to a clear cutoff between the concepts doesn't mean they aren't distinct concepts. So it goes with intelligence. It doesn't require consensus, the same way "is it night now?" doesn't require consensus.
ggm · 44m ago
> I think we'll agree it's intelligent.
If there's one thing I've found never holds true for me, it's almost any sentence of substantive opinion about "philosophy" that starts with "I think we'll agree".
And I do think this AI/AGI question is a philosophy question.
I don't know if you'll agree with that.
Whilst your analogy has strong elements of "consensus not required," I am less sure that applies right now to what we think about AI/AGI. I think consensus is pretty important, and also absent.
ggm · 3h ago
Exemplifies why I think Hinton was on dangerous ground. He is normally far more cautious in his use of language.
imiric · 17m ago
It's frustrating to read this type of response whenever this topic is raised. It does nothing but derail the conversation into absurdism.
Yes, we don't have clear definitions of intelligence, just like we don't for life, and many other fundamental concepts. And yet it's possible to discuss these topics within specific contexts based on a generally and colloquially shared definition. As long as we're willing to talk about this in good faith with the intention to arrive at some interesting conclusions, and not try to "win" an argument.
So, given this, it is safe to assert that we haven't invented artificial intelligence. We have invented something that mimics it very well, which will be useful to us in many domains, but calling this intelligence is a marketing tactic promoted by people who have something to gain from that narrative.
ggm · 1m ago
I am doing this because normally Hinton is my go-to for cautious, useful input to a debate. When he makes this kind of sweeping statement, my hackles go up. The rest of the article had nothing I didn't expect, but I did NOT expect him to make such a sweeping assertion.
They're useful. They're not intelligent. He invited the reproach.
nerpderp82 · 2h ago
His unhinged attacks on Chomsky are comical at this point.
loughnane · 3h ago
I think this is true for most capital equipment, at least until governments step in to take the edge off.
Whether AI has a more powerful effect than its predecessors remains to be seen. It could.
bdangubic · 1h ago
yes, the governments (especially of the united states) love nothing more than to take shit from rich and give it to the poor
roflulz · 58m ago
name a country that is better that isn't a petrostate?
laweijfmvo · 3h ago
I have zero faith in the US stepping in at this point
SonOfKyuss · 2h ago
Oh they are stepping in. It’s just that they are stepping in to prevent any regulations
nextworddev · 2h ago
Reverse regulation
morkalork · 1h ago
And not just in the US either, they've been threatening Europe with tariffs and sanctions over attempting to regulate American tech companies in Europe.
loughnane · 2h ago
Agreed. I think when the history of this time is written the failure of the government to spread around the gains of capitalism and free trade will be seen as what led to the end of a political era.
starchild3001 · 51m ago
"I guess it’s nice that he’s become more optimistic here. He usually just talks about how it’ll probably kill us all." (From Reddit)
mallowdram · 33m ago
Hinton never notices that information isn't psychological; his generation failed at resolving the conduit metaphor, i.e., making the arbitrary in any way specific (unless it's about diagnostic targets). Of course Hinton is a doomer: he's bullish on his own tech's explosive growth, which, as semantic dilution and/or delusion, is really just a huge bust, generally. Hinton will be seen as another Tucker, not a Tesla.
xnx · 1h ago
Thank you for your contributions Geoffrey. Enjoy your retirement. You do not need to be a "thought leader".
arduanika · 1h ago
But wait, what if he says a thing that agrees with all my pre-existing assumptions? Can't he be a "thought leader" then?
deadbabe · 1h ago
More of a thought shepherd. A thought leader is a person who comes up with radical new ideas that start trending as other people subscribe to them.
p1dda · 1h ago
Exactly, his contributions lie in the past.
johnrob · 1h ago
Businesses fall into 2 categories:
B2C (sell to people)
B2B (sell to B2C companies)
If the “C” is broke, it seems like there won't be any rich people. In other words, if the masses are poor and jobless, who is sending money to the rich?
9rx · 10m ago
> if the masses are poor and jobless, who is sending money to the rich?
What would they need it for? Remember, money is just the accounting of debt. Under the old world model, workers offer a loan to the rich — meaning that they do work for the rich and at some point in the future the rich have to pay that work back with something of equal value [e.g. food, gadgets, etc.].
But if in the new world the rich have AI to do the work they want done, the jobless masses can simply be cut out of the picture. No need for their debt in the first place.
seanmcdirmid · 41s ago
We can summarize this as “wealth can exist without money”, all you need is to accumulate lots of land, resources, and labor (robot or otherwise), and you can be wealthy also.
xhrpost · 36m ago
The rich class will have to shrink. Think about poor countries like North Korea, there's still a rich/powerful class, it's just a lot smaller.
Genego · 1h ago
There will always be plenty of money to go around no matter what situation we end up with. It will end up being much more centralized than things are today. But there won't ever be a situation where everyone is going to be poor, even if the masses are.
deadfoxygrandpa · 1h ago
what about fat government contracts
nick49488171 · 46m ago
Isn't that the result of every innovation ever, except for complete revolution?
kristoffer · 38m ago
No. Poverty declined significantly during the 20th century.
umeshunni · 24m ago
It's the opposite.
Most revolutions (the Bolsheviks, Cuba, Iran, the Arab Spring, etc.) have made people significantly poorer, while most innovations (railroads, electricity, the first and second agricultural revolutions, manufacturing) have made people significantly richer.
ks2048 · 8m ago
I would be interested in some quantitative answer to this, although I imagine it would be hard to define. You skipped lots of revolutions (American, French), and of course there are non-economic "costs" (the Terror in France, etc.), as well as the problem of short- versus long-term effects.
An alternative view could be, this is just the same as every other technological innovation.
https://en.wikipedia.org/wiki/Robot_tax
Make it 50% of the sales price like with cigarettes, since "AI" makes people dumber.
pseudocomposer · 1h ago
This is true of everything invented by anyone unlucky enough to live under capitalism.
Rich, greedy people ruin everything.
briandw · 1h ago
The average person in the capitalist West is so ridiculously wealthy compared to people who lived before the Industrial Revolution, or in any communist or socialist country.
deadbabe · 1h ago
Just because you fail to get rich with AI
does not mean you cannot get rich by some other means
Kapura · 46m ago
yeah man this is just capitalism. everybody should read Piketty.