This article seems to fall straight into the trap it aims to warn us about. All this talk about "true" understanding, embodiment, etc. is needless anthropomorphizing.
A much better framework is to think of intelligence simply as the ability to make predictions about the world (including conditional ones like "what will happen if we take this action"). Whether that's achieved through "true understanding" (however you define it; I personally doubt you can) or "mimicking" has no bearing on most of the questions about the impact of AI we are trying to answer.
keiferski · 1m ago
It matters if your civilizational system is built on assigning rights or responsibility to things because they have consciousness or "interiority." Intelligence fits here just as well.
Currently many of our legal systems are set up this way, if in a fairly arbitrary fashion. Consider for example how sentience is used as a metric for whether an animal ought to receive additional rights. Or how murder (which requires deliberate, conscious thought) is punished more harshly than manslaughter (which can be accidental or careless).
If we extend this line of thought to LLMs without stopping to think about it, we quickly realize the absurdity of saying my chatbot is somehow equivalent, consciously, to a human being. At least, to me it seems absurd. And it indicates the flaws of grafting human consciousness onto machines without analyzing why we do so.
AIPedant · 2m ago
"Making predictions about the world" is a reductive and childish way to describe intelligence in humans. Did David Lynch make Mulholland Drive because he predicted it would be a good movie?
The most depressing thing about AI summers is watching tech people cynically try to define intelligence downwards to excuse failures in current AI.
cantor_S_drug · 21m ago
Imagine an LLM is conscious (as Anthropic wants us to believe). Imagine the LLM is made to train on far more data than its parameter count allows for. Am I hurting the LLM by causing it intense cognitive strain?
adastra22 · 14m ago
Why would that hurt?
wagwang · 26m ago
Predict and create, that's all that matters.
adastra22 · 10m ago
> Does a model that can see and act begin to bridge the gap toward common sense
Question for the author: how are SOTA LLMs not common-sense machines?
visarga · 5m ago
I think the Stochastic Parrots idea is pretty outdated and incorrect. LLMs are not parrots; we don't even need them to parrot, since we already have perfect copying machines. LLMs are meant to work on new things, that is their purpose; reproducing the same thing we already have is not worth it.
The core misconception here is that LLMs are autonomous agents parroting away. No, they are connected to humans, tools, reference data, and validation systems. They are in a dialogue, and in a dialogue you quickly get into a place where nobody has ever been before. Take any 10 consecutive words from a human or an LLM and chances are nobody on the internet has strung those words together the same way before.
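A rough back-of-the-envelope sketch of that claim, in Python (the vocabulary size and the total-words-ever-written figure below are assumed round numbers for illustration, not measured values):

    # Novelty of 10-word sequences, using assumed round numbers.
    vocab_size = 10_000            # assumed everyday working vocabulary
    sequence_length = 10           # consecutive words considered
    words_ever_written = 10 ** 15  # generous guess at all words ever published

    possible_sequences = vocab_size ** sequence_length  # 10^40 distinct 10-grams
    observed_upper_bound = words_ever_written           # at most ~one new 10-gram per word written

    print(f"possible 10-word sequences: {possible_sequences:.1e}")
    print(f"10-grams ever written (upper bound): {observed_upper_bound:.1e}")
    print(f"fraction of the space ever visited: {observed_upper_bound / possible_sequences:.1e}")

Even with these deliberately conservative assumptions, the space of possible 10-word sequences dwarfs everything ever written, so most 10-grams in a live dialogue really are new.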
LLMs are more like pianos than parrots. We play our prompts on the keyboard and they play their "music" back to us. Whether the result is good or bad depends on the player at the keyboard, who retains most of the control. To say LLMs are Stochastic Parrots is to discount the contribution of the human using them.
Related to intelligence, I think we have a misconception that it comes from the brain alone. No, it comes from the feedback loop between brain and environment. The environment plays a huge role in exploration, learning, testing ideas, and discovery. The social aspect also plays a big role, parallelizing exploration and streamlining the exploitation of discoveries. We are not individually intelligent; intelligence is a social, environment-based process, not a brain-alone process.
theturtlemoves · 1h ago
I've always had the feeling that AI researchers want to build their own human without changing diapers being part of the process. Just skip to adulthood, please, and learn to drive a car without the experience of bumping into things and hurting yourself.
> Language doesn't just describe reality; it creates it.
I wonder if this is a statement from the discussed paper or from the blog author. Haven't found the original paper yet, but this blog post very much makes me want to read it.
Melanie Mitchell (2021) "Why AI is Harder Than We Think." https://arxiv.org/abs/2104.12871
That sentence is not from this paper.
ta20240528 · 54m ago
> Language doesn't just describe reality; it creates it.
I never understand these kinds of statements.
Does the sun not exist until we have a word for it? Did "under the rock" not exist for dinosaurs?
keiferski · 6m ago
I think create is the wrong word choice here. Shaping reality is a better one, as it doesn't carry the implication that before language, nothing existed.
Think of it this way, though: the divisions that humans make between objects in the world are largely linguistic ones. For example, we say that the Earth is such-and-such an ecosystem with certain species occupying it. But this is more of a convenient shorthand than a totally accurate description of reality. A more accurate description would be something like: ever-changing organisms undergo this complex process that we call evolution, and are all continually changing, so much so that the species concept is not really that clear once you dig into it.
https://plato.stanford.edu/entries/species/
Where it really gets interesting, IMO, is when these divisions (which originally were mostly just linguistic categories) start shaping what's actually in the world. The concept of property is a good example. Originally it's just a legal term, but over time it ends up reshaping the actual face of the earth: ecosystems, wars, migrations, on and on.
cpa · 26m ago
The sun can mean different things to different people. We usually think of it as the physical star, but for some ancient civilizations it may have been seen as a person or a god. Living with these different representations can, in a very real way, shape the reality around you. If you did not have a word for freedom, would as many desire it?
sanxiyn · 24m ago
I am not sure how your sun example relates. Language is not the whole of reality, but it is clearly part of reality. The memory engram of Coca-Cola is encoded in billions of human brains all over the world, and those engrams are arrangements of atoms.
rolisz · 29m ago
There are some folks (like Donald Hoffman) who believe that consciousness is what creates reality. He believes consciousness is the base layer of reality and that we make up physical reality on top of it.
> The primary counterargument can be framed in terms of Rich Sutton's famous essay, "The Bitter Lesson," which argues that the entire history of AI has taught us that attempts to build in human-like cognitive structures (like embodiment) are always eventually outperformed by general methods that just leverage massive-scale computation
This reminds me of Douglas Hofstadter, of Gödel, Escher, Bach fame. He rejected all of these statistical approaches to creating intelligence and dug deep into the workings of the human mind [1]. Often in the most eccentric ways possible.
> ... he has bookshelves full of these notebooks. He pulls one down—it’s from the late 1950s. It’s full of speech errors. Ever since he was a teenager, he has captured some 10,000 examples of swapped syllables (“hypodeemic nerdle”), malapropisms (“runs the gambit”), “malaphors” (“easy-go-lucky”), and so on, about half of them committed by Hofstadter himself.
>
> For Hofstadter, they’re clues. “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.”
I don't know when, where, or how the next leap in AGI will come, but it is very likely it will come through brute-force computation (unfortunately). So much for fifty years of observing Freudian slips.
[1]: https://www.theatlantic.com/magazine/archive/2013/11/the-man...
Brute force will always be part of the story, but it's not the solution. It just allows us to take an already working solution and make it better.
ggm · 23m ago
It's statistics, linear programming, and shamanism.
jokoon · 1h ago
Finally, an insightful article about AI.
degamad · 55m ago
It was, but it punted in the conclusion...
> Mitchell in her paper compares modern AI to alchemy. It produces dazzling, impressive results but it often lacks a deep, foundational theory of intelligence.
> It’s a powerful metaphor, but I think a more pragmatic conclusion is slightly different. The challenge isn't to abandon our powerful alchemy in search of a pure science of intelligence.
But alchemy was wrong, and chasing after the illusions created by the frauds who promoted it held back the advancement of science for a long time.
We absolutely should have abandoned alchemy as soon as we saw that it didn't work, and moved to figuring out the science of what worked.
chromanoid · 16m ago
Great article!
renewiltord · 1h ago
Everyone always says something won’t work until it does. That’s not that interesting.
warkdarrior · 2h ago
> a fully self-driving car remains stubbornly just over the horizon
Someone should let Waymo, Zoox, Pony.ai, Apollo Go, and even Tesla know!
joshribakoff · 40m ago
I let them know today — when i laid on my horn while passing a Waymo stopped at a green light blocking the left turn lane — with its right blinker on.
Re: Tesla, this company paid me nearly $250,000 under multiple lemon law claims for their “self driving” software issues i identified that affected safety.
We all know what happened with Cruise, which was after i declared myself constructively dismissed.
I think the characterization in the article is fair, “self driving” is not quite there yet.
Cthulhu_ · 28m ago
I need to ask because I'm curious, are you using em-dashes ironically, habitually from the Before Times, or did you run your comment through chatgpt first? Or have I been brainwashed into emdash == AI always?
lovecg · 18m ago
They’re putting spaces around the em-dashes which is—believe it or not—incorrect usage. ChatGPT doesn’t put in spaces. (I’m annoyed by this since I learned about em-dashes long before AI and occasionally use them in writing, which now gets me an occasional AI accusation)
belZaah · 2h ago
They know. There’s a big difference between being able to navigate the 80% of everyday driving situations and handling the 20% that most people manage just fine but cars struggle with. There’s a road in these parts: narrow, twisty in three dimensions, unmarked, trees close to the road. It gets jolly slippery in the winter. I can drive that road in the middle of the night in sleet. Can an autonomous car?
enos_feedler · 1h ago
I think it can figure it out.
forgetfreeman · 46m ago
Yes but why should it?
kortilla · 10m ago
Waymo doesn’t drive on highways and needs huge break-in periods to even expand its boundaries in cities it’s already operating in.
kristjansson · 1h ago
Part of the point of fallacies one and four is that a human can get out of the car and walk into work as a CPA or whatever, while even the autonomous-ish offerings of Waymo et al. don’t necessarily advance the ball in other domains.
another_twist · 2h ago
Someone should let the rest of this pack know. Waymo is in a different league.
I honestly didn't understand the arguments. Could someone TLDR please?