An argument in favor of AI consciousness (which I believe originates with Dennett) is that we already have examples, in the real world today, of things that are not conscious becoming things that are conscious. Gametes.
Sperm and egg cells, which are almost certainly not conscious, become babies in the right conditions, which are almost certainly conscious.
JumpCrisscross · 2d ago
> An argument in favor of AI consciousness (which I believe originates with Dennett) is that we already have examples, in the real world today, of things that are not conscious becoming things that are conscious. Gametes
This is closer to an argument for the existence of consciousness than anything pertaining to AI.
keiferski · 1d ago
This only makes sense if you zoom in on one part of a process and declare it a wholly unique thing, unconnected to the other predictable stages in that process.
AI might become “conscious” in the future, but you’re assuming that current AI is just one stage in a process - which we have not yet experienced, unlike the other processes mentioned.
K0balt · 1d ago
“Consciousness” is most likely one of those “emergent” things that really are just a continuum. Like an airplane taking off, most “emergent” behaviors are actually continuous processes that eventually become pronounced enough to recognize.
Even though, through the entire takeoff roll, the plane is getting lighter and lighter (and keeps doing so even after it's airborne), until the wheels leave the ground it seems to be just a really awkward, noisy bus.
Animals, down to insects, exhibit signs of self awareness and “conscious” behavior. It’s easy to remain incredulous for a variety of reasons, many of which I personally agree with… but I’d say that there appears to be progress towards something we would recognize as consciousness, and the process appears to be ongoing.
keiferski · 1d ago
Even though I probably agree that consciousness is an emergent quality, to be intellectually honest, we have to admit that it's a bit more complicated than that:
https://plato.stanford.edu/entries/consciousness/
But that also doesn't really address the process critique that my comment was about, which is a different argument form.
DoingIsLearning · 2d ago
> in the real world today, of things that are not conscious becoming things that are conscious. Gametes.
This kind of goes into the whole research field of António Damásio on consciousness.
We can easily assess if a human is conscious from a medical point of view. But do we have a deterministic test to prove an abstract entity is conscious?
ben_w · 2d ago
> We can easily assess if a human is conscious from a medical point of view
Well, we can for *some* of the 40 different definitions of the word.
But there are people who are misdiagnosed as being in a vegetative state who later wake up and report experiences from when they appeared so: https://en.wikipedia.org/wiki/Vegetative_state#Misdiagnoses
Are you conscious when you dream, or is it just a memory replayed by your waking mind? Or only when it's a lucid dream? Or are lucid dreams themselves just the illusion of consciousness?
What about when you sleepwalk?
What about hypnopompic and hypnogogic states?
What about chemically altered consciousness like blind drunk, stoned, tripping on LSD?
JumpCrisscross · 2d ago
> We can easily assess if a human is conscious from a medical point of view
We can’t. We can infer, from what they say and whether they’re ambulatory, that they’re probably conscious. But we often get it wrong. Empirically, we aren’t that far ahead of Descartes.
csomar · 2d ago
Gametes do not become conscious. They turn into a fetus, then a baby, and then 5-6 years later you get consciousness.
OgsyedIE · 2d ago
So, a thing that isn't conscious becomes a thing that is?
csomar · 2d ago
It is not the same thing. By that logic, anything made of Carbon is also conscious.
OgsyedIE · 2d ago
That's not what I'm saying at all.
david-gpu · 2d ago
I'm with you. Perhaps some gametes never become conscious, but still find a way to post on HN.
jqpabc123 · 2d ago
This is the modern version of alchemy.
Without a single example or a shred of scientific evidence, alchemists became convinced it was possible to turn lead into gold with the right chemical reaction. And their "beliefs" persisted for centuries.
AI "believers" today are in exactly the same position.
This is how religion starts. This is more about "beliefs" than science.
ben_w · 2d ago
Other way around.
We started with alchemy on the basis of what little evidence the ancients could observe, and over time their crude ideas gathered enough evidence to become chemistry with 91 natural elements, and then physics to fill in the technetium gap (technetium and promethium being the only elements lighter than uranium with no stable isotopes).
By this point the physicists had replaced the four (western) elements of earth, air, fire, and water mixed in various proportions to create all things, with — coincidentally — four particles that make all that we humans directly experience: neutrons, protons, electrons, and photons, mixed in various proportions to create everything we can see and touch, and yes, allowing us to transmute lead into gold.
And then physics got the Standard Model, and even then we still definitely don't know all of it. But it is science.
We may well still be in the alchemy days for "consciousness". Might be at the "arguing over the periodic table" stage.
akomtu · 1d ago
It's also interesting that our nature is mostly made of four elements (H, C, N, O) and physics stands on four fundamental forces.
david-gpu · 2d ago
That is how every major innovation occurs. Flight was once considered impossible.
I am not saying that this particular effort will succeed, I am saying that revolutions start with once-preposterous ideas.
jqpabc123 · 2d ago
The first record of man-made, heavier-than-air craft (kites) dates back to at least the 5th century BC.
Evidence showing it was possible has existed for a very long time.
The evidence for man made consciousness is ... ??? At this point, we don't even have a good definition of what consciousness is.
Just like alchemists, we're fumbling around with something we don't really understand and expecting some sort of fantastical result.
ben_w · 2d ago
> The evidence for man made consciousness is ... ??? At this point, we don't even have a good definition of what consciousness is.
We have 40 definitions; while I suspect you're right that they are not good enough, some of them may be.
The evidence for machine consciousness is "look at what this does, it's acting in ways we've spent the last 76 years saying machines could act only if they were conscious".
Which is weak evidence given the arguments about what defines consciousness, but it's more than nothing.
To use your alchemy analogy, it might be to actual consciousness as iron pyrite is to gold — shiny, but not the real thing.
jqpabc123 · 1d ago
> it might be to actual consciousness as iron pyrite is to gold — shiny, but not the real thing.
You can program a computer to simulate, display and express all sorts of things.
For example, certain characteristics of self awareness.
But this doesn't make it truly self aware. A huge gap in functionality exists between expression and genuine awareness and understanding. I think AI fans tend to lose sight of this.
An LLM is still just a programmed computer. Much of its programming is derived from statistics and examples instead of explicit logic/code, but it is still programmed and as such is still constrained to playback of some variation of its input.
My favorite example of LLM understanding and "awareness" (or rather the lack thereof): a chatbot accused a pro basketball player of vandalism because it found references to him "throwing bricks" during play. In other words, any real understanding just isn't in the cards yet. Further examples/programming may statistically correct this particular nuance, but many more unexpected cases will likely still remain.
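As a toy illustration of what purely statistical association looks like (my own sketch, not any particular chatbot's internals), a simple co-occurrence counter links "bricks" to both basketball and vandalism equally, because nothing in the counts encodes which sense of the phrase is meant:

```python
from collections import Counter
from itertools import combinations

# Toy corpus mixing the two senses of "throwing bricks".
corpus = [
    "the player kept throwing bricks from three point range",
    "vandals were throwing bricks through the shop window",
]

# Count word co-occurrences within each sentence.
pairs = Counter()
for sentence in corpus:
    words = sorted(set(sentence.split()))
    for a, b in combinations(words, 2):
        pairs[(a, b)] += 1

# "bricks" co-occurs with "player" and with "vandals" equally often;
# the statistics alone give no way to tell the senses apart.
print(pairs[("bricks", "player")])   # 1
print(pairs[("bricks", "vandals")])  # 1
```

Real LLMs are vastly more sophisticated than a co-occurrence table, but the sketch shows why sense disambiguation has to be learned from examples rather than falling out of the counts for free.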
ben_w · 1d ago
> You can program a computer to simulate, display and express all sorts of things.
Indeed. This is also my counter-argument to anyone who is too eager to say any given AI "must" have any given internal state like hope or fear — I was rather hoping my words were sufficient to point to "we don't know" rather than either "yes" or "no".
> But this doesn't make it truly self aware. A huge gap in functionality exists between expression and genuine awareness and understanding. I think AI fans tend to lose sight of this.
"Self awareness" is the easy part, even quite simple software like the C64 OS has that. Just have some of the sensory input point to the innards themselves rather than to the outside world.
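A minimal sketch of that idea (my own illustration, not anything from an actual OS): software whose "sensory input" includes a reading of its own internal state, so the same external input can produce different observations.

```python
# Toy "self-sensing" loop: part of each observation is a reading
# of the system's own innards, not the outside world.
class SelfSensingLoop:
    def __init__(self):
        self.history = []

    def step(self, external_input):
        # The input stream mixes the outside world with a reading
        # of the system's own state (here, how much it remembers).
        internal_reading = len(self.history)
        observation = (external_input, internal_reading)
        self.history.append(observation)
        return observation

loop = SelfSensingLoop()
print(loop.step("ping"))  # ('ping', 0)
print(loop.step("ping"))  # ('ping', 1): same input, different self-state
```

Whether this trivial kind of self-reference counts as "self awareness" is exactly the definitional question at issue; the sketch only shows how cheap the mechanical part is.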
As I say, 40 different definitions of the word "consciousness".
> An LLM is still just a programmed computer. Much of its programming is derived from statistics and examples instead of explicit logic/code, but it is still programmed and as such is still constrained to playback of some variation of its input.
To say an LLM is "programmed" is like saying the knowledge in our brains comes from our genes. The programming is just the architecture.
And while the limitation you give here is definitely correct, it is so because that's the nature of information, and it also applies to humans. When it looks like I'm creative, I'm mixing existing ideas and modifying them a little here or there and seeing if the result works. I can create variations of what I learn and experience, and I can choose where I go and what I experience or allow others to do so for me, but I can't speak a word of the Igbo language* because I've not experienced any. Ask me to invent a new language from scratch, and a linguist will likely say it's a trivial modification of English or German, because I don't know enough linguistics to even be as alien as Chinese or Latin.
Doing things bit by bit also applies to the species as a whole, which is why we didn't jump straight to the scientific method as soon as we had writing — we had to invent each rung on the ladder one meme (in the Dawkins' sense of the word) at a time, within the constraints of our own minds and societies, because too large a step is what led to Galileo being called a heretic and Ignaz Semmelweis' nervous breakdown.
> My favorite example of LLM understanding and "awareness" (or rather the lack thereof): a chatbot accused a pro basketball player of vandalism because it found references to him "throwing bricks" during play. In other words, any real understanding just isn't in there yet.
When I was a very young child, my dad was ill in bed, talking to my mum, and I was also in the room in the way small children often are. He said "the tissue was damaged", so I offered him my handkerchief.
I will quite happily say that LLMs are "stupid", because they need such a ridiculously large number of examples to learn from**, but this is not the same thing as "are they self-aware?" or "are they [insert any other definition of consciousness here]?"
* Except for any loan words that are the same as English or German words, the same way Welsh has "ambiwlans" and "teleffon", but even then I don't know that I know them.
** Caveat: they can be stupid very very quickly, and for some tasks (anything with self-play) that's good enough.
idontwantthis · 1d ago
After gliders were figured out, all we needed was a light and powerful enough engine. There was a clear A->B->C with flight even if some people thought it wasn’t going to happen.
There is no A->B->C for consciousness. No one knows what is missing from the current state.
stormfather · 1d ago
If you had a robot doing all the computations of a LLM on pencil and paper instead of GPUs, would anyone think it was conscious?