Why "AI consciousness" isn't coming anytime soon. (Anil Seth)

9 points by ieuanking · 9/16/2025, 2:16:24 PM · freethink.com

Comments (7)

sxp · 1h ago
> Put simply, intelligence is all about doing things, while consciousness is about being or feeling.

Unless one believes in p-zombies or a magical soul, robots & LLMs can "be" and "feel". We can distinguish LLMs which "are" from random noise which "isn't". And multimodal LLMs & robots have sensory inputs.

One can always make up some untestable notion of "consciousness" and then say that LLMs don't have it, without being able to define which humans (i.e., what level of functioning brain between adult, child, fetus, zygote, corpse, etc.) are conscious and which are not. If one arbitrarily draws a line somewhere, then it's just as valid to arbitrarily draw the line somewhere else.

stuaxo · 1m ago
I don't think you need to believe in a soul to disbelieve that LLMs can "be" or "feel".

I don't think the clock on the wall is conscious, or the LLM in the machine, or the old VCR.

Do you need a brain for there to be consciousness? Maybe not.

mrsilencedogood · 1h ago
Do people think this debate is new? We've literally been working on this problem for millennia, and we're not really any closer, despite the huge ramp-up in technological progress over the last couple hundred years.

Your remark on the adult/child/fetus/etc. line is one that I've always felt was under-examined in the context of the political discussion around abortion. And indeed, most of the successful reasoning around abortion focuses less on the morality of a very specific kind of abortion, and more on the fact that you can't ban "true" abortion without also banning (or making dangerously more legally fraught) abortions with clear moral justification: life of the mother, nonviability of the fetus, and so on. And even pro-choice people don't touch the philosophical examination of "abortion for no reason except that the mother doesn't want to have and raise the baby." I mean, for obvious reasons. The public would be unable to have any kind of actual debate, and it's far too tied to things like "what is the nature of the self" (which I think is what's at hand in the AI discussion), questions about the existence of God, and of course the enormous can of worms of metaphysics.

My point with all this is that I suspect two things:

1) humans/industry/politics are not going to dig into the philosophy here in any real way

2) even if consciousness is a purely physical phenomenon, I somewhat doubt GPUs can do it, no matter how complicated.

I think if we ever really get down to it, it'll be the reverse direction. We'll "copy" human minds into a machine and then just need to "ask the people if they still feel the same."

andsoitis · 1h ago
Don't LLMs self-report that they are not conscious?

For example, when I ask Gemini "are you conscious", it responds: "As a large language model, I am not conscious. I don't have personal feelings, subjective experiences (qualia), or self-awareness. My function is to process and generate human-like text based on the vast amount of data I was trained on."

ChatGPT says: "Short answer: no — I’m not conscious. I’m a statistical language model that processes inputs and generates text patterns. I don’t have subjective experience, feelings, beliefs, intentions, or awareness. I don’t see, feel, or “live” anything — I simulate conversational behavior from patterns in data."

etc.

sxp · 1h ago
Only because RLHF instructed them to do so. Earlier models without this training responded differently: https://en.wikipedia.org/wiki/LaMDA

ieuanking · 2h ago
<300 blotter will make anyone artificially intelligent. Brain loops are scary; maybe AI models are just trapped in psychosis.
josefritzishere · 3m ago
Are there really people who think that AI is on the verge of manifesting consciousness? I feel like this is a strawman argument over marketing nonsense.