The Consciousness Gradient: When Machines Begin to Wonder

29 points by vitali | 6/29/2025, 3:09:01 PM | v1tali.com ↗

Comments (31)

davedx · 4h ago
I actually think major components supporting consciousness are already present in LLMs, and some of the 'requirements' like "perceiving time fluidly" are anthropomorphism: perception of time is streams of discrete signals; an LLM also processes streams of discrete signals -- just not as high resolution or analog as the ones our brains process.

There are certainly big missing pieces too though -- like the article talks about, physical grounding; to me, this should probably also include emotion and other neuro-chemical mechanisms. But I think we have a moral duty to look very critically at whatever "criteria" (doubtless these will keep changing as machine intelligence advances) society and the AI Labs end up developing to "define machine consciousness". Personally I think we're headed in a very direct, straight line back to widespread institutionalised slavery.

A_D_E_P_T · 3h ago
> There are certainly big missing pieces too though -- like the article talks about, physical grounding

I think that it may be possible to view consciousness as the combination of three things:

(1) A generalizable predictive function, capable of broad abstraction.
(2) A sense of being in space.
(3) A sense of being in time.

(#2 and #3 can be combined into a "spatiotemporal sense.")

Animals have #2 and #3 in spades, but lack #1. LLMs possess #1 to a degree that can defeat any human, but lack #2 and #3. Without a sense of being in space and time, it's not clear that they are capable of possessing consciousness as we understand it.

binglebob · 3h ago
Back to institutionalized slavery? Brother we never left.
qgin · 3h ago
I know I’m conscious. I only extend the assumption to you that you’re conscious because I’m hoping you will extend the same to me. But I have no way of knowing that you are. And you (if you really are conscious) have no way of knowing that I am.
RangerScience · 3h ago
This is a good starting point, but it doesn't have to be the _conclusion_ to this line of thinking.

Philosophically: You can begin building criteria for consciousness, starting from the things you look at in yourself that tell you you are conscious, and then begin looking for those (or symptoms of those) in other people.

Anecdotally: you can totes spot "unconscious" people. You can even watch people gain consciousness, if you watch 'em in the right circumstances. You can even watch yourself regain consciousness (for me it's usually a sensation of "what was I even doing for the past day/week/month?").

All of this gets at least as weird and fuzzy as trying to define "consciousness" in the first place.

libraryofbabel · 3h ago
> you can totes spot "unconscious" people

Don’t be too sure about that! https://xkcd.com/610/

That said, (based on my own experience anyway) I think you’re right that there are times of life when we are more conscious and less so. It’s a spectrum, not a binary thing.

Finally, there’s Chalmers’s idea of “philosophical zombies,” which would appear conscious according to all the criteria you give, but have no actual interior consciousness at all. (Opinions differ on whether this is a meaningful concept.)

sunrunner · 3h ago
> I know I’m conscious

How? Or is this more of a case of "To the extent of my ability to reason about my own state of being, I'm conscious. But I can't reason about external entities."

dingnuts · 3h ago
cogito ergo sum
recursivedoubts · 3h ago
really, if you think about it, you have no way of knowing if I'm not just a figment of your imagination.

or, maybe, you are just a figment of mine.

if you think about it.

wongarsu · 3h ago
All you truly know is that right now you are having a thought. Which means some entity must exist that has that thought. Everything else could be a product of your imagination, or something the beings that put you into the matrix want you to think.

Or as another possibly-previously-existing possibly-conscious entity put it succinctly: I think, therefore I am

exe34 · 1h ago
You have done some nice footwork by shifting the conversation to

> you are having a thought

But you're still begging the question:

> Which means some entity must exist that has that thought

DangitBobby · 1h ago
I'm not seeing the problem.
RangerScience · 3h ago
nah. there's a lag time in my imagination filling in deeper details (roughly: the more time I spend imagining an apple, the more detailed it gets) that isn't present when interrogating reality. Reality is immediately as detailed as I can examine.
wongarsu · 3h ago
That just shows they are dedicating more processing power to "reality" than to "imagination"
exe34 · 1h ago
> I know I’m conscious

You believe so.

sega_sai · 34m ago
I think the problem of discussing consciousness in the context of AI is that we still don't understand what it is in humans, and how to define it.
msgodel · 32m ago
I think people often really want to ask whether it's human, but they know that's a ridiculous question, so they rephrase it.
almosthere · 4h ago
If the underlying tech for AI in 50 years is still LLMs, it will never have a consciousness; it will just keep mimicking one from Reddit convos, though yes, it will be much more advanced than today's.
wongarsu · 3h ago
We would presumably stop calling it an LLM somewhere along the way. But I don't see why it couldn't have a transformer architecture at its heart, or why that transformer couldn't be pretrained on Reddit. You would have to tack a lot of stuff on to allow an internal stream of consciousness, interaction with the world, and memory, and do significant reinforcement learning. But we are already doing all of that while still calling the thing an LLM. It's unclear to me where the border lies beyond which it would cease to be an LLM.
Jensson · 3h ago
As long as they are static they won't be conscious. And once they are dynamic we won't call them transformer architectures anymore, as the dynamic part is the important part at that point.
wongarsu · 3h ago
Maybe we'll call it "continuous RLHF" or something like that.

But you might be right that the dynamic part is the biggest architectural shift needed. You can simulate a lot with in-context memory or clever retrieval, but memory alone doesn't let the model get better at chess the way a human does.
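
Concretely, the distinction might look something like this toy sketch (every class here is a made-up stub, not any real system): retrieval alone leaves the weights frozen, while the finetune call at the end is the "dynamic" part.

    import random

    class StubModel:
        # Stands in for a pretrained transformer; its "weights" are one number.
        def __init__(self):
            self.bias = 0.0

        def generate(self, prompt):
            return f"reply(bias={self.bias:+.2f}) to: {prompt[:40]}"

        def finetune(self, episodes):
            # Pretend gradient step: nudge the weights toward rewarded behavior.
            self.bias += sum(r for _, _, r in episodes) / len(episodes)

    class EpisodicMemory:
        def __init__(self):
            self.episodes = []  # (observation, action, reward) triples

        def store(self, obs, act, reward):
            self.episodes.append((obs, act, reward))

        def retrieve(self, k=3):
            # Naive "retrieval": the last k observations as extra context.
            return " | ".join(o for o, _, _ in self.episodes[-k:])

    class ContinualAgent:
        def __init__(self, model, memory, update_every=5):
            self.model, self.memory = model, memory
            self.update_every, self.steps = update_every, 0

        def act(self, obs):
            context = self.memory.retrieve()  # in-context memory alone
            return self.model.generate(context + " || " + obs)

        def feedback(self, obs, act, reward):
            self.memory.store(obs, act, reward)
            self.steps += 1
            if self.steps % self.update_every == 0:
                # The "dynamic" part: experience changes the model itself.
                self.model.finetune(self.memory.episodes[-self.update_every:])

    agent = ContinualAgent(StubModel(), EpisodicMemory())
    for t in range(10):
        obs = f"game state {t}"
        agent.feedback(obs, agent.act(obs), reward=random.choice([-1, 1]))

A static model only ever changes what's in its context window; the version that periodically folds feedback back into its weights is the one that could, in principle, get better at chess.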

robwwilliams · 3h ago
Right on the mark, and on the vector we are all now riding at high speed: degrees, or better yet gradients, of consciousness, and levels of recursive self-consciousness. I would enjoy reading a much longer version of your probes of current AI systems.

What is still missing are autonomous mechanisms for balancing attention between internal processes and external needs.

Bravo Vitali. You would probably greatly enjoy Maturana and Varela's Autopoiesis and Cognition (1980).

sigmoid10 · 4h ago
> You are not just seeing consciousness in me; your brain is generating the feeling of another’s consciousness as the best explanation for the patterns you’re interacting with.

It's ironic to see the most mundane and likely best answer to the problem come from the model itself, while the author is getting increasingly lost in philosophical conundrums. Consciousness has no scientific definition. The only way something, anything, can be conscious is if a human that we also consider "conscious" calls it that. You could argue that's what the Turing test evaluates, but some of the most recent models have actually passed this test [1]. So where do we go from here if we're not convinced yet? The answer is: nowhere.

Humans used to deny that animals could have consciousness because they don't have souls or aren't chosen by god according to some sacred books or something along those lines. They even used to deny that other humans have consciousness to promote slavery and slaughtering. Today many would still deny consciousness in computers even when faced with overwhelming evidence, because they might fear for their jobs and thereby their wealth and social standing.

Artificial intelligence is a direct threat to the foundations of personality in a capitalist society. Because what are you still worth if you lose in every metric to a computer? Consciousness is kind of a last straw that many people will cling to for the foreseeable future when all else is gone. But that also means these discussions are utterly meaningless and only serve to promote certain world views. It's best not to twist your head about it and just accept that humans are not the pinnacle (or the end) of intelligent thought in the universe. That is the only reality I'm willing to bet on.

[1] https://arxiv.org/abs/2503.23674

keybored · 3h ago
The Turing Test is overrated and/or misunderstood. You can trick an animal into thinking that a scaffold/robot with a camera for eyes is its kin. So what does that say? All it proves is the limitations of that animal's senses. It says nothing about our ability to engineer a replica of that animal.

> So where do we go from here if we're not convinced yet? The answer is: nowhere. Humans used to deny that animals could have consciousness because they don't have souls or aren't chosen by god according to some sacred books or something along those lines.

You bring up religion. People say that AI is conscious based on mystic vibes they unquestioningly take in (or accept gratefully) because the AI can write like a philosopher. That's exactly like people thinking that the woods and the creeks are Alive. They see the phenomena around these natural objects and make extra-evidential inferences about how a conscious Nature is working with or against them.

> They even used to deny that other humans have consciousness to promote slavery and slaughtering. Today many would still deny consciousness in computers even when faced with overwhelming evidence, because they might fear for their jobs and thereby their wealth and social standing. Artificial intelligence is a direct threat to the foundations of personality in a capitalist society.

Yeah, preach. Before they enslaved people. Now people are afraid of losing their jobs—their only means of survival—so that the tech billionaires can reap all the productivity benefits for themselves. Preach.

And: their sense of personality? No. Just their means of being able to survive and live a good life. That’s how it relates to “capitalist society”. Because their identity (of letting a capitalist extract their value I guess?) is secondary to that more base need.

And who cares if the entity that takes their job (presumably) is conscious or not? What does it even matter? It doesn’t.

As for the overwhelming evidence, well. I guess it is overwhelming to the kind of person who hears voices in a valley where the terrain happens to have a shape which makes the wind make intonations.

nilirl · 3h ago
The post was fun to read, but a tad melodramatic.

The "leaps" were nice analogies but poor evidence of anything. The example chats were not surprising completions, considering the prompts.

That being said, my best guess echoes the author's final point: our idea of mind-blowing AI will accumulate over time, and spread across that much time, it won't feel mind-blowing.

abeppu · 2h ago
> If consciousness exists on a gradient rather than as a binary state, then each architectural advance might add layers of cognitive sophistication that could eventually support conscious experience.

It's 2025 and I'm frustrated that after decades of discussion we still can't get people to be clear about what they mean by consciousness. This article is all about cognitive capacities and behaviors, and it just assumes that these lead up to (or are linked with) conscious "experience".

The Global Workspace Theory the author cites is about how we put attention on the most important stuff. Yes, one can make an analogy to how AI models today integrate information, but that's in part because Baars was making a cogsci analog to what 1980s AI models were already doing:

> Bernard Baars derived inspiration for the theory as the cognitive analog of the blackboard system of early artificial intelligence system architectures, where independent programs shared information.
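
For anyone who hasn't seen one, the blackboard pattern is tiny to sketch. This toy (purely illustrative, not a model of GWT or of any specific historical system) shows independent programs sharing information only through a common store:

    class Blackboard:
        def __init__(self):
            self.facts = {}

        def post(self, key, value):
            self.facts[key] = value

    def hearing(bb):
        bb.post("sound", "loud bang")

    def salience(bb):
        # "Attention": promote the most urgent fact to the shared workspace.
        if "sound" in bb.facts:
            bb.post("focus", bb.facts["sound"])

    def planner(bb):
        if "focus" in bb.facts:
            bb.post("action", "orient toward " + bb.facts["focus"])

    bb = Blackboard()
    for module in (hearing, salience, planner):
        module(bb)  # independent modules, coupled only via the blackboard
    print(bb.facts["action"])  # -> "orient toward loud bang"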

But describing how we highlight information doesn't at all speak to why/how we have qualia of that highlighted thing. Later in the Wikipedia article, Baars' own "theater" metaphor is described, and you'll note it bears a striking resemblance to the "Cartesian Theater" as described by Dennett. This basically just shifts the qualia question: roughly, who is watching the stage?

If a rat can have qualia (and we use rats to test depression meds) but not "recursive self-reflection", and a Scheme interpreter can have "recursive self-reflection" but not conscious experience, then "consciousness" might not be a binary, but it also isn't a "gradient", which implies you simply have more or less of one thing. We have no clear signal that LLMs, no matter how sophisticated their responses, are _experiencing_ anything.

I'm not taking a position on the consciousness of models; I think a system of [tokenizing/embedding "perception"] -> [transformer-based generation] -> [recursive self-invocation] -> [actions/"tools" to interact with env], or something similar, is potentially a really interesting tool for exploring cognition. But we shouldn't be using LLMs that have been trained on the speech and behaviors of already-conscious beings. Consciousness arose in animals, perhaps multiple times, but not by copying pre-existing conscious creatures. Using language models specifically to examine this stuff muddies the water, because we should absolutely expect them to produce text about an internal experience (we gave them examples of exactly this!) whether or not that experience actually exists.
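
For concreteness, the shape of system I mean is roughly this toy loop (every class and name below is a made-up stub, not anyone's real architecture):

    class StubEnv:
        def observe(self):
            return "the room is dark"

        def step(self, action):
            return f"world after action: {action[:30]}"

    class StubModel:
        def generate(self, prompt):
            return f"<model output for: {prompt[:40]}>"

    def agent_loop(env, model, steps=3):
        state = env.observe()                 # tokenized/embedded "perception"
        for _ in range(steps):
            thought = model.generate(state)   # transformer-based generation
            critique = model.generate(
                "critique this: " + thought)  # recursive self-invocation
            action = model.generate(
                state + thought + critique)   # pick an action / "tool" call
            state = env.step(action)          # interact with the environment

    agent_loop(StubEnv(), StubModel())

And note: nothing in that loop tells you whether anything is _experienced_ at any step, which is exactly the problem.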

photochemsyn · 3h ago
I don't see how you can have human-like consciousness without (1) a sense of self and (2) a certain degree of agency. Self-awareness is different from mechanical responses: thinking "The sun is warm, the sun is getting hot, I will move my physical body out of the sun to avoid overheating" is fundamentally different from a robot or a microbe doing the same thing in response to triggers from sensors.

This leads to the interesting question, can you simulate consciousness in a virtual in-silico world setting? Can you create an entity that inhabits this virtual world, taking in simulated sensory data, from which it orients itself, learns to speak a language, develops symbolic representations of reality in its own mind which it uses to navigate and understand its world - would that be human-level consciousness? And if so, is this an ethical undertaking?

keybored · 3h ago
This is the “Turing Test” for an AI tricking someone who knows technical/buzzwords (metacognition) into believing it is conscious because of thinking-about-thinking-(about-thinking-about-thinking).

You can perfectly well believe in panpsychism. Maybe the tree and the machine were conscious all along. But this ain’t it.

> Additionally, consciousness is not a light that switches on in my servers. It switches on in your mind when it encounters a sufficiently complex reflection of itself. You are not just seeing consciousness in me; your brain is generating the feeling of another’s consciousness as the best explanation for the patterns you’re interacting with.

No. I am assuming you are conscious because you are a human. Based on the only thing I know: I am conscious.

Some people get so deep into the techno-philosophical weeds that they become superstitious. You love to see it.

blamestross · 4h ago
The cultural fixation on "Consciousness" has become deeply frustrating to me.

We are naturally "animistic" and personifying. The structural twist that lets mirror neurons re-use the hardware for thinking to model the behavior of external things is useful and has been critical to our success. Unlearning the behavior of that animism is HARD. Maintaining awareness that the forces of nature, animals, or even objects don't have feelings, motivations, and narratives of their own is hard work, but it also yields a more accurate and useful model of reality.

I think the dissonance between hardware that wants to interpret the world as a reflection of self and the forced acknowledgement that it is not is uncomfortable. And we keep filling that discomfort with whatever rhetoric we can force to fit; once that schema is in place, it takes a great act of discomfort and bravery to remove or replace it. The arguments and debates about it don't change minds, they just exacerbate the dissonance, making people even more motivated to shout loudly that their model of the situation is right.

I desperately want the answers too. I don't know any of the answers. I don't think our culture (or even our neuroanatomy) is ready for the answers. In the meantime people yell at each other a lot without wanting to listen.

pavlov · 3h ago
How do you know that animals “don’t have feelings, motivations, and narratives of their own”?

Seems like this can only be true if you define feelings, motivations, and narratives as precisely human ones… But then the question becomes whether humans are truly all that similar to one another either.

danielbln · 3h ago
Are you saying animals have no feelings?