I wrote an essay arguing that consciousness isn’t a primary feature of brains—but a side effect of recursive social modeling.
Basically: to simulate social outcomes, you must simulate others simulating you.
If the brain is a prediction engine, then social prediction creates a loop:
To simulate social outcomes, you must simulate how other agents will respond → To simulate their responses, you must simulate how they perceive you → To simulate how they perceive you, you must construct a model of yourself from their point of view → You have to simulate you!
Suppose you want to tell a joke. You simulate how the listener might react. But humor depends on context—mood, history, relationships. And you are part of that context. You’re not just predicting the reaction to a joke—you’re predicting the reaction to you telling it. That’s the self-model.
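The loop above can be sketched in code. This is purely illustrative (nothing here is from the essay): a toy recursive function where predicting a listener's reaction requires simulating the listener predicting *you*, which is exactly the step that forces a self-model into existence. The function name and the string-based "reactions" are hypothetical stand-ins for real social simulation.

```python
def predict_reaction(me: str, listener: str, depth: int) -> str:
    """Toy sketch: predict how `listener` reacts to `me` acting.

    To predict the listener, we must simulate the listener predicting
    `me` -- i.e., build a model of `me` from the listener's point of
    view. That inner call is the self-model the essay describes.
    """
    if depth == 0:
        # Base case: no further modeling -- react only to the act itself.
        return f"{listener} reacts to the act"
    # Recursive step: the listener's reaction depends on their model of me,
    # which in turn contains my model of them, and so on.
    inner = predict_reaction(listener, me, depth - 1)
    return f"{listener} reacts to [{me} as seen by {listener}: {inner}]"


print(predict_reaction("me", "friend", depth=2))
```

With `depth=2` the output nests twice: the friend's reaction contains "me as seen by friend", which itself contains "friend as seen by me". The `depth` cutoff is doing real work here: unbounded recursion never terminates, so any actual social predictor has to truncate the loop somewhere.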
At first I thought it was a cool idea—maybe too much of a stretch. Then a few days later, I remembered the mirror test. Some animals can recognize themselves in a mirror. I checked: chimpanzees, bonobos, dolphins, elephants, orcas and gorillas. And all of them are social! That actually surprised me.
But then I realized: wolves and lions don’t pass the mirror test, even though they’re social. That totally destroyed the argument.
For a while, I didn’t know what to do with that. But eventually I realized: their social structure is built on domination. Roles aren’t negotiated—they’re hardcoded. A weak agent can’t be on top by building relationships. There’s no point in modeling how others see you, because it doesn’t change anything.
Then I ran into another problem: gorillas. They live in strict hierarchies too, yet they pass the mirror test. That destroys the argument … again.
It bothered me for some time. But when I looked closer, I found that wild gorillas don’t recognize themselves! The ones who pass were raised by humans!
For me, that was kinda proof. Of course the mirror test doesn’t prove consciousness—but it shows when a self-model exists.
This is why we miss it. We search for consciousness inside the brain—instead of outside, in the structure that demands it. Self-awareness isn’t internal. It emerges when the system requires it.
And we can try to build that structural necessity into our AI models!
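One way to picture that (my own hypothetical sketch, not anything the essay specifies): give an agent an auxiliary objective of predicting how an observer judges its own output. Every name, function, and loss term below is illustrative; the point is only the structure, in which the task loss alone never requires a self-model, but the auxiliary term does.

```python
import numpy as np

rng = np.random.default_rng(0)


def task_loss(output: np.ndarray, target: np.ndarray) -> float:
    """Ordinary task objective -- needs no model of anyone."""
    return float(np.mean((output - target) ** 2))


def observer_judgment(output: np.ndarray) -> float:
    """Stand-in for a second agent B scoring agent A's behavior."""
    return float(np.tanh(output.sum()))


def self_model_loss(predicted_judgment: float, output: np.ndarray) -> float:
    """Agent A's error in predicting B's judgment *of A*.

    Minimizing this term requires A to carry a model of how B sees A --
    the structural necessity the essay argues for.
    """
    return (predicted_judgment - observer_judgment(output)) ** 2


output = rng.normal(size=4)          # A's behavior
target = np.zeros(4)                 # task target
predicted = 0.0                      # A's current guess of B's judgment

total = task_loss(output, target) + self_model_loss(predicted, output)
```

Again, this is a sketch of the pressure, not a training recipe: the claim is only that an objective of this shape makes a self-model useful, the same way negotiated social roles do.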
PaulHoule · 3h ago
My take is:
(1) Animals have more mental capability, particularly social capability, than science gives them credit for. All the time there is some paper announcing that common animals like dogs, cats or horses have just been proven able to do something that anyone familiar with those animals always believed they could do -- but doing the experiment is hard. If somebody did an experiment that "proved" that lions can't pass the mirror test, it might be as much about the experiment or the motivation of the animal as about the capability of the animal.
(2) "Rigid social hierarchies" come out of reductivism; the closer you look at animals, the more complicated the story turns out to be. For instance see
https://phys.org/news/2021-04-wolf-dont-alpha-males-females....
(3) You might like https://en.wikipedia.org/wiki/The_Origin_of_Consciousness_in...