I’m amazed at the number of adults who think LLMs are “alive”.
Let’s be clear: they aren’t. But if you truly believe they are and you still use them, then you’re essentially practicing slavery.
8bitsrule · 3h ago
"We conclude that the LLM has developed an analog form of humanlike cognitive selfhood."
Slack.
I was just using one (the mini model at DDG) that gave a very small value for the mathematical probability of an event, then (in the next reply) gave a probability of 1-in-1 for the same event.
woleium · 40m ago
I know humans who do that.
sonicvrooom · 1h ago
with enough CPU anything linguistic or analog becomes sentient — time is irrelevant ... patience isn't
cognitive dissonance is just neuro-chemical drama and/or theater
and enough "free choice" is made only to piss someone off ... so is "moderation", albeit potentially mostly counter-factual ...
smt88 · 3h ago
I use frontier models every day and cannot fathom how anyone could think they're sentient. They make so many obvious mistakes, and every reply feels like regurgitation rather than rational thought.
NathanKP · 3h ago
I don't believe that models are sentient yet either, but I must say that sentience and rationality are two separate things.
Sentient humans can be deeply irrational. We are often influenced by propaganda, and regurgitate that propaganda in irrational ways. If anything this is a deeply human characteristic of cognition, and testing for this type of cognitive dissonance is exactly what this article is about.
rossant · 3h ago
Now tell me seriously that ChatGPT is not sentient.
/s
SGML_ROCKSTAR · 3h ago
It's not sentient.
It cannot ever be sentient.
Software only ever does what it's told to do.
manucardoen · 2h ago
What is sentience? If you are so certain that ChatGPT cannot ever be sentient you must have a really good definition for that term.
fnordpiglet · 1h ago
The way NNs, and specifically transformers, are evaluated can’t support agency or awareness under any circumstances. We would need something persistent, continuous, and self-reflective of experience, with an internal set of goals and motivations leading to agency. ChatGPT has none of this, and the architecture of modern models doesn’t lend itself to it either.
I would however note that this article is about the cognitive-psychology definition of self, which does not require sentience. It’s a technical point, but important for their results, I assume (the full article is behind a paywall, so I’m sad it was linked at all since all we have is the abstract).
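To make the "nothing persistent, continuous" point concrete, here is a rough sketch of how a chat loop over a stateless model typically works; the FrozenModel class and its generate method are hypothetical stand-ins, not any particular library's API:

    # Sketch only: every reply is recomputed from the visible transcript.
    # The model object holds frozen weights and nothing else between calls.
    class FrozenModel:
        def generate(self, prompt):             # hypothetical stand-in, not a real API
            return "..."                        # placeholder completion

    def chat_turn(model, transcript, user_message):
        transcript = transcript + [("user", user_message)]
        prompt = "\n".join(f"{role}: {text}" for role, text in transcript)
        # The forward pass is a function of (weights, prompt) only. No goals,
        # memories, or experiences persist inside the model between turns; the
        # only "memory" is the transcript the caller chooses to feed back in.
        reply = model.generate(prompt)
        return transcript + [("assistant", reply)]

    history = chat_turn(FrozenModel(), [], "Are you conscious?")
    history = chat_turn(FrozenModel(), history, "Still there?")   # a fresh object behaves identically

Whatever continuity the conversation appears to have lives in the transcript the caller keeps, not inside the model.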
fnordpiglet · 1h ago
I don’t think this is true: software often operates on external stimulus and behaves according to its programming, but in ways that are unanticipated. Neural networks are also learning systems that learn highly non-linear responses to complex inputs, and as a result can behave in ways outside their training - the learned function they represent doesn’t have to coincide with the training data, or even interpolate, depending on how the loss optimization was defined. Nonetheless, this software is not programmed as such - it merely evaluates the network architecture, with its weights and activation functions, given a stimulus. The output is a highly complex interplay of those weights, functions, and input, and it cannot be reasonably intended or reasoned about - you can’t specifically tell it what to do. It’s not even necessarily deterministic, as random seeding plays a role in most architectures.
Whether software can be sentient or not remains to be seen. But we don’t understand what induces or constitutes sentience in general so it seems hard to assert software can’t do it without understanding what “it” even is.
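As a toy illustration of the "merely evaluates the architecture with its weights and activation functions" point: the little numpy program below is nothing but matrix multiplies, a nonlinearity, and a random sample. Its behavior lives entirely in the weights, and the unseeded sampling makes the output non-deterministic. The shapes and numbers are made up for illustration, not a real model:

    import numpy as np

    rng = np.random.default_rng()              # unseeded: runs can differ

    def forward(x, W1, W2):
        h = np.tanh(W1 @ x)                    # fixed architecture: affine map + activation
        logits = W2 @ h
        p = np.exp(logits - logits.max())
        p /= p.sum()                           # softmax over possible outputs
        return rng.choice(len(p), p=p)         # stochastic sampling of the answer

    x = np.array([1.0, -0.5, 0.25])            # a "stimulus"
    W1 = rng.normal(size=(4, 3))               # stand-ins for trained weights
    W2 = rng.normal(size=(5, 4))
    print(forward(x, W1, W2), forward(x, W1, W2))   # same code, possibly different answers

Swap in different weights and the identical code does something entirely different; nothing in the program text specifies what.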
the_third_wave · 3h ago
That is, until either some form of controlled random reasoning - the cognitive equivalent of genetic algorithms - or a controlled form of hallucination is developed or happens to form during model training.
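For what "controlled random reasoning" could look like mechanically, here is a bare-bones genetic-algorithm loop: random mutation supplies the variation, selection supplies the control. The bit-string candidates and count-the-ones fitness are placeholders, nothing to do with how models are actually trained:

    import random

    def fitness(candidate):
        return sum(candidate)                  # placeholder objective: count the 1s

    def mutate(candidate, rate=0.05):
        return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

    def evolve(pop_size=50, length=32, generations=200):
        population = [[random.randint(0, 1) for _ in range(length)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[: pop_size // 2]        # selection: the "control"
            offspring = [mutate(random.choice(survivors))
                         for _ in range(pop_size - len(survivors))]   # the randomness
            population = survivors + offspring
        return max(population, key=fitness)

    print(fitness(evolve()))                   # climbs toward 32 over the generations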