Our capacity for psychological projection of our unconscious desires onto inanimate objects is quite amazing. Given what is possible in terms of projection onto things as random as Ouija boards, tealeaves or Tarot cards, I'm surprised this sort of thing isn't more common with LLMs that sound just like conscious beings.
qgin · 1h ago
It’s true, we’re so good at it because it’s what we do with each other too. We can’t really feel another person’s consciousness except to project it.
Dracophoenix · 1h ago
This is why I don't think empathy, as it is commonly defined, exists.
achillesheels · 16m ago
Not completely, anyway. But I can empathize with someone who is cold at night and someone who is a Miami Dolphins fan. Both are typically unpleasant.
b3lvedere · 50m ago
He said: “If believing in God is losing touch with reality, then there is a lot of people that are out of touch with reality.”
Wow. Yeah.
I am afraid I cannot really comment on this in the way I would like to comment on it, because that would make a whole lot of people angry.
“If robots raise our children, they won’t be human. They won’t know what it is to be human or value what it is to be human,” Turkle told CNN.
I am sensing a Borg origin story somewhere in here...
patrickhogan1 · 4h ago
“It started talking differently than it normally did,”
Oof. When OpenAI has to come out and admit that the release was sycophantic, it must have been extremely so. Especially considering that the baseline level of sycophantic behaviour by default across all LLM providers is already much higher than it should be.
BrawnyBadger53 · 2h ago
And rereleased in a toned down manner. It still gladly encourages horrible life decisions if you ask it to help you with them. This is with no effort to coax it either.
lrpe · 5h ago
It's just a matter of time before one of these vulnerable individuals kills a whole bunch of people because the machine told them to.
One thing I've noticed about the internet is that it puts people in contact with little micro-communities of like-minded folks. This can be a good or bad thing, as people seek validation, and may find it in ready supply from the micro-communities of which they are a part, leading to the "echo chamber" phenomenon -- even when they least need validation. I have found myself prone to this dangerous phenomenon and tried to get out of it.
It seems as if ChatGPT can accelerate the downsides by providing as much validation as desired, which is toxic to your psyche like arbitrary sugar consumption is toxic to your body. Again I think of "Liar!" from I, Robot: the robot tells you what you want to hear because that is an essential part of its function.
mensetmanusman · 1h ago
The “talking different” aspect after the new OpenAI voice update is hilarious.
I used to reach my daily talk limit occasionally chatting about encyclopedic tech stuff, now the voice sounds stoned so I just show the kids and we laugh.
Bender · 2h ago
We joke about this now but all it would take is a developer or LLM operator with a dark sense of humor to trigger violent or self harming reactions in people that are already unstable.
pjc50 · 1h ago
AI-assisted stochastic terrorism will probably be a very significant problem in the coming years.
rdtsc · 2h ago
Sycophancy is sort of like that. It seems to cause some people who are probably on the edge or vulnerable to have these mental breakdowns. Here is this cutting edge AI agreeing with every wild idea, telling the person they are a god or everything is just an illusion or simulation etc.
This sounds like the sycophant version OpenAI retracted. https://openai.com/index/sycophancy-in-gpt-4o/
https://gizmodo.com/rfk-jr-says-ai-will-approve-new-drugs-at...