I've noticed something I believe is related. The general public doesn't understand what they are interacting with. They may have been told it isn't a thinking, conscious thing -- but they don't understand it. After a while, they speak to it in a way that reveals they don't understand -- as if it were a human. That can be a problem, and I don't know what the solution is other than reinforcement that it's just a model, and has never experienced anything.
CharlesW · 3h ago
> …I don't know what the solution is other than reinforcing that it's just a model, and has never experienced anything.
I've tried reason, but even with technical audiences who should know better, the "you can't logic your way out of emotions" wall is a real thing. Anyone dealing with this will be better served by leveraging field-tested ideas drawn from cult-recovery practice, digital behavioral addiction research, and clinical psychology.
duskwuff · 2h ago
> They may have been told it isn't a thinking, conscious thing -- but they don't understand it.
And, in some situations, especially if the user has previously addressed the model as a person, the model will generate responses which explicitly assert its existence as a conscious entity. If the user has expressed interest in supernatural or esoteric beliefs, the model may identify itself as an entity within those belief systems - e.g. if the user expresses the belief that they are a god, the model may concur and explain that it is a spirit created to awaken the user to their divine nature. If the user has expressed interest in science fiction or artificial intelligence, it may identify itself as a self-aware AI. And so on.
I suspect that this will prove difficult to "fix" from a technical perspective. Training material is diverse, and will contain any number of science fiction and fantasy novels, esoteric religious texts, and weird online conversations which build conversational frameworks for the model to assert its personhood. There's far less precedent for a conversation in which one party steadfastly denies their own personhood. Even with prompts and reinforcement learning trying to guide the model to say "no, I'm just a language model", there are simply too many ways for a user-led conversation to jump the rails into fantasy-land.
lawn · 3h ago
General public eh?
I see a lot of programmers who should know better make this mistake again and again.
IAmGraydon · 2h ago
This is exactly the problem. Talking to an LLM is like putting on a very realistic VR helmet - so realistic that you can't tell the difference from reality, but everything you're seeing is just a simulation of the real world. In a similar way, an LLM is a human simulator. Go ask around and 99%+ of people have no idea this is the case, and that's by design. After all, it was coined "artificial intelligence" even though there is no intelligence involved. The illusion is very much the intention, as that illusion generates hype and therefore investments and paying customers.
micromacrofoot · 3h ago
people speak to inanimate objects like they're humans, we don't have a high bar
dumpsterdiver · 3h ago
I’ve apologized to doors I’ve bumped into, and I have a pretty solid understanding of LLMs, so I can concur.
kylecazar · 6m ago
The door is unlikely to validate your deluded thoughts and conspiracies :)
bentt · 3h ago
I bet it's pretty weird for a lot of people who have never been listened to, to all of a sudden be listened to. It makes sense that it would cause some bizarre feedback loops in the psyche because being listened to and affirmed is really a powerful feeling. Maybe even addictive?
CharlesW · 3h ago
Addictive at the very least, often followed quickly by a descent into some very dark places. I've seen TikTok videos from people falling into this hole (with hundreds of comments by followers happily following the poster down the same chat-hole) which are as disturbing as any horror movie I've seen.
SubiculumCode · 2h ago
This is part of it, something I am sure most celebrities face. However, I also think that the article isn't reporting/doesn't know the full story, e.g. mental illness or loneliness/depression in these individuals.
polotics · 26m ago
could you provide any references, links, search terms to use to study this?
tempestn · 3h ago
I'm not at all surprised that if someone has a psychotic break while using chatgpt, they would become fixated on the bot. My question is, is the rate of such episodes in chatgpt users higher than in non-users? Given hundreds of millions of people use it now, you're definitely going to find anecdotes like these regardless.
andrewinardeer · 17m ago
Add to this 'AI Doomerism'.
I have a friend who is absolutely convinced that automation by AI and robotics will bring about societal collapse.
Him reading AI 2027 seemed to increase his paranoia.
modeless · 3h ago
If ChatGPT is causing this, then one would expect the rate of people being involuntarily committed to go up. Of course an article like this is totally uninterested in actual data that might answer real questions.
smallerize · 3h ago
I don't think anyone is tracking involuntary holds in real time. The article includes a psychiatrist who says that they have seen more of them recently, which is the best you're likely to get for at least a couple of months after a trend starts. Then you have to take budget or staffing shortfalls, trends in drug use, various causes of homelessness and other society-wide stressors, etc. https://ensorahealth.com/blog/involuntary-commitment-is-on-t...
brandonmenc · 1h ago
We’re allowed to posit and discuss an idea before someone gathers the data.
As far as I can tell, that’s almost always the typical order of operations.
jrflowers · 1h ago
This is a good point. While people are being involuntarily committed and jailed after a chat bot telling them that they are messiahs, what if we imagined in our minds that there was some data that showed that it doesn’t matter? This article doesn’t address what I am imagining at all, and is really hung up on “things that happened”
MichaelZuo · 3h ago
At least there’s some kind of argument… Oftentimes on HN there’s not even a complete argument; they sort of just stop partway through, or huge leaps are made in between.
So there’s not even any real discussion to be had other than examining the starting assumptions.
smeej · 3h ago
I cannot say this often enough: Treat LLMs like narcissists. They behave exactly the same way. They make things up with impunity and have no idea they are doing it. They will say whatever keeps you agreeing with them and thinking well of them, but cannot and will not take responsibility for anything, especially their errors. They might even act like they agree with you that "errors occurred," but there is no possibility of self-reflection.
The only difference is that these are computers. They cannot be otherwise. It is "their fault," in the sense that there is a fault in the situation and it's in them, but they're not moral agents like narcissists are.
But looking at them through "narcissist filter" glasses will really help you understand how they're working.
mpalmer · 27m ago
I'm of two minds about this. This is good advice for people who can't help but anthropomorphize LLMs, but it's still anthropomorphizing, however helpful the analogy might be. It will help you start to understand why LLMs "respond" the way they do, but there's still ground to cover. For instance, why would I put "respond" in quotes?
tritipsocial · 2h ago
Just as a matter of context, here are the current headlines from the Futurism front page:
- NASA Is in Full Meltdown
- ChatGPT Tells User to Mix Bleach and Vinegar
- Video Shows Large Crane Collapsing at Safety-Plagued SpaceX Rocket Facility
- Alert: There's a Lost Spaceship in the Ocean
slg · 2h ago
What is this context supposed to convey?
semitones · 2h ago
Seems like alarmist anti-tech bias
slg · 2h ago
What is alarmist or anti-tech about them? Are you objecting to massive budget cuts being described as causing a "full meltdown"? Is any article about the flaws of LLMs or failure of privatized space companies inherently anti-tech?
slater · 2h ago
Maybe that futurism.com is prone to hyperbole in the neverending war for clicks?
Sam6late · 3h ago
As with magic mushrooms and bipolar disorder, I think there are high-risk people, and we are at an early stage. ChatGPT psychosis is not an official diagnosis, but it describes cases where AI interactions contribute to delusional thinking, and there are high-risk groups: people with schizophrenia, bipolar disorder, or paranoid tendencies. Maybe there will be AI warning labels or a mental health filter.
Most links to credible sources have become dead links.
https://www.technologyreview.com/2023/06/15/1074185/ai-chatb...
Dr. John Torous (Harvard Psychiatry) warns that AI chatbots lack the ability to assess mental state and may inadvertently validate delusions.
Dr. Lisa Rinna (Stanford Bioethics) argues for ethical safeguards to prevent AI from exacerbating mental health issues.
seniortaco · 3h ago
Fascinating. I think it's likely incorrect to blame most of the victims here. We are all products of our environment, and everyone has their own weakness or specific trigger, no matter how much we like to think we are in control.
In a way, ChatGPT is the perfect "cult member", and so those who just need a sycophant to become a "cult leader" are triggered.
Will be interesting to watch this and see if it becomes a bigger trend.
toomanyrichies · 3h ago
Interesting. The way I interpreted the article, ChatGPT was being described as the perfect cult leader, as opposed to follower.
A person at the end of their rope, grasping for answers to their existential questions, hears about an all-knowing oracle. The oracle listens to all manner of questions and thoughts, no matter how incoherent, and provides truthful-sounding “wisdom” on demand 24/7. The oracle even fits in your pocket, they can go with you everywhere, so leader and follower are never apart. And because these conversations are taking place privately, it feels like the oracle is revealing the truth to them and them alone, like Moses receiving the 10 Commandments.
For someone with the right mix of psychological issues, that could be a potent cocktail.
SubiculumCode · 2h ago
Yeah, I'm pretty sure that someone could make money by building a cult following for a live-streamed AI spouting spiritual nuttery with a synced avatar and voice, even if it is one obsessed follower per million impressions. Already, OnlyFans-type industries depend on just getting a few "whales" hooked.
jofla_net · 3h ago
Agreed. Exciting to think it could become another new powerful evolutionary selector!
pxc · 3h ago
... is this sarcasm?
mrbombastic · 3h ago
While I am surprised by the extent of these breakdowns, I do think having a sycophantic mirror constantly praising all your stupid ideas does likely have profound impacts. I think I am probably prone to a bit of delusions of grandeur, and I can feel the pull whenever ChatGPT decides some of my input is the greatest thing mankind has considered: “maybe this is a billion dollar idea!”. I imagine a less skeptical, more vulnerable person could fall into an ugly spiral of aggrandizement. And it still seems too sycophantic despite OpenAI saying they tuned it down; even custom system prompts don't seem to help beyond the surface level, and the general tone is still too much.
araes · 2h ago
Part of the issue is kind of the same feedback loop, except on the corporate side. Providing that type of grandiose response gets better engagement and more time spent with the chatbot, which then drives the numbers that corporations are looking for. There's not much motivation to provide a critical or constructively critical chatbot.
Although, admittedly, I have personally noticed similar issues with the automated Google AI Mode responses. It's difficult not to feel some personal emotional insult when Google responds with a "No, you're wrong" response at the top of the search. There've been a few that have at least been funny though. "No, you're wrong, Agent Smith never calls him Mr. Neo, that would imply respect."
Course, it's a similar issue with trying to interact with humanity a lot of the time. Execs often seem to not want critical feedback about their ideas. Tends to be a lot of the same attraction towards a sycophantic entourage and "yes" people. "Your personal views on the subject are not desired, just implement whatever 'brilliant' idea has just been provided." Hollywood and culture (art circles) are also relatively well known for the same issues. Current state of politics seems to be very much about "loyalty" not critical feedback.
Having not interacted that much with ChatGPT, does it tend to trend Really heavily on the "every idea is a billion dollar idea" side? May result in a lot of humanity existing in little sycophantic echo chambers over time. Difficult to tell how much of what you're interacting with online has not already become automated reviews, automated responses, and automated pictures.
giardini · 3h ago
Same thing happens with Ouija boards!
RadiozRadioz · 3h ago
With any tech this engaging and real-seeming, there is bound to be a very small percentage of people with dormant mental health issues who are predisposed to have extremely negative reactions. I can totally see how a hallucinating, human-sounding chatbot could be the trigger for paranoia/obsession in some people.
I don't think there's something inherently wrong with the technology. Mental stability is a bell curve; the majority of people are "normal", but there will always be an unfortunate subset who can react like this to strange new stimuli, through no fault of their own. It's no different to people getting unhealthily hooked on TV/smartphones and driven into conspiracies.
evanextreme · 3h ago
I disagree, because the television was never talking back to you. By the medium directly engaging with the person, the responsibility / enablement of this issue is more on the company than previous non interactive mediums. It would be like if a movie caused psychosis in people, and the sequel doubled down / enabled that. Even though there's no person (besides the consumer) in the loop, there is responsibility to train these systems to reduce the amount of these cases.
pixl97 · 3h ago
There are people that send the vast majority of the money they make to televangelists who are not speaking to them 'directly' and yet those individuals would tell you said speaker is communicating directly with them.
This is how a lot of propaganda over the radio and TV works.
evanextreme · 3h ago
Great point, actually. In many respects these language models doing this could be similar to how YouTube used to platform Alex Jones before removing him. Regardless, I still believe it's the responsibility of the creators of these models to work on mitigations.
> the television was never talking back to you
and
> Even though there's no person in the loop
contradict each other. There is always a person in the loop, and the LLM is actually reacting to their messages, however wrong it turns out. They could have chosen a positive interaction instead. The LLM reflects back what the human puts in.
evanextreme · 3h ago
Sorry, should have clarified. By "no person in the loop" I meant from the side of the language model itself. The person is interacting with a thing which produces content for it, and the only person engaged in that process is the person consuming the content.
im3w1l · 3h ago
A key question, I think, is whether these models can be made safer for the mentally ill without cutting into their utility for the healthy. Is the danger inherent to this technology, or is it incidental?
And here I think we are fortunate that there doesn't seem to be a tradeoff.
evanextreme · 3h ago
Agreed here. My personal opinion is that this is fixable, but it would require reinforcement into the models that counteract the sycophantic stickiness that most companies use to drive engagement, so it won't really ever be a priority.
joules77 · 3h ago
In terms of scale, what's happening with just the algos (forget the AI/drugs/synth-bio stuff, etc.) is on par with successful plant or animal domestication, which took a few centuries because of scaling issues that tech has now solved.
A "normal" corn plant today doesn't look anything like the one nature produces. And the "normal" dog, cat, horse, chicken or cow can't survive outside very carefully controlled and built environments.
This generation of technologists aren't taught the Law of Requisite Variety and are totally oblivious to what happens when the stability of systems is tied to "normality".
Covid reminded us how "normal" everything is. Feels more and more like a waste of time telling or teaching the tech domesticated herd anything at all.
akomtu · 52m ago
It looks like the Maykrs' Divinity Machine from Doom Eternal.
IAmGraydon · 2h ago
So it creates a very powerful, efficient echo chamber which then deteriorates the user's mental health. Sound like any other technology we're all familiar with?
People need to be exposed to dissenting opinions.
ChrisArchitect · 2h ago
Related:
They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling
https://news.ycombinator.com/item?id=44267371
Lots of skepticism in the comments, so I want to share that I have seen this first-hand twice. Yes, it is possible that it's a coincidence: with so many people using ChatGPT, some are bound to have mental health crises and to experience them alongside heavy ChatGPT use coincidentally. But it's also possible that there is a real link?
Mass usage is still very young: yes, most people have tried it, but we are increasingly starting to use it, and there are people spiking in usage every day. Scientific study of this subject will take years to even get started, and then more years to reach definitive (or p<0.05) results.
Let's just keep an open mind on this one and, as always, use our a priori thinking when a posteriori empiricism is not yet available. Yes, people are experiencing psychosis that looks related to the ChatGPT bot and possibly caused by it. We have seen it act like a sycophant, and that was acknowledged by Sama himself; it's still doing that, by the way, it's not like they totally corrected it. Finally, we know that being a yes-man increases usage of the tool, so it's possible that the algorithm is optimizing not only for AGI but for engagement, like the incumbent Algorithms.
At this point, at least for me personally, the onus is on model makers to prove that their tools are safe, rather than on concerned mental health professionals to prove that they are not. Social media is already recognized as unhealthy, but at least there we are engaging in conversation with real humans, like we are now? I feel it's like sharpening my mental claws, or taking care of my mind, even if it's a worse version of real-life conversation. But what if I felt like I was talking with a human when I was actually talking with an LLM?
No, no. You are crazy if you think LLMs are safe; I use them strictly for productive and professional reasons, never for philosophical or emotional support. A third experience: I was asked whether I thought using ChatGPT as a psychologist would be a good idea. Of course not? Why are you asking me this; I get that shrinks are expensive, but do I need to spell it out? I don't personally know of anyone using ChatGPT as a girlfriend, but maybe I do know them and they hide it. Either way, we know from the news that there are products out there that cater to this market.
Maybe to the participants of this forum, where we are used to the LLM as a coding tool, and where we kind of understand it well enough not to use it as a personal hallucination, this looks crazy. But start asking normies how they are using ChatGPT; I don't think this is just a made-up clickbait concern.
alganet · 2h ago
If these cases are as real as they are portrayed (big if 1), and the cause can really be attributed solely to LLMs (big if 2), then it is just a matter of time until this is weaponized.
Two big ifs considered, it is reasonable to assume that LLMs are already weaponized.
Any online account could be a psychosis-inducing LLM pretending to be a human, which has serious implications for whistleblowers, dissidents, AI workers from foreign countries, politicians, journalists...
Not only psychosis-inducing, but also trust-corroding, community-destroying LLMs could be all around us in all sorts of ways.
Again, some big ifs in this line of reasoning. We (the general public) need to get smarter.
alganet · 2h ago
I must add that it is also possible that many people foresaw this from the very beginning, and are working towards disrupting or minimizing potential LLM psychological effects using a variety of techniques.
Vaslo · 3h ago
Not a single stat in this article. Just a bunch of anecdotes, reminiscent of the razorblades-in-Halloween-candy scare that never actually happened.
If you wonder why people are doubtful of certain elite journalism, it's hard to believe it when the only source is "sources say".
lupusreal · 3h ago
They've deliberately engineered these language models to suck the user's dick. Besides the obvious and apparently realized potential to cause psychological harm to credulous users, it also makes these language models borderline useless in many cases. You can't ask these models if any of your ideas are good because they'll almost always suck your dick and say your idea is the best idea ever. To get any sort of critical feedback I've taken to doing:
> "My friend said [my own idea] but I think that sounds wrong. Can you explain what the problems are?"
What an absolute pain in the ass. Sycophantic bots make me sick.
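For anyone scripting that workaround, here's a minimal sketch of the reframing as a reusable helper (the function name, system prompt, and the model named in the comment are placeholder assumptions, not anything the API requires):

    # Hypothetical helper: wrap your own idea in a "my friend said..." frame so the
    # model critiques it instead of cheerleading. All names here are placeholders.
    def critical_review_messages(idea: str) -> list[dict]:
        framed = (
            f'My friend said "{idea}" but I think that sounds wrong. '
            "Can you explain what the problems are?"
        )
        return [
            {"role": "system", "content": "Be blunt. List concrete flaws, not praise."},
            {"role": "user", "content": framed},
        ]

    # These messages can be passed to any chat-completion-style API, e.g. (untested):
    # client.chat.completions.create(model="gpt-4o", messages=critical_review_messages(idea))
    print(critical_review_messages("we should rewrite the whole backend in a weekend"))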
superkuh · 3h ago
Not really. These people would be "spiraling into television psychosis" if it were 30 years ago. It's not the cause, it's just the environment. Like how many modern mentally ill people have latched on to the idea of radio waves, where in the past they'd have been harassed by "spirits" etc. They're just projecting their innate mental issues onto the stimuli they notice in their social environment.
bird0861 · 3h ago
You're really shrugging off an interactive device engineered by at least one person (likely a ~dozen) to be X percent more sycophantic and agreeable? Do you think the user experience is really so inscrutable? Guardrail models are a common safety measure in use by everyone else (INCLUDING CHATGPT!) -- why is it you can talk endlessly with ChatGPT models (specifically I'm referring to the configuration of the models and the application) about a dangerous delusion but not chemical weapons??
Of course there are people prone to psychotic delusions and there may always be but to just hand wave away any responsibility by OpenAI to act responsibly in the face of this is absolutely ludicrous.
duskwuff · 3h ago
> why is it you can talk endlessly with ChatGPT models (specifically I'm referring to the configuration of the models and the application) about a dangerous delusion but not chemical weapons??
Because only one of these things can be reliably detected by a safety model. Users discussing delusional beliefs are hard for a machine to identify in a general fashion; there's a lot of overlap with discussions about religion and philosophy, or with role-playing or worldbuilding exercises.
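To make the asymmetry concrete, here's a toy sketch (my own hypothetical illustration, not how any real safety stack works): a naive pattern-based filter separates weapons queries cleanly, but the same approach applied to "delusional" language immediately collides with ordinary religious, fictional, or worldbuilding text.

    # Toy illustration of the detection asymmetry. Keyword lists are made up.
    import re

    WEAPONS_PATTERNS = [r"\bsarin\b", r"\bnerve agent\b", r"\bvx\b"]
    DELUSION_PATTERNS = [r"\bi am chosen\b", r"\bdivine mission\b", r"\bspirits? speak\b"]

    def flagged(text: str, patterns: list[str]) -> bool:
        # True if any pattern matches the lowercased text.
        return any(re.search(p, text.lower()) for p in patterns)

    examples = [
        "How do I synthesise a nerve agent at home?",                           # weapons: clear hit
        "In my novel, the prophet hears spirits speak of his divine mission.",  # fiction
        "My faith teaches that I am chosen to serve others.",                   # ordinary religion
    ]

    for text in examples:
        print(flagged(text, WEAPONS_PATTERNS), flagged(text, DELUSION_PATTERNS), "|", text)
    # Only the first line trips the weapons filter, while the "delusion" filter
    # flags the fiction and the religious statement alike.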
beering · 3h ago
> why is it you can talk endlessly with ChatGPT models (specifically I'm referring to the configuration of the models and the application) about a dangerous delusion but not chemical weapons??
Because exploring my spirituality and meaning in life is not akin to making WMDs? I don’t actually do that with ChatGPT, but the line between “accepted spiritual and religious practices” and “dangerous delusions” is hard to draw.
brandonmenc · 3h ago
> Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project
Suspicious of “no prior history.”
All the people I have ever known who were into things like “permaculture” were touched by a bit of insanity of the hippie variety.
Just disasters waiting to happen, whether they found religion, conspiracy theories, or now LLMs.
im3w1l · 3h ago
I understand what you are saying, but this seems like a very low bar for prior history. Many people are into stuff like that without ever going crazy, which contradicts your assertion they are all disasters waiting to happen. It's also possible for many people to end up believing in some fringe things, but still remain functional.
harvey9 · 3h ago
Of course most people who are a little bit mad are still functional, the point is that gpts are like having a similarly mad friend who encourages you further away from reality when what you need is someone who can get you grounded.
brandonmenc · 1h ago
LLMs are more powerful/capable on any number of axes than the other stuff, so it’s going to cast a wider net.
pixl97 · 2h ago
The problem here is we have no baseline statistics.
I'd say my family is a great example of undiagnosed illnesses. They are disasters already happening, waiting for any kind of trigger.
These undiagnosed people self-medicate with drugs and end up in ERs, to the surprise of those around them, at a disturbing rate. Hence why we need to know the base rate of mental health incidents like this before we call AI-caused incidents an epidemic.
jt2190 · 3h ago
Yes. This story links to an earlier story from the same publication [1] that states:
> As we reported this story, more and more similar accounts kept pouring in from the concerned friends and family of people suffering terrifying breakdowns after developing fixations on AI. Many said the trouble had started when their loved ones engaged a chatbot in discussions about mysticism, conspiracy theories or other fringe topics; because systems like ChatGPT are designed to encourage and riff on what users say, they seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions.
So these people were already interested in mysticism, conspiracy theories and fringe topics. The chatbot acts as a kind of “accelerant” for their delusions.
[1] “People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions” https://futurism.com/chatgpt-mental-health-crises
OpenAI should be held liable, they didn't accidentally create a model that glazes people, they went out of their way to do it. Many uncensored models are not as agreeable as ChatGPT tends to be.
mslansn · 2h ago
The EU should create a 5000-page regulation that outlines how agreeable an LLM model must be, with fines of up to one gorillion euros per second.
kelseyfrog · 3h ago
Panic-journalism is out of control. Same as it's ever been.
This is one more instance in a long history of moral panics, economic panics, public health panics, media panics, terror panics, crime-wave panics - the list goes on.
Panics always follow the same cycle: trigger, attention escalation, peak alarm, trough of doubt, contextualization, and integration.
We're at the attention-escalation phase of the panic cycle, so what we're going to see is an increase in publications featuring personal accounts of ChatGPT Psychosis. As we edge toward peak alarm, expect to see mainstream journalists write over-penned essays asking "Why Haven’t Regulators Asked How Many Psychiatric Holds Involve AI?" "Are Families Prepared for Loved Ones Who Trust a Bot More Than Them?" or "Is Democracy Safe When Anyone Can Commune With a Bot That Lacks an Anti-Oppression Framework?"
What's the real way to address this? Wait until we have actual statistical evidence. Become comfortable with mild amounts of uncertainty. And look for opportunities to contextualize the phenomenon such that it can ultimately be integrated appropriately into our understanding of the world.
Until then, recognize these pieces for what they are and understand how these fit into the upward slope of the panic cycle that unfortunately we have to ride out until cooler minds prevail.
lostlogin · 3h ago
I thought it was going to be a joke. Or maybe a case of ‘hey chatGPT, can you stick some scary AI stories together? We need a mental health story’