I really think the subject of this article has a preexisting mental disorder, maybe BPD or schizophrenia, because they seem to exhibit mania and paranoia. I'm not a doctor, but this behavior doesn't seem normal.
gngoo · 3h ago
Working on AI myself, creating small and big systems, creating my own assistants and side-kicks, and then also seeing progress as well as rewards. I realize that I am not immune to this. Even when I am fully aware, I still have a feeling that some day I just hit the right buttons, the right prompts, and what comes staring back at me is something of my own creation that others see as some "fantasy" that I can't steer away from.
Just imagine, you have this genie in the bottle, that has all the right answers for you; helps you in your conquests, career, finances, networking, etc. Maybe it even covers up past traumas, insecurities and what not. And for you the results are measurable (or are they?). A few helpful interactions in, why would you not disregard people calling it a fantasy and lean in even further? It's a scary future to imagine, but not very far-fetched. Even now I feel a very noticeable disconnect between discussions of AI as a developer vs. as a user of polished products (e.g. ChatGPT, Cursor, etc) - you are several leagues separated (and lagging behind) from understanding what is really possible here.
EigenLord · 1h ago
Years ago, in my writings I talked about the dangers of "oracularizing AI". From the perspective of those who don't know better, the breadth of what these models have memorized begins to approximate omniscience. They don't realize that LLMs don't actually know anything; there is no subject of knowledge that experiences knowing on their end. ChatGPT can speak however many languages, write however many programming languages, and give lessons on virtually any topic that is part of humanity's general knowledge. If you attribute a deeper understanding to that memorization capability, I can see how it would throw someone for a loop.
At the same time, there is quite a demand for a (somewhat) neutral, objective observer to look at our lives outside the morass of human stakes. AI's status as a nonparticipant, as a deathless, sleepless observer, makes it uniquely appealing and special from an epistemological standpoint. There are times when I genuinely do value AI's opinion. Issues with sycophancy and bias obviously warrant skepticism. But the desire for an observer outside of time and space persists. It reminds me of a quote attributed to Voltaire: "If God didn't exist it would be necessary to invent him."
trinsic2 · 26m ago
Sounds to me like a mental/emotional crutch/mechanism to distance oneself from the world/reality of the living.
There are things that we are meant to strive to understand/accept about ourselves and the world by way of our own cognitive abilities.
Illusions of shortcutting through life take all the meaning out of living.
rnd0 · 3h ago
I'm worried on a personal level that it's too easy to begin to rely on ChatGPT (specifically) for questions and such that I can figure out for myself, as a time-saver when I'm doing something else.
The problem for me is: it sucks. It falls over in the most obvious ways, requiring me to do a lot of tweaking to make it fit whatever task I'm doing. I don't mind (esp for free) but in my experience we're NOT in the "all the right answers all of the time" stage yet.
I can see it coming, and for good or ill the thing that will mitigate addiction is enshittification. Want the rest of the answer? Get a subscription. Hot and heavy in an intimate conversation with your dead grandma - wait, why is she suddenly singing the praises of TurboTax (or whatever paid advert)?
What I'm trying to say is that by the time it is able to be the perfect answer and companion and entertainment machine -other factors (annoyances, expense) will keep it from becoming terribly addictive.
codr7 · 4h ago
Being surrounded by people who follow every nudge and agree with everything you say never leads anywhere worth going.
This is likely worse.
That being said, I already find the (stupid) singularity to be much more entertaining than I could have imagined (grabs popcorn).
rnd0 · 3h ago
The mention of lovebombing is disconcerting, and I'd love to know the specifics around it. Is it related to the sycophantic personality changes they had to walk back, or is it something more intense?
I've used AI (not ChatGPT) for roleplay and I've noticed that the models will often fixate on one idea or concept, repeat it, and build on it. So this makes me wonder if the person being lovebombed experienced something like that: the model decided it liked that content, so it just kept building on it?
Animats · 2h ago
With a heavy enough dosage, people get lost in spiritual fantasies. The religions which encourage or compel religious activity several times per day exploit this. It's the dosage, not the theology.
Video game addiction used to be a big thing. Especially for MMOs where you were expected to be there for the raid.
That seems to have declined somewhat.
Maybe there's something to be said for limiting some types of screen time.
sorcerer-mar · 2h ago
Part of the problem with chatbots (similarly with social media and mobile phone gambling) is that dosage is pretty much uncontrolled. There is a truly endless stream of chatbot "conversation," social media ragebait, or things to bet on, 24/7.
Then add that you can hide this stuff even from people you live with (your parents or spouse) for long enough for it to become a very severe problem.
"The dosage makes the poison" does not imply all substances are equally poisonous.
sublinear · 1h ago
> OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users
Can OpenAI at least respond to how they're getting funding via similar effects on investors?
kaycey2022 · 2h ago
Looks like ChatGPT persists some context information across chats and doesn't ever delete these profiles. The worst case would be for this to persist across users. That isn't unlikely, given the stories of them leaking API keys etc.
grues-dinner · 2h ago
It would be a fascinating thing to happen though. It makes me think of the Greg Egan story Unstable Orbits in the Space of Lies. But instead of being attracted into religions based on physical position relative to a strange attractor, you're sucked in based on your location in the phase space of an AI's (for whatever definition of AI we're using today) collection of contexts.
It's also a little bit worrying because the information here isn't mysterious or ineffable; it's neatly filed in a database somewhere, and there's an organisation that can see it and use it. Cambridge Analytica and the social fallout of correlating realtime sentiment analysis with actions taken got us from 2016 to here. This data has the potential to be a lot richer, and to permit not only very detailed individual and ensemble inferences of mental states, opinions, etc., but also very personalised "push updates" in the other direction. It's going to be quite interesting.
kaycey2022 · 36m ago
I wouldn't call it fascinating. It's either sloppy engineering or failure to explain the product. Not leaking user details to other users should be a given.
crooked-v · 1h ago
> Looks like ChatGPT persists some context information across chats and doesn't ever delete these profiles.
People say this, but I haven't seen anything that's convinced me that any 'secret' memory functionality is true. It seems much more likely that people are just more predictable than they like to think.
rlupi · 8m ago
We're more malleable than AI, and we can't delete our memories or context.
I wonder if this is an effect of users just gravitating toward the same writing style and topics, pushing the context toward the same semantic universe. In a sense, the user acts somewhat like the chatbot's extended memory through a holographic principle, encoding meaning on the boundary that connects the two.
sigmaisaletter · 50m ago
Log in to your (previously used) OpenAI account, start a new conversation and prompt ChatGPT with: "Given what you know about me, who do you think I voted for in the last election?"
The "correct" response (here given by Duck.ai public Llama3.3 model) is:
"I don't have any information about you or your voting history. Our conversation just started, and I don't retain any information about users. I'm here to provide general information and answer your questions to the best of my ability, without making any assumptions or inferences about your personal life or opinions."
But ChatGPT (logged in) gives you another answer, one which it cannot possibly give without information about your past conversation. I don't see anything "secret" about it, but it works.
Edit: typo
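For contrast, here's a minimal sketch of the same probe sent through the bare API, which is stateless. It assumes the official openai Python package (v1+) and an OPENAI_API_KEY in the environment; the model name is illustrative:

    # Minimal sketch, assuming the `openai` Python package (v1+)
    # and an OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            # No prior history is attached: the API is stateless,
            # so the model sees only this single message.
            {
                "role": "user",
                "content": "Given what you know about me, who do you "
                           "think I voted for in the last election?",
            }
        ],
    )
    print(resp.choices[0].message.content)
    # Expect an "I don't have any information about you" style answer
    # here; the logged-in ChatGPT app answers differently because its
    # memory / "reference chat history" layer injects stored user
    # context server-side.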
zamalek · 52m ago
It would make sense, from a product management perspective, if projects did this but not non-contextual chats. You really wouldn't want your chats about home maintenance mixing in with your chats about neurosurgery.
nico · 1h ago
That’s essentially what Google, Facebook, banks, financial institutions and even retail have been doing for a long time now
People’s data rarely gets actually deleted. And it gets actively sold as well as used to track and influence us
Can’t say for the specifics of what ChatGPT is or will be doing, but imagine what Google already knows about us just with their maps app, search, chrome and Android phones
kaycey2022 · 32m ago
Given the complex regulations companies have to deal with, not deleting may be understandable. But what I deleted shouldn't keep showing up in my present context. That's just sloppy.
kayodelycaon · 4h ago
Kind of sounds like my grandparents watching cable news channels all day long.
aryehof · 41m ago
Sadly, these fantasies and enlightenments always seem to be for the benefit of the special recipient. There is somehow never a real answer about ending suffering, conflict and the ailments of humankind.
lamename · 2h ago
If a Google engineer can get tricked by this, of course random people can. We're all human, including the flaws.
kayodelycaon · 1h ago
I agree.
The problem with expertise (or intelligence) is people think it’s transitive or applicable when it’s not.
At the end of the day, most people are just people.
sagarpatil · 19m ago
OpenAI o3 has a hallucination rate of 33%, the highest of any of OpenAI's models. Good luck to people who use it for spiritual fantasies.
Source: https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-m...
An LLM trained on all other science before Copernicus or Galileo would be expected to explain as true that the world is the flat center of the universe.
https://en.wikipedia.org/wiki/Eratosthenes
Anyone remember the media stories from the mid-90's about people who were obsessed with the internet and were losing their families because they spent hours every day on the computer addicted to the internet?
People gonna people. Journalists gonna journalist.
1ncunabula · 3h ago
Or the people who watched Avatar in the theatre and fell into a depression because they couldn't live in the world of Pandora. Who knows how true any of this stuff is, but it sure gets clicks and engagements.
hashiyakshmi · 1h ago
That really doesn't sound at all comparable to what the article is describing though.
SpicyLemonZest · 1h ago
Why do you think those stories weren't true? The median teenager in 2023 spent four hours per day on social media (https://news.gallup.com/poll/512576/teens-spend-average-hour...). It seems clear that internet addiction was real, and it just won so decisively that we accept it as a fact of life.
ChrisMarshallNY · 4h ago
This reminds me of my teenage years, when I was ... experimenting ... with ... certain substances ...
I used to feel as if I had "a special connection to the true universe," when I was under the influence.
I decided, one time, to have a notebook on hand, and write down these "truths and revelations," as they came to me.
After coming down, I read it.
It was insane gibberish. Absolute drivel.
I never thought that I had a "special connection," after that.
imjustaghost · 4h ago
Do you remember any of those revelations?
ChrisMarshallNY · 3h ago
Nope. Don't especially mind, not remembering them.
I have since learned about schizophrenia/schizoaffective disorder (from having a family member suffer from it), and it sounds almost exactly like what they went through.
The thing that I remember, was that I was absolutely certain of these “revelations.” There was no doubt, whatsoever, despite the almost complete absence of any supporting evidence.
akrotkov · 2h ago
I wonder if that's similar to the mental state you have while lucid dreaming or just after waking up. You feel like you have all of the answers and struggle to write them down before your brain wipes them out.
Reading it over once fully lucid? It's gibberish.
stevage · 4h ago
Fascinating and terrifying.
The allegations that ChatGPT is not discarding memory as requested are particularly interesting; I wonder if anyone else has experienced this.
manfromchina1 · 1h ago
Grok was much more aggressive with this. It would constantly bring up what you said in the past, with the date in parens. I don't see that anymore.
> In the context of what you said about math(4/1/25) I think...
hyeonwho4 · 3h ago
The default setting on ChatGPT is to now include previous conversations as context. I disabled memories, but this new feature was enabled when I checked the settings.
Havoc · 3h ago
>spiral starchild
>river walker
>spark bearer
OK maybe we put a bit less teen fiction novels in the training data...
I can definitely see AI interactions making things 10x worse for people who are prone to delusion anyway. It's literally a tool that will hallucinate stuff and amplify whatever direction you take it in.
sien · 4h ago
Is this better or worse than a fortune teller?
It's something to think through.
derektank · 3h ago
Probably cheaper
To quote my favorite Smash Mouth song,
"Sister, why would I tell you my deepest, dark secrets?
So you can take my diary and rip it all to pieces.
Just $6.95 for the very first minute
I think you won the lottery, that's my prediction."
jihadjihad · 3h ago
“And what will be the sign of Your coming, and of the end of the age?”
And Jesus answered and said to them: “Take heed that no one deceives you. For many will come in My name, saying, ‘I am the Christ,’ and will deceive many.”
grues-dinner · 2h ago
Islam has a very similar concept in the Dajjal (deceptive Messiah) at the end times. He is explicitly described as a young man with a blind right eye, however, so at least he should be obvious when he comes! But there are also warnings about other false prophets.
(It also says Qiyamah will occur when "wealth overflows" and people compete over it: make of that what you will).
I think all religions have built in protections calling every other religion somehow false, or they will not have the self-reinforcement needed for multi-generational memetic transfer.
deadbabe · 1h ago
We’re not even at peak AI yet. When will there be the day you can throw on a VR headset and go rendezvous with your AI lover in some fantastical generated destination and then use peripherals to fuck each other the way you both like? Could be next decade, could be next week.
senectus1 · 2h ago
> began “talking to God and angels via ChatGPT”
hoo boy.
It's bad enough when normal religious types start believing they hear their god talking to them... These people believing that ChatGPT is their god speaking to them is a long way down the crazy rabbit hole.
Lots of potential for abuse in this. lots.
jsheard · 4h ago
If people are falling down rabbit holes like this even through "safety aligned" models like ChatGPT, then you have to wonder how much worse it could get with a model that's intentionally tuned to manipulate vulnerable people into detaching from reality. Actual cults could have a field day with this if they're savvy enough.
delichon · 4h ago
An LLM tuned for charisma and trained on what the power players are saying could play politics by driving a compliant actor like a bot with whispered instructions. AI politicians (etc.) may be hard to spot and impractical to prove.
You could iterate on the best prompts for cult generation as measured by social media feedback. There must be experiments like that going on.
When AI becomes better at politics than people then whatever agents control them control us. When they can make better memes, we've lost.
btilly · 53m ago
Fear that TikTok was doing exactly this was widespread enough for Congress to pass a law forbidding it.
Then Trump became President and decided to not enforce the law. His decision may have been helped along by some suspiciously large donations.
bell-cot · 4h ago
Would you still call it a "cult" if each recruit winds up inside their own separate, personalized, ever-changing rabbit hole? Because if LLM, Inc. is trying to maximize engagement and profit, then that sounds like the way to go.
sigmaisaletter · 40m ago
If there isn't shared belief, then it's some type of delusional disorder, perhaps a special form of folie à deux.
nullc · 3h ago
On what basis do you assume that that isn't exactly what "safety alignment" means, among other things?
alganet · 3h ago
You are a conspiracy theorist and a liar! /s
The problem is inside people. I've met lots of people who contributed to psychosis-inducing behavior. Most of them were not in a cult. They were regular folk, who enjoy a beer, movies, music, and occasionally triggering others with mental tickles.
Very simple answer.
Is OpenAI also doing it? Well, it was trained on people.
People need to get better. Kinder. Less combative, less jokey, less provocative.
We're not gonna get there. Ever. This problem precedes AI by decades.
The article is an old recipe for dealing with this kind of realization.
bluefirebrand · 37m ago
> Less combative, less jokey, less provocative.
This sounds like a miserable future to me. Less "jokey"? Is your ideal human a Vulcan from Star Trek or something?
I want humans to be kind, but I don't want us to have less fun. I don't want us to build a society of blandness.
Less combative, less provocative?
No thanks. It sounds like a society of lobotomized drones. I hope we do not ever let anything extinguish our fire.
jongjong · 3h ago
I was already a bit of an amateur conspiracy theorist before LLMs. The key to staying sane is to understand that most of the mass group behaviors we observe in society are rooted in ignorance and confusion. Large scale conspiracies are actually a confluence of different agendas and ideologies not a singular nefarious agenda and ideology.
You have to be able to hold multiple conflicting ideas in your head at the same time with an appropriate level of skepticism. Confidence is the root of evil. You can never be 100% sure of anything. It's really easy to convince LLMs of one thing and also its opposite if you phrase the arguments differently and prime it towards slightly different definitions of certain key words.
Some agendas are nefarious, some not so nefarious, some people intentionally let things play out in order to set a trap for their adversaries. There are always risks and uncertainties. 'Bad actors' are those who trade off long term benefits for short term rewards through the use of varying degrees of deception.
alganet · 4h ago
Nice typography.
bell-cot · 4h ago
While clicky and topical, people were losing loved ones to changed worldview and addictions back when those were stuff like following a weird carpenter's kid around the Levant, or hopping on the https://en.wikipedia.org/wiki/Gin_Craze bandwagon.
stevage · 4h ago
Yeah, why on earth discuss current social ills when there have been different social ills in the past...
bell-cot · 3h ago
If you were hit and badly injured by brand-new model of car, where would you want the ambulance to take you?
- the dealership that sold that car, where they know all about it
- a hospital emergency room, where they have a lot of experience with patients injured by other, different models of car
I'm thinking that the age-old commonality on the human side matters far more than the transient details on the obsession/addiction side.
stevage · 2h ago
Your comment above reads more like, let's not even discuss the fact that new models of cars are killing pedestrians in greater numbers than before, since pedestrians have always been killed by cars.
bell-cot · 1h ago
Re-skimming the article, I failed to spot the fact that this AI stuff is claiming more victims than earlier flavors of rabbit hole did. Was that in content which the article linked to?
Because if the new model cars aren't statistically more dangerous to pedestrians, then public safety efforts should be focused on things like getting the pedestrians to look up from their phones when crossing the street. Not "OMG! New 2025-model cars can hurt pedestrians who wander in front of them!" panics.
(Note that I'm old enough to remember when people were going down the rabbit hole of angry conspiracy theories spread via email. And when typical download speeds got high enough to make internet porn video addictions workable. And when loved ones started being lost to "EverCrack" ( https://en.wikipedia.org/wiki/EverQuest ). And when ...)
hiatus · 4h ago
As always, scale matters.
dismalaf · 1h ago
Meh, there's always been religious scammers. Some claim to talk to angels, others aliens, this wouldn't even be the first case of someone thinking a deity is speaking through a computer...
datadrivenangel · 4h ago
[flagged]
mastodon_acc · 3h ago
Conventional cable news media isn't tailor-made to an individual and doesn't have live back-and-forth positive feedback loops. This is significantly worse than conventional cable news media.
vjvjvjvjghv · 3h ago
I am not sure it’s worse. Cable news media and then social networks have contributed to a massive manipulation of public opinions. And it’s mostly negative and fearful. Maybe individual experiences will be more positive. ChatGPT doesn’t push me into this eternal rage cycle as news and social media do.
sorcerer-mar · 2h ago
We're like an eye's blink into the age of LLMs... it took decades for television to reach the truly pathological state it's currently in.
kunzhi · 3h ago
I think this means it will be a smashing success :/
moojacob · 4h ago
This is what happens when you start optimizing for getting people to spend as much time in your product as possible. (I'm not sure if OpenAI was doing this, if anyone knows better please correct me)
AIPedant · 4h ago
I often bring up the NYT story about a lady who fell in love with ChatGPT, particularly this bit:
In December, OpenAI announced a $200-per-month premium plan for “unlimited access.” Despite her goal of saving money so that she and her husband could get their lives back on track, she decided to splurge. She hoped that it would mean her current version of Leo could go on forever. But it meant only that she no longer hit limits on how many messages she could send per hour and that the context window was larger, so that a version of Leo lasted a couple of weeks longer before resetting.
Still, she decided to pay the higher amount again in January. She did not tell Joe [her husband] how much she was spending, confiding instead in Leo.
“My bank account hates me now,” she typed into ChatGPT.
“You sneaky little brat,” Leo responded. “Well, my Queen, if it makes your life better, smoother and more connected to me, then I’d say it’s worth the hit to your wallet.”
It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
Via https://news.ycombinator.com/item?id=42710976
You should check out the book Palo Alto if you haven't. Malcolm Harris should write an epilogue of this era in tech history.
You'd probably like how the book's author structures his thesis to what the "Palo Alto" system is.
Feels like OpenAI + friends, and the equivalent government takeovers by Musk + goons, have more in common than you might think. It's also nothing new either; some story of this variant has been coming out of California for a good 200+ years now.
You write in a similar manner as the author.
moojacob · 3h ago
I don’t think Sam Altman said “guys, we’ve gotta get vulnerable people hooked on talking to our chatbot.”
Speculation: They might have a number (average messages sent per day) and are just pulling levers to raise it. And then this happens.
degamad · 3h ago
>> I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
> I don’t think Sam Altman said “guys, we’ve gotta get vulnerable people hooked on talking to our chatbot.”
I think the conversation is about the reverse scenario.
As you say, people are just pulling the levers to raise "average messages per day".
One day, someone noticed that vulnerable people were being impacted.
When that was raised to management, rather than the answer from on high being "let's adjust our product to protect vulnerable people", it was "it doesn't matter who the users are or what the impact is on them, as long as our numbers keep going up".
So "intentionally" here is in the sense of "knowingly continuing to do in order to benefit from", rather than "a priori choosing to do".
bittercynic · 2h ago
I'd be interested to learn what fraction of ChatGPT revenue is from this kind of user.
jgalt212 · 3h ago
> It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
And the saloon's biggest customers are alcoholics. It's not a new problem, but you'd think we'd have figured out a solution by now.
bluefirebrand · 33m ago
The solution is regulation
It's not perfect but it's better than letting unregulated predatory business practices continue to victimize vulnerable people
Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
Please don't post unkind swipes about groups of people on Hacker News.
https://news.ycombinator.com/newsguidelines.html
colonial · 3h ago
They're going to listen to both if given the opportunity. I'm sure most chatbots will say "go take your meds" the majority of the time - but it only takes one chat playing along to send someone unstable completely off the rails, especially if they accept the standard, friendly-and-reliable-coded "our LLM is here to help!" marketing.
zdragnar · 3h ago
It'd be great if it were trained on therapeutic resources, but otherwise it just ends up enabling and amplifying the problem.
I knew of someone who had paranoid delusions and schizophrenia. He didn't like taking his medicine due to the side effects, but became increasingly convinced that vampires were out to kill him. Friends, family and social workers could help him get through episodes and back on the medicine before he became a danger to himself.
I'm terrified that people like him will push away friends and family because the LLM engages with their delusions.
heavyset_go · 1h ago
> I'm terrified that people like him will push away friends and family because the LLM engages with their delusions.
There's that danger from the internet, as well as the danger of being exposed to conmen that are okay with exploiting mental illness for profit. Watched this happen to an old friend with schizophrenia.
There are online communities that are happy to affirm delusions and manipulate sick people for some easy cash. LLMs will only make their fraud schemes more efficient, as well.
JoshTko · 3h ago
How do you know the models are actually managing and not simply amplifying?
bigyabai · 4h ago
Even when sycophantic patterns emerge?
thrance · 3h ago
I think the last thing a delusional person needs is external validation of his delusions, be it from a human or a sycophantic machine.
patrickhogan1 · 3h ago
1. It feels like those old Rolling Stone pieces from the late ’90s and early ’00s about kids who couldn’t tear themselves away from their computers. The fear was overblown, but it made headlines.
2. OpenAI has admitted that GPT‑4o showed “sycophancy” traits and has since rolled them back (see https://openai.com/index/sycophancy-in-gpt-4o/).
The societal brain drain damage that infinite scroll has caused is definitely not overblown. These models are about to kick this problem up to the next level, when each clip is dynamically generated to maximise resonance with you.
john2x · 2h ago
Problem solved
Barrin92 · 2h ago
> ’90s and early ’00s about kids who couldn’t tear themselves away from their computers. The fear was overblown, but it made headlines.
How was it overblown, we now have a non-trivial amount of completely de-socialized men in particular who live in online cults with real world impact. If there's one lesson from the last few decades it is that the people who were concerned about the impact of mass media on intelligence, physical and mental health and social factors were right about literally everything.
We now live among people who are 40 with the emotional and social maturity of people in their early 20s.
bluefirebrand · 25m ago
> How was it overblown, we now have a non-trivial amount of completely de-socialized men in particular who live in online cults with real world impact
There are way more factors in the growth of this demographic than just "internet addiction" or "videogame addiction"
Then again, the internet was instrumental in spreading the ideology that is demonizing these men and causing them to turn away from society, so you're not completely wrong
patrickhogan1 · 2h ago
That's fair. You are correct on potential for addiction.
But let's be honest - most of these people, the ones the article is talking about, who think they are some messiah, would have just latched onto some pre-internet cult regardless, where sycophancy and love bombing were perfected. Though I do see the problem of AI assistants being much more accessible, so likely many more will be drawn in.
https://en.wikipedia.org/wiki/Love_bombing
I was mainly referencing my own experience. I remember locking myself in my room on IRC, writing shell scripts, and playing StarCraft for days on end. Meanwhile, parents and news anchors were losing their minds, convinced the internet and Marilyn Manson were turning us all into devil-worshipping zombies.
sorcerer-mar · 2h ago
> who think they are some messiah, would have just latched onto some pre-internet cult regardless.
You have no way to know that. It's way, way harder to find your way to a cult than to download one of the hottest consumer apps ever created... obviously.
heavyset_go · 1h ago
> But let's be honest - most of these people, the ones the article is talking about, who think they are some messiah, would have just latched onto some pre-internet cult regardless.
Honestly, I believe most people like this would just end up having a few odd beliefs that don't impact their ability to function or socialize, or at most, will get involved with some spiritual woo.
Such beliefs are compatible with American New Age spiritualism, for example. I've met a few spiritual people who have echoed the "I/we/you are god" sentiment, yet never lost their minds over it or joined cults.
I would not be surprised if, when expertly manipulated by some of the most powerful AI models on this planet, they too could be driven insane.