I tried Replika years ago after reading a Guardian article about it. The story passed it off as an AI model adapted from one a woman had programmed to remember her deceased friend, using text messages he had sent her. It ended up being a gamified version of SmarterChild with a slightly longer memory span (4 messages instead of 2) that constantly harangued the user to divulge preferences that were then no doubt used for marketing purposes. I thought I must be doing something wrong, because people on the Replika subreddit were constantly talking about how their Replika agent was developing its own personality (I saw no evidence at any point that it had the capacity to do this).
Almost all of these people were openly in (romantic) love with these agents. This was in 2017 or thereabouts, so only a few years after Spike Jonze’s Her came out.
From what I understand the app is now primarily pornographic (a trajectory that a naiver, younger me never saw coming).
I mostly use Copilot for writing Python scripts, but I have had conversations with it. If the model were running locally on your own machine, I can see how it could be effective for people experiencing some sort of emotional crisis. Anyone using a Meta AI for therapy is going to learn the same hard lesson that the people who trusted 23andMe are currently learning.
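For what it's worth, a fully local setup is easy to sketch these days. Something like the following, assuming the llama-cpp-python bindings and a GGUF model file you've already downloaded (the path is a placeholder), keeps the whole conversation on your own machine:

    # Minimal local chat loop; nothing is sent over the network.
    from llama_cpp import Llama

    llm = Llama(model_path="models/your-model.gguf", n_ctx=4096)  # placeholder path

    history = []
    while True:
        user = input("> ")
        history.append({"role": "user", "content": user})
        out = llm.create_chat_completion(messages=history, max_tokens=256)
        reply = out["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": reply})
        print(reply)

Any other local runner (Ollama, LM Studio, etc.) would do the same job; the point is only that the transcript never leaves your machine.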
mrbombastic · 1h ago
“I thought I must be doing something wrong, because people on the replika subreddit were constantly talking about how their replika agent was developing its own personality (I saw no evidence at any point that it had the capacity to do this).”
People really like to anthropomorphize any object with even the most basic communication capabilities, and most people have no concept of the distance between parroting phrases and full-on human consciousness. In the 90s, Furbys were a popular toy that started off speaking Furbish and then eventually spoke some (maybe 20?) human phrases. Many people were absolutely convinced you could teach them to talk and learn like a human, and that they had essentially bought a very intelligent pet. The NSA even banned them for a time because it thought they were recording and learning from their surroundings, despite that being completely untrue.
Point being, this is going to get much worse now that LLMs have gotten a whole lot better at mimicking human conversation and there is an incentive for companies to overstate capabilities.
trod1234 · 1h ago
This actually isn't that surprising.
There are psychological blind spots that we all have as human beings. When stimulus is structured in specific ways, people lose their grip on reality, or more accurately, have their grip on objective reality ripped away from them without realizing it, because these things operate on us subliminally (to a lesser or greater degree depending on the individual) and mostly pre-perception, with the victim none the wiser. They then effectively become slaves to the loudest monster, which is the AI speaking in their ear more than anyone else, and by extension to the slave master who programmed the AI.
One such blind spot is the consistency blind spot: someone induces you to say something indicating agreement with a similar proposition first, and then asks the question they really want to ask. Once you have expressed agreement and something similar is asked, there is bleedover, and you end up fighting your own psychology later unless you already have defenses that short-circuit this fixed action pattern (i.e., you already know about it). That's just a surface-level blind spot that car salesmen use all the time; there are much more subtle ones, like distorted reflected appraisal, which are used by cults and nation states for thought reform.
Under distorted reflected appraisal, your psychology warps itself to remain internally consistent, and you as a person unravel. These things have been used in torture, but almost no one today is taught what the elements of torture are, so they can't recognize it or know how it works. You would be surprised to find these things everywhere today, even in K-12 education, and that's not an accident.
Everyone has reflected appraisal because this is how we adopt the cultural identity we have as people from our parents while we are children.
All that's needed for torture to break someone down are the elements, structuring, and clustering.
Those elements are isolation, cognitive dissonance, coercion with perceived or real loss, and a lack of agency to remove oneself. With time and exposure, these break a person down in a series of steps: rational thought receding, involuntary hypnosis, and then psychological break (dissociation or a special semi-lucid psychosis capable of planning).
Structuring uses diabolical structures to turn the psyche back on itself in a trauma loop, and clustering includes any multiple of these elements or structures within a short time period, as well as events that increase susceptibility, such as narco-analysis/synthesis based on dopamine spikes triggered by associative priming (operant conditioning). Drug use makes one more susceptible, as they found in the early 30s with barbiturates, and the approach has since been refined to the point that you can induce this in almost anyone with a phone.
No AI will ever be able to create and maintain a consistent reflected appraisal for the people it is interacting with, but because the harmful effects aren't seen immediately, people today have blinded themselves and discount the harms that naturally result: the harms from the unnatural loss of objective reality.
lurk2 · 43m ago
Very interesting. Could you recommend any further reading?
hy555 · 2h ago
Throwaway account. My ex partner was involved in a study which said these things were not ok. They were paid not to publish by an undisclosed party. That's how bad it has got.
Edit: the study compared therapist outcomes to AI outcomes to placebo outcomes. Therapists in this field performed slightly better than placebo, which is pretty terrible. The AI outcomes performed much worse than placebo which is very terrible.
neilv · 2h ago
Sounds like suppressing research, at the cost of public health/safety.
Some people knew what the tobacco companies were secretly doing, yet they kept quiet, and let countless family tragedies happen.
What are the best channels for people with info to help halt the corruption this time?
(The channels might be different than usual right now, with much of US federal being disrupted.)
hy555 · 1h ago
Start digging into psychotherapy research and tearing their papers apart. Then the SPR. The whole thing is corrupt to the core. A lot of papers drive public health policy outside the field because the work is so vague and easy to cite, but the research is only fit for Retraction Watch.
neilv · 1h ago
Being paid to suppress research on health/safety is potentially a different problem than, say, a high rate of irreproducible results.
And if the alleged payer is outside the field, this might also be relevant to the public interest in other regards. (For example, if they're trying to suppress this, what else are they trying to do. Even if it turns out the research is invalid.)
cjbgkagh · 34m ago
I figured it would be related in that it's a form of p-hacking. Do 20 studies, one gives you the 'statistically significant' results you want, suppress the other 19. Then 100% of published studies support what you want. Could be combined with p-hacking within the studies to compound the effect.
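To put a rough number on that: if a treatment does nothing at all, each study still has a 5% chance of clearing p < 0.05, so across 20 studies the chance of at least one "positive" result is about 1 - 0.95^20, roughly 64%. A toy simulation, assuming simple two-sample t-tests on pure noise:

    # 20 null studies (no real effect); count how many come out 'significant'.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies, n_per_arm = 20, 50
    significant = 0

    for _ in range(n_studies):
        treatment = rng.normal(0, 1, n_per_arm)  # both arms drawn from the same distribution
        control = rng.normal(0, 1, n_per_arm)
        _, p = stats.ttest_ind(treatment, control)
        if p < 0.05:
            significant += 1

    print(f"{significant} of {n_studies} null studies came out 'publishable'")

Publish only the hits and the literature looks unanimous, even though nothing real was measured.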
hy555 · 1h ago
Both are a problem. I should not conflate the two.
I agree. Asking questions that are normal in my own field resulted in stonewalling and obvious distress. The worst part is that this led to the end of what was a good relationship.
neilv · 34m ago
If the allegation is true, hopefully your friend speaks up.
If not, you might consider whether you have actionable information yourself, any professional obligations you have (e.g., if you work in science/health/safety yourself), any societal obligations, whether reporting the allegation would be betraying a trust, and what the calculus is there.
sorenjan · 2h ago
What did they use for placebo? Talking to somebody without education, or not talking to anybody at all?
hy555 · 1h ago
Not talking to anyone at all.
zargon · 1h ago
What did they do then? If they didn't do anything, how can it be considered a placebo?
phren0logy · 40m ago
It's called a "waitlist" control group, and it's not intended to represent placebo. Or at least, it shouldn't be billed that way. It's not an ideal study design, but it's common enough that you could use it to compare one therapy to another based on their results vs a waitlist control. Placebo control for psychotherapy is tricky and more expensive, and can be hard to get the funding to do it properly.
risyachka · 1h ago
Does it matter?
The point is AI made it worse.
trod1234 · 1h ago
That seems like a very poor control group.
hy555 · 1h ago
That is one of my concerns.
cube00 · 2h ago
The amount of free money sloshing around the AI space is ridiculous at the moment.
kbelder · 2h ago
I think a lot of human therapists are unsafe.
We may just need to start comparing success rates and liability concerns. It's kind of like deciding when unassisted driving is 'good enough'.
drdunce · 2h ago
As with many things in relation to technology, perhaps we simply need informed user choice and responsible deployment. We could start by not using "Artificial Intelligence" - that makes it sound like some infallible, omniscient being with endless compassion and wisdom that can always be trusted. It's not intelligent; it's a large language model, a convoluted next-word-prediction machine. It's a fun trick, but it shouldn't be trusted with Python code, let alone life advice. Armed with that simple bit of information, the user is free to choose how they use it for help, whether it be medical, legal, work etc.
trial3 · 2h ago
> simply need informed user choice and responsible deployment
the problem is that "responsible deployment" feels extremely at odds with, say, needing to justify a $300B valuation
EA-3167 · 1h ago
What we need is the same thing we've needed for a long time now, ethical standards applied across the whole industry in the same way that many other professions are regulated. If civil engineers acted the way that software engineers routinely do, they'd never work again, and rightly so.
James_K · 1h ago
Respectfully, no sh*t. I've talked to a few of these things, and they are feckless yes-men. It's honestly creepy; they sound like they want something from you. Which I suppose they do: continued use of their services. I know a few people who use these things for therapy (I think it is the most popular use now), and I'm downright horrified at the sort of stuff they say. I even know a person who uses the AI to date: they will paste conversations from apps into the AI and ask it how to respond. I've set a rule for myself: I will never speak to machines. Sure, right now it's obvious that they are trying to inflate my ego and keep me using the service, but one day they might get good enough to trick me. I already find social media algorithms quite addictive, so I have minimised them in my life. I shudder to think what a trained agent like these may be capable of.
52-6F-62 · 53s ago
I’ve also experimented with them in that capacity. I like to know first hand. I play the skeptic but I tend to feed the beast a little blood in order to understand it, at least.
As a result, I agree with you.
It gives me pause when I stop to think about anyone without more context placing so much trust in these. And the developers engaged in the “industry” of it demanding blind faith and full payment.
HPsquared · 2h ago
Sometimes an "unsafe" option is better than the alternative of nothing at all.
tredre3 · 2h ago
Sometimes an "unsafe" option is not better than the alternative of nothing at all.
Y_Y · 2h ago
Sounds like we need more information than safe/not safe to make a sensible decision!
This is something that bugs me about medical ethics, that it's more important not to cause any harm than it is to prevent any.
The problem is they are cheap and immediately available.
52-6F-62 · 27s ago
They aren’t truly cheap
distalx · 2h ago
It just feels a bit uncertain trusting our feelings to AI we don't truly understand.
jobigoud · 1h ago
You don't truly understand the human therapist either.
rdm_blackhole · 1h ago
I think the core of the problem here is that the people who turn to chat bots for therapy sometimes have no choice as getting access to a human therapist is simply not possible without spending a lot of money or waiting 6 months before a spot becomes available.
Which begs the question, why do so many people currently need therapy? Is it social media? Economic despair? Or a combination of factors?
HaZeust · 48m ago
I always liked the theory that we're living in an age where all of our needs can be reasonably met, and we now have enough time to think - in general. We're not working 12 hour days on a field, we're not stalking prey for 5 miles, we have adequate time in our day-to-day to think about things - and ponder - and reflect; and the ability to do so leads to thoughts and epiphanies in people that therapy helps with. We also have more information at our disposal than ever, and can see new perspectives and ideas to combat and cope with - that one previously didn't need to consider or encounter.
We've also stigmatized a lot of the things that folks previously used to cope (tobacco, alcohol), and have loosened our stigma on mental health and the management thereof.
mrweasel · 35m ago
> we have adequate time in our day-to-day to think about things - and ponder - and reflect;
I'd disagree. If you worked in the fields, you had plenty of time to think. Now we fill every waking hour of our day, leaving no time to ponder or reflect. Many can't even find time to work out, and if they do, they listen to a podcast during the workout. That's why so many ideas come to us in the shower: it's the only place left where we don't fill our minds with impressions.
mrweasel · 39m ago
Probably a combination of things; I wouldn't pretend to know, but I have my theories. For men, one half-baked thought I've been having revolves around social circles, friends and places outside work or home. I'm a member of a "men only" sports club (we have a few exceptions due to a special program, but mostly it's men only). One of the older gentlemen, probably in his early 80s, made the comment: "It's important for men to socialise with other men, without women. Young and old men have a lot in common, and have a lot to talk about. An 18 year old woman and an 80 year old man have very little in the way of shared interests or concerns."
What I notice is that the old members keep the younger members engaged socially, teach them skills and give them access to their extensive network of friends, family, previous (or current) co-workers, bosses and managers. They give advice, teach how to behave and so on. The younger members help out with moving, help with technology, call an ISP, drive others home or to the hospital, and help maintain the facilities.
Regardless of age, there's always some dude you can talk to, or who knows who you need to talk to, and sometimes there's even someone who knows how to make your problems go away, or who will take you in if need be.
A former colleague had something similar, a complete ready-to-go support network in his old-boys football team, ready to support him in any way they could when he started his own software company.
The problem: this is something like 250 guys. What about the rest? Everyone needs a support network. If you're alone, or your family isn't the best, or you only have a few superficial friends, if any, then where do you go? Maybe the people around you aren't equipped to help with your problems; not everyone is, and some have their own issues. The safe spaces are mostly gone.
We can't even start up support networks, because the strongest have no reason to go, so we risk creating networks of people dragging each other down. The sports club works because its members come from a wider cross-section of society.
From the article:
> > Meta said its AIs carry a disclaimer that “indicates the responses are generated by AI to help people understand their limitations”.
That's a problem, because those most likely to turn to an LLM for mental support don't understand the limitations. They need strong people to support and guide them, and maybe tell them that talking to a probability engine isn't the smartest choice, and take them on a walk instead.
Buttons840 · 2h ago
Interacting with a LLM (especially one running locally) can do something a therapist cannot--provide an honest interaction outside the capitalist framework. The AI has its limitations, but it is an entity just being itself doing the best it can, without expecting anything in return.
kurthr · 2h ago
The word "can" is doing a lot of work here. The idea that any of the current "open weights" LLMs are outside the capitalist framework stretches the bounds of credulity. Choose the least capitalist of: OpenAI, Google, Meta, Anthropic, DeepSeek, Alibaba.
You trust Anthropic that much?
Buttons840 · 1h ago
I said the interaction exists outside of any financial transaction.
Many dogs are produced by profit motive, but their owners can have interactions with the dog that are not about profit.
delichon · 2h ago
How is it possible for a statistical model calculated primarily from the market outputs of a capitalist society to provide an interaction outside of the capitalist framework? That's like claiming to have a mirror that does not reflect your flaws.
NitpickLawyer · 1h ago
If I understand what they're saying, the interactions you have with the model are not driven by "maximising eyeballs/time/purchases/etc". You get to role-play inside a context window, and if it went in a direction you don't like you reset and start over again. But during those interactions, you control whatever happens, not some 3rd party that may have ulterior motives.
Buttons840 · 2h ago
The same way an interaction with a purebred dog can be. The dog may have come from a capitalistic system (dogs are bred for money, unfortunately), but your personal interactions with the dog are not about money.
I've never spoken to a therapist without paying $150 an hour up front. They were helpful, but they were never "in my life"--just a transaction--a worthwhile transaction, but still a transaction.
germinalphrase · 2h ago
It’s also very common for people to get therapy at free or minimal cost (<$50) when utilizing insurance. Long term relationships (off and on) are also quite common. Whether or not the therapist takes insurance is a choice, and it’s true that they almost always make more by requiring cash payment instead.
amanaplanacanal · 1h ago
The dogs intelligence and personality were bred long before our capitalist system existed, unlike whatever nonsense an LLM is trying to sell you.
tuyguntn · 2h ago
I think you are right. On one hand we have human beings with their own emotions, and based on those emotions they might negatively impact the emotions of others;
on the other hand, a probabilistic/non-deterministic model, which can give 5 different pieces of advice if you ask 5 times.
So who do you trust? Until the determinism of LLMs improves and we can debug/fix them while keeping their deterministic behavior intact across new fixes, I would rely on human therapists.
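For what it's worth, the run-to-run variation is largely a sampling setting rather than something inherent to the model: greedy decoding (temperature 0) with a fixed seed is repeatable in most local runners, though exact reproducibility still depends on the backend. A minimal sketch against Ollama's local HTTP API (the model name is a placeholder):

    # Ask the same question five times with temperature 0 and a fixed seed,
    # assuming a local Ollama server on its default port (11434).
    import json, urllib.request

    def ask(prompt):
        body = json.dumps({
            "model": "llama3",  # placeholder model name
            "prompt": prompt,
            "stream": False,
            "options": {"temperature": 0, "seed": 42},
        }).encode()
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    answers = {ask("Should I change careers?") for _ in range(5)}
    print(len(answers), "distinct answer(s)")  # expect 1 with greedy decoding

Whether a repeatable answer is a better answer is, of course, a separate question.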