Some therapists are using ChatGPT. Clients are triggered

7 points by mdp2021 | 6 comments | 9/2/2025, 9:10:05 AM | technologyreview.com ↗

Comments (6)

mensetmanusman · 8h ago
Each word has nearly 5 billion calculations behind its choice. Why would an expert not use LLMs to graft their responses onto the repository of human knowledge?
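[For scale, a rough sketch not taken from the article: for a dense decoder-only transformer, generating one token costs on the order of 2 × N_params floating-point operations (one multiply and one add per weight), so "nearly 5 billion calculations" per word would correspond to a model of roughly 2.5 billion parameters; larger models spend considerably more.]

\[
\text{ops per token} \approx 2\,N_{\text{params}} \quad\Rightarrow\quad N_{\text{params}} \approx \frac{5 \times 10^{9}}{2} = 2.5 \times 10^{9}
\]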
mdp2021 · 37m ago
> Each word has nearly 5 billion calculations behind its choice

If you mean "the words picked by LLMs have nearly 5 billion calculations behind them", then the reply is simply "LLMs are easily effing morons" - those calculations must not be well spent, or not enough.

And in the example, the paid «expert» (which here actually means "the professional": expert, talented, and educated) is the human, not the machine.

ilioscio · 4h ago
Two big reasons spring to mind: client privacy and trust. This therapist was quite literally dumping confidential medical information into a third-party corporation's system without telling the patient that their information was being shared with outside entities. The broader issue of the patient's trust in the therapist's ability and professionalism is a whole other matter.
ljf · 11h ago
Very interesting and well-written article, let down by a clickbaity/dismissive headline - a shame, as it was well worth the read.
mdp2021 · 10h ago
> let down by a clickbaity/dismissive

? It's almost literally what's happening, and it is a big warning light in the whole picture...

mdp2021 · 11h ago
The perplexity that follows this idea of "automated compassion" from human service providers is manifold and heavy. The boundaries of farce are broken, and broken again:

> It would have been consoling and thoughtful ... were it not for the reference to the AI prompt accidentally preserved at the top: “Here’s a more human, heartfelt version with a gentle, conversational tone”. // [...] This was especially problematic, she adds, because “part of why I was seeing her was for my trust issues”. // [The patient] had believed her therapist to be competent and empathetic ... [the therapist] explained that because she’d never had a pet herself, she’d turned to AI for help expressing the appropriate sentiment”

> People value authenticity, particularly in psychotherapy ["Well discovered, congratulations" - Note from the Poster]

> A recent Stanford University study, for example, found that chatbots can fuel delusions and psychopathy by blindly validating a user rather than challenging them, as well as suffer from biases and engage in sycophancy [Again, NftP.] ... The tool will produce lots of possible conditions, but it’s rather thin in its analysis, he says [Again, NftP] ... The American Counseling Association recommends that AI not be used for mental health diagnosis at present [Maybe start calling things by their proper names: 'AI' is grossly misused here, and that misuse is an index of the very sloppiness that brought us to this point, NftP]

Interesting is the further confirmation, from this side field, of the perceived shallowness of LLMs:

> However, “it didn’t do a lot of digging” [a testing scientist] says. It didn’t attempt “to link seemingly or superficially unrelated things together into something cohesive … to come up with a story, an idea, a theory”

...not to mention that the Big Problem, especially in cognition, is verifying theories, not creating them. Many people go around with loads of bad ideas left unchecked or badly checked.

--

So, again: since the inception of LLMs we have been flooded with the paradoxical claim that "they are superior to most people" - the very people who constitute the problem in the professional world, the world in which we instead seek solutions incarnate. AI - classical AI - remains "that which can replace a professional".