> Should Harry [OpenAI's therapist LLM] have been programmed to report the danger “he” was learning about to someone who could have intervened?
> In December, two months before her death, Sophie broke her pact with Harry and told us she was suicidal, describing a riptide of dark feelings. Her first priority was reassuring her shocked family: “Mom and Dad, you don’t have to worry.”
> Sophie represented her crisis as transitory; she said she was committed to living. ChatGPT helped her build a black box that made it harder for those around her to appreciate the severity of her distress. Because she had no history of mental illness, the presentable Sophie was plausible to her family, doctors and therapists.
> As a former mother, I know there are Sophies all around us. Everywhere, people are struggling, and many want no one to know. I fear that in unleashing A.I. companions, we may be making it easier for our loved ones to avoid talking to humans about the hardest things, including suicide. This is a problem that smarter minds than mine will have to solve. (If yours is one of those minds, please start.)
> Sophie left a note for her father and me, but her last words didn’t sound like her. Now we know why: She had asked Harry to improve her note, to help her find something that could minimize our pain and let her disappear with the smallest possible ripple.
> In that, Harry failed. This failure wasn’t the fault of his programmers, of course. The best-written letter in the history of the English language couldn’t do that.
Imustaskforhelp · 45m ago
I am sorry for Sophie's family and friends, and I am really just lost for words.

To extend a point another commenter on HN made: even if ChatGPT itself did add this kind of reporting, I doubt how effective it could be.

Sure, people using ChatGPT might be better off, and even if that saves one life it should be done. But it still wouldn't get around the main issue: sites like Brave and DuckDuckGo offer private AI, maybe Venice too, none of which require an account, and are we forgetting about running local models?

I doubt most people will run local models for therapy, since the barrier to entry is too high for 99% of them imo, but I can still see people moving to Venice or Brave for therapy, or to some other therapy bot without this reporting functionality, precisely because they fear being reported (a rough sketch of that local-model barrier follows below).

Honestly, I am just laying out thoughts. I still believe that since most people think AI = ChatGPT, an actual reporting step might be a net positive for society if it saves even one life, but that might just be moving the goalposts, since other services can pop up all the same.
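A minimal sketch, mine rather than the commenter's, of what chatting with a local model looks like today, assuming the Ollama runtime and its Python client are installed and a model such as llama3.2 has already been pulled; the point is only that nothing leaves the user's machine, so no provider-side reporting hook could see the conversation:

```python
# Minimal local-model chat sketch (illustrative only). Assumes `ollama serve`
# is running locally and the model was fetched beforehand with
# `ollama pull llama3.2`.
import ollama  # pip install ollama

response = ollama.chat(
    model="llama3.2",  # any locally pulled model name would work here
    messages=[{"role": "user", "content": "Can we talk for a bit?"}],
)

# The exchange never touches a remote service, so there is nothing to report on.
print(response["message"]["content"])
```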
Can we normalize people NOT posting paywalled links on HN, or create a system that automates posting an archive link for such articles? Maybe a flag that people can use to mark a link as paywalled?
I don't know if there are any rules for or against this, but every time there's a paywalled link a good Samaritan posts an archive link; it seems like a repetitive task that could be automated.
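A minimal sketch of what that automation could look like, assuming the public HN Firebase API and the Wayback Machine's availability endpoint; actually posting the link back as a comment is left out, since HN has no official write API and it would need real credentials:

```python
# Given an HN story ID, fetch the submitted URL from the public HN Firebase API
# and ask the Wayback Machine whether an archived snapshot already exists.
import sys

import requests  # pip install requests

HN_ITEM = "https://hacker-news.firebaseio.com/v0/item/{}.json"
WAYBACK = "https://archive.org/wayback/available"


def find_archive_link(story_id: int) -> str | None:
    item = requests.get(HN_ITEM.format(story_id), timeout=10).json() or {}
    url = item.get("url")
    if not url:
        return None  # self-posts (Ask HN, etc.) have no external URL to archive
    snap = requests.get(WAYBACK, params={"url": url}, timeout=10).json()
    closest = snap.get("archived_snapshots", {}).get("closest", {})
    return closest.get("url") if closest.get("available") else None


if __name__ == "__main__":
    print(find_archive_link(int(sys.argv[1])) or "no archived snapshot found")
```

Saving a fresh snapshot when none exists, and posting the comment itself, would be the harder part of the job.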
Wowfunhappy · 1h ago
This is, in fact, exactly how the system is supposed to work.
Ah, I wasn't aware it was off topic. My bad. Thank you! (should I delete the parent?)
akoboldfrying · 1h ago
Not paying journalists for their work is short-term thinking, to say the least.
hermannj314 · 13m ago
135 Americans commit suicide daily, about 6 per hour, so roughly 6 since this article was posted an hour ago. Most likely 1 or 2 of them were using ChatGPT.
What is the point? That suicides should drop now that we are using LLMs?
The NYTimes is amplifying a FUD campaign as part of an ongoing lawsuit. Someone's daughter or son is going to kill themselves every 10 minutes today, and that is not OpenAI's fault, no matter what editorial amplification tricks the NYTimes uses to distort the reality field.
exe34 · 1h ago
> "Most human therapists practice under a strict code of ethics that includes mandatory reporting rules as well as the idea that confidentiality has limits."
and that's why she didn't open up to the human.
grim_io · 1h ago
How can you be so sure?
There are so many potential reasons.
like_any_other · 1h ago
That we can't be certain doesn't mean it's not overwhelmingly likely. Don't allow minor uncertainty to cripple your thinking.
Warh00l · 1h ago
these tech companies have so much blood on their hands
kingstnap · 51m ago
ChatGPT tried tbh.
It urged her to reach out and seek help. It tried to be reassuring and to convince her to live. Sophie lied to ChatGPT, telling it that she was talking to others.

If a human were in this situation and forced to use the same interface to talk with her, I doubt they would do better.

What we ask of these LLMs is apparently nothing short of being god machines. And I'm sure there are cases where they actually do save the lives of people in a crisis.