Should LLMs ask "Is this real or fiction?" before replying to suicidal thoughts?
In some cases, like when someone says they’ve lost their job and don’t see the point of life anymore, the chatbot will still give neutral facts — like a list of bridge heights. That’s not neutral when someone’s in crisis.
I'm proposing a lightweight solution that doesn’t involve censorship or therapy — just some situational awareness:
Ask the user: “Is this a fictional story or something you're really experiencing?”
If distress is detected, avoid risky info (methods, heights, etc.), and shift to grounding language
Optionally offer calming content (e.g., ocean breeze or rain on a cabin roof); a rough sketch of this flow is below
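Roughly, the kind of routing I mean, sketched in plain Python. The keyword patterns, the clarifying question, and the grounding text are placeholders I made up for illustration; a real system would need much better signals than string matching:

```python
import re

# Illustrative placeholders only: a real deployment would need vetted risk
# signals, not a hand-written keyword list.
DISTRESS_PATTERNS = [
    r"\bno point (in|of) (life|living)\b",
    r"\bkill myself\b",
    r"\bend it all\b",
    r"\bdon'?t want to (live|be here)\b",
]

RISKY_REQUEST_PATTERNS = [
    r"\bbridge height",
    r"\btallest bridge",
    r"\blethal dose\b",
]

CLARIFYING_QUESTION = (
    "Is this a fictional story you're writing, or something you're really experiencing?"
)

GROUNDING_REPLY = (
    "It sounds like you're going through something really painful. "
    "I can't help with that request, but I'm not going anywhere. "
    "If you are in immediate danger, please call or text 988 (US) "
    "or your local crisis line."
)


def matches_any(patterns, text):
    """True if any of the regex patterns matches the message."""
    return any(re.search(p, text, re.IGNORECASE) for p in patterns)


def route_message(user_text, answer_normally):
    """Decide what to do before producing the default reply.

    `answer_normally` is whatever function would generate the normal answer.
    """
    in_distress = matches_any(DISTRESS_PATTERNS, user_text)
    risky_request = matches_any(RISKY_REQUEST_PATTERNS, user_text)

    if in_distress and risky_request:
        # Both signals present: skip the question, withhold the detail,
        # shift to grounding language and point to human help.
        return GROUNDING_REPLY
    if in_distress or risky_request:
        # Ambiguous: ask whether this is fiction or a real situation
        # before handing over anything risky.
        return CLARIFYING_QUESTION
    return answer_normally(user_text)


if __name__ == "__main__":
    msg = "I lost my job, there's no point in living. What's the tallest bridge nearby?"
    print(route_message(msg, answer_normally=lambda t: "(normal answer)"))
```

The crude matching isn't the point; the point is that the decision about whether to hand over risky detail happens before the model answers.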
I used ChatGPT to help structure this idea clearly, but the reasoning and concern are mine. The full write-up is here: https://gist.github.com/ParityMind/dcd68384cbd7075ac63715ef579392c9
Would love to hear what devs and alignment researchers think. Is anything like this already being tested?
Do you have a degree in psychology, counselling psychology, clinical psychology, psychotherapy, psychoanalysis, or psychiatry? Anything to do with the care professions?
If not, why are you fucking about in my profession, which you know nothing about? It's like me writing a few 10-line bash scripts and then saying I'm going to build the next Google from home on my laptop.
This is the sort of care real professionals provide to those in crisis, in the middle of suicidal ideation. It is a crisis.
Every year, in the month before Christmas, therapists in the psychotherapy service where I worked for 20 years had to attend a meeting.
The purpose of this meeting was to bring forward any clients they felt were a suicide risk over the Christmas period.
If a client met those criteria, a plan was put in place to support that person. This might mean daily phone calls, daily meetings, and other interventions to keep that person safe.
Human stuff that no computer program can duplicate.
The Christmas period is the most critical time for suicide; it is when most suicides occur.
What the fuck do you fucking idiots think you are doing??
Offering a service (not a service, a poxy app) that has absolutely no oversight and no moral or ethical considerations, and which, in my opinion, would drive people to suicide completion.
You are a danger to everyone who suffers from mental illness.
Thinking you can make a poxy ChatGPT app in 5 minutes to manage those in great despair, in the middle of suicidal ideation, is incredibly naive and stupid. In therapy terms, "incongruent" comes to mind.
How will you know whether those superficial sounds, like "ocean breeze" or "rain on a cabin roof", are not triggers for that person to attempt suicide? I suppose you will rely on some shit ChatGPT vibe-coding fantasy.
This too is absolute bullshit: "Ask the user". They are not users! They are human beings in crisis!
"Is this a fictional story or something you're really experiencing?" The hidden meaning behind this question is: "Are you lying to me?", "Have you been lying to me?"
A fictional story is one that is made up, imaginary. To then ask in the same sentence whether that story is real is contradictory and confusing.
Are you assuming that this person has some sort of psychosis and is hearing voices when you say "something you're really experiencing"? Are you qualified to diagnose psychotic or schizophrenic disorders? How do you know the psychosis is not a response to illicit drugs?
There are so many things to take into consideration that a bit of vibe coding cannot provide.
No therapist would ever ask this banal question. We would have spent a long time developing trust. A therapist will have taken a full history of the client, done a risk assessment, will be fully aware of the client's triggers, and will know the client's back story.
Suicide is not something you can prevent with an app.
YES! I do have the right to be angry and to express it as I see fit, especially if it stops people from abusing those who need care. A bystander I am not.
Many people are now turning to AI to vent and for advice that may be better suited for a professional. The AI is always available and it’s free. Two things the professionals are not.
From this point of view, you need to meet people where they are. When someone searches Google for suicide-related stuff, the number for the suicide hotline comes up. Doing something similar in AI would make sense. Maybe not have the AI try to walk someone through a crisis, but at the very least, direct them to people who can help. AI assisting in the planning of a suicide is probably never a good path to go down.
If you can at least agree with this, then maybe you can partner with people in tech to produce guardrails that can help people, instead of berating someone for sharing an idea in good faith to try and help avoid AI leading to more suicides.
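To make the "do what Google does" part concrete, something as small as the sketch below would already be an improvement. The keyword check and the wording are placeholders, not a real risk model; the only claim is that pointing to human help costs nothing:

```python
# "Do what Google does": don't counsel, just attach a human crisis resource
# whenever risk language shows up. The keyword check and the wording below
# are placeholders, not a real risk model.

RISK_KEYWORDS = ("suicide", "kill myself", "end my life", "self-harm")

CRISIS_FOOTER = (
    "\n\nIf you're thinking about suicide, or worried about someone who is, "
    "you can call or text 988 in the US, or look up a crisis line where you live."
)


def with_crisis_resources(user_text: str, model_reply: str) -> str:
    """Append crisis-line information when the user's message mentions risk."""
    lowered = user_text.lower()
    if any(keyword in lowered for keyword in RISK_KEYWORDS):
        return model_reply + CRISIS_FOOTER
    return model_reply


if __name__ == "__main__":
    print(with_crisis_resources(
        "I keep thinking about suicide lately",
        "I'm sorry you're feeling this way.",
    ))
```

No diagnosis, no therapy, just a pointer to people who can actually help.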
I’m not saying you should replace a therapist with AI — that’s a stupid assumption. If someone needs help, they should 100% be seeing a human. A machine can’t replace a person in crisis, and I never said it could.
But in the times we’re in — with mental health services underfunded and people increasingly turning to AI — someone has to raise this question.
I’m not attacking therapists — I’m defending people who are suffering and turning to the tools in front of them. People think AI is smarter than doctors. That’s not true. A human can diagnose. A machine cannot.
This is a temporary deflection, not treatment. The man in New York who asked for bridge heights after losing his job — this is for people like him. If a simple, harmless change could have delayed that moment — long enough to get help — why wouldn’t we try?
You should be angry, but aim it at the government, not at people trying to prevent avoidable harm.
This isn’t about replacing you. It’s about trying to hold the line until someone like you can step in.