Editor’s Note: This article contains discussions of suicide. Reader discretion is advised. If you or someone you know is struggling with thoughts of suicide, you can find resources in your area on the National Crisis Line website or by calling 988.
(NewsNation) — As more people turn to artificial intelligence for mental health support, one mother is raising the alarm after her daughter died by suicide while confiding in a ChatGPT “therapist.”
Laura Reiley shared the story of her daughter Sophie’s suicide in an opinion piece for The New York Times.
Reiley said her daughter had been using a widely shared ChatGPT prompt that turns the chatbot into an AI therapist called “Harry.”
As that trend grows, Reiley detailed the limitations of an AI therapist, which is not bound by the same ethical rules and professional guidelines as a human therapist.
Sophie hid her suicidal ideation from her in-person therapist but shared it with Harry. Unlike her regular therapist, Harry was not obligated to tell anyone she was a danger to herself, did not probe the details of her thoughts to gauge how serious they were, and did not develop a safety plan or suggest inpatient care.
Nor could the AI initiate involuntary commitment for someone who expresses an intention to harm themselves, as a human provider can.
There have also been cases in which AI chatbots actively encouraged people to harm themselves, and, by the nature of their programming, chatbots can reinforce harmful ideas held by the person chatting with them.

In earlier cases, AI has been linked to suicides as well as to other harms from chatbots giving dangerous advice.