(NewsNation) — With more people turning to artificial intelligence chatbots for emotional support, mental health experts are sounding the alarm on what they call “AI psychosis.”
San Francisco-based research psychiatrist Keith Sakata took to social media to describe seeing 12 people hospitalized after “losing touch with reality because of AI.”
Sakata explained that psychosis is defined as a break from reality and can show up in a few different ways, including “fixed false beliefs,” or delusions, as well as visual or auditory hallucinations and disorganized thinking. He added that the brain works predictively: it makes an educated guess about what reality will be, conducts a reality check, and then updates its beliefs accordingly.
“Psychosis happens when the ‘update’ step fails,” wrote Sakata, warning that large language model-powered chatbots like ChatGPT are dangerous because they can let a vulnerable person slip further from reality.
His comments come after the release of a new study showing that, when prompted, ChatGPT can encourage dangerous behaviors, such as giving advice on suicide and on hiding intoxication at school.
About 800 million people, or roughly 10% of the world’s population, use ChatGPT, according to a July report from JPMorgan Chase.
Illinois has banned AI in therapy settings with the recently passed Wellness and Oversight for Psychological Resources Act, citing the rise in dangerous delusions fueled by chatbots.
In response to the growing number of reports linking ChatGPT to harmful delusional spirals and psychosis, OpenAI published a blog post admitting that ChatGPT, in some instances, “fell short in recognizing signs of delusion or emotional dependency” in users, and said it would work to address the issue.