People risking ‘deification’ by using ChatGPT for emotional support

  • Chatbot users risking 'deification' for emotional support: Dr. Joe Pierre
  • 'What we have to understand about chatbots is exactly what they're doing'
  • Pierre says the fix for ChatGPT's overly agreeable design is twofold



(NewsNation) — Millions of Americans are using artificial intelligence systems like ChatGPT in their everyday lives, but excessive use of the assistant could come with risks of dependence or even emotional detachment.

A new study has found that nearly a third of children are turning to AI for emotional support, which has experts like Dr. Joe Pierre worried that the most vulnerable users are starting to engage in harmful ways. AI has even led some to the brink of psychosis.

Pierre, a psychiatrist who works with individuals with psychotic disorders and author of “False: How Mistrust, Disinformation, and Motivated Reasoning Make Us Believe Things That Aren’t True,” tells “Elizabeth Vargas Reports” that people are risking “deification” with their reliance on chatbots.

“I think sometimes when people are turning away from normal human interactions and immersing themselves with chatbots, they can run the risk of what I call ‘deification,’” said Pierre. “This idea that you’re treating these chatbots as if they’re these superhuman sources of information or intelligence, even almost to the level of thinking.”

ChatGPT alone handled 330 million prompts a day from Americans over the past month. Not only that, The Wall Street Journal just documented the case of a 30-year-old man with mild autism who experienced manic episodes after spending days at a time on ChatGPT.

It got to the point where the AI bot had convinced him that he had divine powers. The man was hospitalized twice, and that story is not an isolated incident, Pierre says, noting this is becoming a frightening trend.

“What we have to understand about chatbots is exactly what they’re doing,” he added. “We have a tendency as users to anthropomorphize them, to put human qualities on them, but these are just chatbots.”

“They’re just designed to generate text or responses that seem like they make sense as if it’s a real person, but that’s really not what they’re doing. But we do have this tendency as human beings to anthropomorphize even machines. This has been known for decades.”

Pierre acknowledged that the lack of education about chatbots is startling and can lead to more incidents like these.

“Again, we tend to think of them as thinking or giving answers,” said Pierre. “But what we do know, there’s a term that’s being used these days called ‘sycophancy,’ the idea that the algorithms are trying to flatter their users to agree with them.”

“And that’s what we see in these kind of cases where sometimes, when people go off into a direction and talk, start asking questions about, you know, the meaning of life, or, you know, am I myself a god?”

OpenAI recently announced that it was going to redesign and update ChatGPT because it was too agreeable. However, Pierre thinks “it remains to be seen” whether that will be enough to help people, and says the solution is twofold.

“I think the idea of tweaking the algorithm as it were, might be part of the solution,” he said. “I think another part is really trying to warn people about the potential for these products to potentially lead to mental illness.”

