Google and Character.AI plan to settle several lawsuits with families alleging that their products drove their children to self-harm or suicide, according to recent court documents.
The companies have reached settlement agreements “in principle” in five cases, they said in filings Tuesday and Wednesday.
That includes lawsuits brought by the families of 14-year-old Sewell Setzer III and 13-year-old Juliana Peralta, both of whom died by suicide after lengthy conversations with Character.AI’s chatbots.
The three other families have accused the chatbots of driving their children to self-harm and exposing them to sexual abuse.
The complaints also alleged that Google was “instrumental” to the development of Character.AI’s products, noting that its creators began their work while at Google and later entered into a $2.7 billion deal with the tech giant.
Google and Character.AI both declined to comment.
The lawsuits reflect growing concern about the impact of AI chatbots on children. Several parents, including Setzer’s mother Megan Garcia, testified before Congress last year and called on lawmakers to establish guardrails.
The problem is not unique to Character.AI. OpenAI is also facing a lawsuit over the death of 16-year-old Adam Raine, whose parents allege he was coached into taking his own life by ChatGPT.
Chatbot companies have moved to establish new protections amid the backlash. In recent months, OpenAI has announced new parental controls, as well as efforts to develop age prediction technology to direct young users to a more tailored experience.
Character.AI, for its part, banned users under 18 from engaging in “open ended” conversations with its chatbots in late November and has said it plans to develop a separate “under-18 experience.”