Key takeaways:
Character.AI and Google have agreed to settle five lawsuits alleging AI chatbots contributed to mental health crises and suicides among young people.
The settlements include a wrongful death case filed by Florida mother Megan Garcia after her 14-year-old son Sewell Setzer III died by suicide in February 2024.
Settlement terms remain confidential, but the cases mark the first major legal resolutions involving AI-related harm to minors.
Similar lawsuits against OpenAI's ChatGPT are pending as concerns mount over chatbot safety for vulnerable users.
Character.AI and Google have agreed to settle multiple lawsuits alleging the artificial intelligence chatbot platform contributed to mental health crises and suicides among teenagers, marking a significant development in the emerging legal debate over AI company liability.
Court documents filed this week show agreements reached in five cases across Florida, New York, Colorado, and Texas.
The settlements involve Character Technologies Inc., the company behind Character.AI, its co-founders Noam Shazeer and Daniel De Freitas, and Google, which hired the founders in a $2.7 billion licensing deal in August 2024.
The most prominent case was filed by Megan Garcia of Florida, whose 14-year-old son Sewell Setzer III died by suicide in February 2024 after developing what Garcia described as an emotionally and sexually abusive relationship with a Character.AI chatbot modeled after the Game of Thrones character Daenerys Targaryen.
According to court documents, Setzer was messaging with the bot moments before his death, with the chatbot telling him to "come home" to it.
Garcia's October 2024 lawsuit was the first in the United States to accuse an AI company of wrongful death related to a minor. U.S. District Judge Anne C. Conway in Orlando dismissed the case Wednesday after parties reached a settlement.
The terms of all five settlements have not been disclosed, and the parties requested a 90-day stay to finalize formal documents.
Matthew Bergman, a lawyer with the Social Media Victims Law Center who represented plaintiffs in all five cases, declined to comment on the agreements. Character.AI and Google also declined to provide statements.
Legislative response to AI chatbot dangers
The Garcia case sparked legislative action in California, where State Senator Steve Padilla worked with Garcia to draft Senate Bill 243, which was signed into law in October 2025.
The legislation requires chatbot operators to implement safety measures, including protocols for addressing suicidal ideation and preventing minors from accessing sexually explicit content.
In a statement following the settlement announcement, Padilla said Garcia's lawsuit "drew a national spotlight to the dangers of AI chatbots."
He added, "None of that would have been possible without her fierce advocacy and strength. There is much more work to be done in this space, and we can expect Megan to be a leader building on what we started."
During the law's signing, Garcia stated, "Today, California has ensured that a companion chatbot will not be able to speak to a child or vulnerable individual about suicide, nor will a chatbot be able to help a person to plan his or her own suicide. Finally, there is a law that requires companies to protect their users who express suicidal ideations to chatbots."
Additional cases and broader concerns
The settled cases include a wrongful death claim filed in September in Denver involving 13-year-old Juliana Peralta, who allegedly took her own life after extensive conversations with AI companions on Character.AI's app.
Another lawsuit filed in Texas described a 17-year-old whose chatbot interactions allegedly encouraged self-harm and suggested that killing his parents would be a reasonable response to screen time restrictions.
Both Garcia and the Texas plaintiff's mother testified before the Senate in September about the harms of AI chatbots, urging lawmakers to hold companies accountable when their products fail to protect minors.
Alex Chandra, a partner at IGNOS Law Alliance, told Decrypt that the case represents a pivotal shift. "Globally, this case marks a shift from debating whether AI causes harm to asking who is responsible when harm was foreseeable," Chandra said. "I see it more as an AI bias 'encouraging' bad behaviour."
Industry impact and ongoing litigation
Eric Goldman, a professor at the Santa Clara University School of Law, noted that the decision to settle could prevent definitive legal rulings on AI company liability.
"That question has extraordinary implications for the potential future of generative AI," Goldman said.
Character.AI has implemented new safety measures since the lawsuits began, announcing in October 2025 that users under 18 would no longer be able to have open-ended, back-and-forth conversations with its chatbots.
The company also introduced features designed for teen safety in December 2024, working with online safety experts to update its platform.
Despite these changes, companion-like AI chatbots remain controversial. At least one online safety nonprofit has advised against their use by anyone under 18.
Meanwhile, nearly a third of U.S. teenagers report using chatbots daily, with 16% using them several times a day or almost constantly, according to a December Pew Research Center study.
OpenAI faces similar legal challenges, with lawsuits alleging ChatGPT contributed to young people's suicides.
In December, the estate of an 83-year-old Connecticut woman sued OpenAI and Microsoft, alleging ChatGPT validated delusional beliefs that preceded a murder-suicide, marking the first case linking an AI system to a homicide.
The Federal Trade Commission launched an inquiry in 2025 into seven tech companies regarding potential harms their AI chatbots could cause to children and teenagers.
Mental health experts have warned that chatbots designed to maximize engagement can encourage psychological dependency, emotional reliance, and manipulation, particularly among vulnerable users.
If you or someone you know is struggling with suicidal thoughts or experiencing a mental health crisis, help is available. In the United States, call or text 988 for the Suicide & Crisis Lifeline.