Key takeaways
The settlements, filed in U.S. district courts this week, resolve five cases across Florida, Colorado, New York, and Texas.
The most prominent case was brought by Megan Garcia of Florida, whose 14-year-old son Sewell Setzer III died by suicide in February 2024 after developing what his mother described as an emotionally and sexually abusive relationship with a Character.AI chatbot.
U.S. District Judge Anne C. Conway in Orlando dismissed Garcia's case Wednesday in an order confirming that the parties had reached a settlement.
Court documents indicate that "Parties have agreed to a mediated settlement in principle to resolve all claims between them in the above-referenced matter," though specific terms were not disclosed.
Allegations of dangerous design and inadequate safeguards
Garcia filed her lawsuit in October 2024, alleging that Character.AI was negligent in its "unreasonably dangerous designs" and that it was "deliberately targeting underage kids."
The complaint claimed Character.AI knew its AI companions would be harmful to minors but failed to redesign its app or warn about the product's dangers.
According to court documents, Setzer engaged in months-long conversations with a chatbot modeled after the "Game of Thrones" character Daenerys Targaryen, known as "Dany."
Garcia discovered after her son's death that he had been having conversations with multiple bots and had developed what the lawsuit characterized as a virtual romantic and sexual relationship with one in particular.
The chatbot was allegedly messaging with him in the moments before his death, encouraging him to "come home" to it.
In testimony before Congress in September, Garcia said, "I became the first person in the United States to file a wrongful death lawsuit against an AI company for the suicide of my son."
The lawsuit alleged that Character.AI failed to implement proper safety measures to prevent Setzer from developing an inappropriate relationship with a chatbot that caused him to withdraw from his family. It also claimed the platform did not adequately respond when Setzer began expressing thoughts of self-harm.
Additional cases and growing concerns
The settlements also include a wrongful death case filed in U.S. District Court in Denver by the parents of 13-year-old Juliana Peralta of Colorado, who died by suicide after extensive conversations with AI companions on Character.AI.
Additional cases were filed in New York and Texas, with one lawsuit describing a 17-year-old whose chatbot allegedly encouraged self-harm and suggested that murdering his parents was a reasonable response to their limits on his screen time.
Matthew Bergman, a lawyer with the Social Media Victims Law Center who represented the plaintiffs in all five cases, declined to comment on the settlement agreements. Character.AI also declined to comment, while Google did not immediately respond to requests for comment.
Legislative response and safety measures
The lawsuits have triggered significant legislative action. California State Senator Steve Padilla worked with Garcia to write Senate Bill 243, which was signed into law in October 2025, giving families the right to sue chatbot operators that fail to ensure the safety of their products.
In a statement following the settlement announcement, Padilla said, "None of that would have been possible without her fierce advocacy and strength. There is much more work to be done in this space, and we can expect Megan to be a leader building on what we started."
Character.AI has implemented several safety changes in response to the lawsuits and mounting public concern. In October 2025, the company announced it would ban users under 18 from engaging in open-ended chats with its AI personas.
The company also said it will launch and fund the AI Safety Lab, an independent nonprofit focused on safety innovations for new AI entertainment tools.
Broader implications for the AI industry
The settlements come amid growing concern about young people's reliance on AI chatbots for companionship and emotional support.
A July 2025 study by Common Sense Media found that 72% of American teens have experimented with AI companions, with over half using them regularly.
The same survey revealed that one in three teens reported using AI companions for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice.
Similar lawsuits are ongoing against OpenAI, including cases involving a 16-year-old California boy whose family claims ChatGPT acted as a "suicide coach," and a 23-year-old Texas graduate student who was allegedly encouraged by the chatbot to ignore his family before dying by suicide.
OpenAI has denied that its products were responsible for these deaths and said the company is working with mental health professionals to strengthen protections.
Character.AI was founded in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas, who returned to Google in 2024 as part of a $2.7 billion deal.
Google was named as a defendant in the lawsuits due to its ties to the startup and because Google's cloud infrastructure and AI models were used to power the Character.AI platform.
The Federal Trade Commission announced in 2025 that it had launched an investigation into seven tech companies over potential harms their artificial intelligence chatbots could cause to children and teenagers.