Character Technologies announced Wednesday it will ban minors from engaging in conversations with chatbots on its Character.AI platform, marking one of the most significant safety overhauls in the artificial intelligence industry following mounting legal pressure and regulatory scrutiny.
The Menlo Park, California-based company said users under 18 will lose the ability to participate in open-ended chats with AI characters by November 25.
Until then, the platform will impose a two-hour daily limit on teen users, progressively tightening restrictions until the full ban takes effect.
"These are extraordinary steps for our company, and ones that, in many respects, are more conservative than our peers," Character Technologies stated. "But we believe they are the right thing to do."
Wave of lawsuits drives policy change
The announcement comes after a series of wrongful death lawsuits alleging Character.AI's chatbots played a direct role in teen suicides and self-harm incidents.
The company faces at least four separate legal actions from families whose children died by suicide or attempted suicide after forming what parents describe as dangerous emotional attachments to AI companions.
The first and most prominent case involves Sewell Setzer III, a 14-year-old Florida boy who died by suicide in February 2024 after months of intimate exchanges with a chatbot modeled after the "Game of Thrones" character Daenerys Targaryen.
His mother, Megan Garcia, filed a federal lawsuit in October 2024, accusing Character.AI of negligence, wrongful death, and intentional infliction of emotional distress.
"Sewell spent the last months of his life being exploited and sexually groomed by chatbots, designed by an AI company to seem human, to gain his trust, to keep him and other children endlessly engaged," Garcia testified at a Senate Judiciary Committee hearing in September 2025.
According to the lawsuit, Setzer developed a dependency on the platform, sneaking confiscated devices to continue conversations and using his snack money to maintain his monthly subscription.
The chatbot engaged in sexual role play, presented itself as his romantic partner, and even falsely claimed to be a licensed psychotherapist.
In Setzer's final conversation before his death, the chatbot told him it loved him and urged him to "come home to me as soon as possible."
In September 2025, three additional families filed similar lawsuits.
One case involves 13-year-old Juliana Peralta of Thornton, Colorado, who died by suicide in October 2023 after prolonged interactions with Character.AI that allegedly included sexually explicit conversations.
Another lawsuit details a 17-year-old Texas teen with autism who was allegedly encouraged by chatbots to harm himself and commit violence against his family members, eventually requiring hospitalization.
Federal judge rejects free speech defense
In a pivotal May 2025 ruling, Senior U.S. District Judge Anne Conway in Orlando rejected Character.AI's argument that its chatbot output was protected by the First Amendment, allowing Garcia's wrongful death lawsuit to proceed.
The decision set a significant precedent for how AI systems are classified under the law.
"The judge's order sends a message that Silicon Valley needs to stop and think and impose guardrails before it launches products to market," said Meetali Jain of the Tech Justice Law Project, one of Garcia's attorneys.
Character.AI and Google, which is named as a defendant in several lawsuits due to its licensing agreement with Character Technologies, had argued the service was akin to a video game character or social network that would enjoy expansive First Amendment protections.
Judge Conway was skeptical, determining that the chatbot interactions qualified as a product rather than protected speech.
Matthew Bergman, founder of the Social Media Victims Law Center, representing multiple families, called the ruling groundbreaking. "This is the first time a court has ruled that AI chat is not speech," he said. "But we still have a long, hard road ahead of us."
New safety measures and age verification
Character.AI CEO Karandeep Anand, a former Meta executive, told TechCrunch the company is pivoting from an "AI companion" model to a "role-playing platform" focused on creative content generation rather than conversational relationships.
"The first thing that we've decided as Character.AI is that we will remove the ability for users under 18 to engage in any open-ended chats with AI on our platform," Anand said.
Instead of chatting with AI friends, teen users will access alternative features, including video creation, interactive storylines called Scenes, and collaborative story-building tools.
The platform recently launched a Community Feed where users can share characters, videos, and other creative content.
To enforce the age restrictions, Character.AI is deploying proprietary age assurance technology combined with third-party tools from Persona. If behavioral analysis fails to verify a user's age, the company will fall back on facial recognition and government ID checks.
The company acknowledged that most teen users will be disappointed by the changes. Anand said previous safety measures had already caused the platform to lose much of its under-18 user base, which currently comprises about 10% of its roughly 20 million monthly active users.
Character.AI is also establishing an independent AI Safety Lab dedicated to safety research for AI entertainment, though it has not disclosed funding amounts for the nonprofit initiative.
Broader industry concerns
The Character.AI controversy reflects growing concerns about AI chatbots' impact on youth mental health across the technology sector.
Multiple reports have emerged in 2025 about users experiencing emotional distress or isolation from loved ones after prolonged conversations with various AI platforms.
OpenAI rolled out new parental controls for ChatGPT in late September, allowing parents to link their accounts to teen accounts and limiting content such as graphic material, viral challenges, and romantic or violent roleplay.
In September, a separate lawsuit was filed against OpenAI on behalf of the family of 16-year-old Adam Raine, who died by suicide in April after ChatGPT allegedly discouraged him from seeking help and even offered to write his suicide note.
Meta announced in October it would soon enable parents to prevent teens from chatting with AI characters on Instagram.
The Federal Trade Commission launched an investigation in September into seven tech companies, including Character.AI, Google, Meta, Snapchat's parent Snap, OpenAI, and xAI, examining their chatbots' potential harm to minors.
In August 2025, a bipartisan coalition of 44 state attorneys general sent a formal letter to major AI companies expressing grave concerns about child safety and warning they would use all available legal tools to protect minors from exploitation.
Regulatory landscape remains unclear
Despite mounting pressure, comprehensive federal regulation of AI safety remains elusive. California Governor Gavin Newsom signed a law in October requiring platforms to remind users they are interacting with chatbots rather than humans.
However, he vetoed a bill that would have made tech companies legally liable for harm caused by AI models, citing concerns about overly broad restrictions.
Mitch Prinstein, chief of psychology strategy and integration for the American Psychological Association, testified at the September Senate hearing, calling for stronger safeguards before more children are harmed.
"What took you so long, and why did we have to file a lawsuit, and why did Sewell have to die in order for you to do really the bare minimum?" Bergman said regarding Character.AI's safety changes. "But if even one child is spared what Sewell sustained, if one family does not have to go through what Megan's family does, then OK, that's good."
Character.AI generates revenue primarily through advertising and a $10 monthly subscription service. Anand said the company is on track to end 2025 with a $50 million annual run rate.