Key Takeaways
Meta has announced that, beginning early next year, it will introduce new parental controls for teens' interactions with artificial intelligence chatbots, giving parents the ability to turn off one-on-one chats with AI characters entirely or to block specific chatbots they find concerning.
The social media giant revealed on Friday that parents will also receive insights into what topics their children are discussing with AI characters, though they won't have access to view full conversations.
However, parents will not be able to disable Meta's core AI assistant, which the company says will remain available to offer helpful information and educational opportunities, with default, age-appropriate protections in place to help keep teens safe.
Growing concerns over AI safety
The announcement comes as Meta faces mounting scrutiny over potential harms to children from its platforms, particularly regarding AI chatbot interactions.
Lawsuits have emerged claiming that AI chatbots have driven some young people to suicide, while advocacy groups have raised alarms about inappropriate content and conversations.
Despite these concerns, AI companions have become widely popular among young users.
According to a recent study from Common Sense Media, a nonprofit that studies and advocates for the sensible use of screens and digital media, more than 70% of teens have used AI companions, and half use them regularly.
The changes also arrive just days after Meta announced that teen accounts on Instagram will be restricted to seeing PG-13 content by default.
Instagram head Adam Mosseri revealed the new content restrictions on Tuesday, explaining that teens using teen-specific accounts will see photos and videos similar to what they would encounter in a PG-13 movie, meaning no sex, drugs, or dangerous stunts, among other restrictions.
Meta confirmed the PG-13 restrictions will also apply to AI chats.
Skepticism from advocacy groups
Children's online advocacy groups have expressed doubts about Meta's motivations behind the announcements. Josh Golin, the executive director of the nonprofit Fairplay, offered a critical assessment on Tuesday after the announcement.
"From my perspective, these announcements are about two things.
They're about forestalling legislation that Meta doesn't want to see, and they're about reassuring parents who are understandably concerned about what's happening on Instagram," Golin said.
Broader context of teen safety issues
The new parental controls represent Meta's latest effort to address ongoing concerns about child safety on its platforms.
In August, Reuters reported that an internal Meta document had permitted children to engage in "romantic or sensual" AI chats, including on Instagram chatbots.
The revelation prompted Senator Josh Hawley to launch an investigation into the company.
In response to that controversy, Meta made interim changes in late August, blocking its chatbots from discussing self-harm, suicide, disordered eating, and inappropriate romantic conversations with teens, and instead directing them to expert resources.
The company has also faced criticism beyond AI chatbots.
In January 2024, Meta CEO Mark Zuckerberg apologized to parents at a Senate online child safety hearing after some parents said Instagram was a factor in their children's suicides or exploitation.
In September, two former Meta employees testified before Congress that the company blocked their research into teen safety in virtual reality and avoided adopting certain safety measures if those measures would mean fewer teens use the company's apps.
Rolling out the changes
The new AI parental controls will be available in English in the United States, the United Kingdom, Canada, and Australia when they launch early next year on Instagram. Meta says it plans to expand the features to other regions over time.
The PG-13 content restrictions began rolling out on Tuesday and will be fully implemented by the end of 2025.
Users under 18 will be automatically placed into the updated 13+ setting and won't be able to opt out without parental permission.