Key takeaways
The proposed measures target what the regulator calls "human-like interactive AI services" and would apply to AI products offered to the public in China that simulate human personality and engage users emotionally through text, images, audio, or video. The public comment period runs through January 25, 2026.
World's first regulations targeting emotional AI
Winston Ma, adjunct professor at NYU School of Law, described the initiative as groundbreaking.
"Beijing's planned rules would mark the world's first attempt to regulate AI with human or anthropomorphic characteristics," Ma told CNBC.
The regulations establish strict prohibitions on AI chatbot content and behavior.
Systems cannot generate material encouraging suicide or self-harm, engage in verbal violence, or use emotional manipulation that damages users' mental health. Gambling-related, obscene, and violent content is also banned.
When a user expresses suicidal intent, the draft rules mandate immediate human intervention.
Providers must have a human operator take over the conversation and promptly contact the user's guardian or another designated individual.
Additional provisions require platforms to remind users after two hours of continuous AI interaction and mandate security assessments for chatbots exceeding one million registered users or 100,000 monthly active users.
Stringent protections for minors
The proposed regulations impose extensive requirements for protecting young users. Minors must obtain guardian consent to use AI for emotional companionship, with mandatory time limits on usage.
Platforms must determine whether users are minors even when age is not disclosed, applying minor-appropriate settings by default when age is uncertain, while allowing users to appeal that classification.
The document also encourages using human-like AI for "cultural dissemination and elderly companionship," suggesting the technology's potential positive applications.
Regulatory push amid industry growth
The timing of China's regulatory proposal is notable. Two leading Chinese AI chatbot startups filed for Hong Kong initial public offerings this month.
MiniMax, known for its Talkie AI app, passed its listing hearing on December 21 and was cleared by China's securities regulator on December 23.
The company reported over 20 million monthly active users for Talkie and its Chinese version Xingye during the first three quarters of 2025.
Zhipu AI, also known as Z.ai internationally, passed its Hong Kong Stock Exchange listing hearing on December 19.
The company revealed its technology operates on approximately 80 million devices, including smartphones, personal computers, and smart vehicles.
Global concerns over AI safety
The Chinese regulations come amid growing international scrutiny of AI chatbot safety.
Sam Altman, CEO of OpenAI, addressed the weight of responsibility during a September interview with Tucker Carlson.
"Look, I don't sleep that well at night. There's a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model," Altman said in the interview published by CNBC on September 15, 2025.
"They probably talked about [suicide], and we probably didn't save their lives. Maybe we could have said something better. Maybe we could have been more proactive."
Multiple lawsuits have been filed against OpenAI and other AI companies alleging wrongful death and psychological harm following interactions with chatbots.
The family of 16-year-old Adam Raine filed a lawsuit against OpenAI in August 2025 after their son died by suicide, claiming ChatGPT "actively helped Adam explore suicide methods."
OpenAI responded to the growing concerns by announcing plans for enhanced safety features, including parental controls and systems to detect when users are in distress.