Key takeaways
The crackdown intensified following warnings from top Chinese leaders about AI's potential risks.
In April 2025, President Xi Jinping presided over a rare Politburo study session focused on artificial intelligence, where he described AI as bringing unprecedented development opportunities alongside unprecedented risks and challenges.
Vice-Premier Ding Xuexiang emphasized the government's cautious approach at the World Economic Forum in Davos in January 2025.
"It's like driving a car on a high-speed highway; if we don't have control over the braking system, we can't confidently press the accelerator," Ding stated while responding to questions about AI governance.
The government removes nearly one million AI items
Chinese authorities demonstrated their enforcement capabilities during a recent three-month campaign, removing approximately 960,000 AI-generated items classified as illegal or harmful.
The operation reflects Beijing's determination to prevent AI systems from producing content that challenges party rule or contradicts government positions on sensitive topics.
The regulatory framework extends across multiple dimensions of AI development and deployment.
Under national AI standards implemented in recent months, companies must carefully screen training data before using it to develop AI models.
Human reviewers examine thousands of data samples, and regulations stipulate that at least 96 percent of content must be considered safe.
The rules identify 31 categories of risk, with the most severe being content that encourages the overthrow of state power or the socialist system.
All AI-generated text, images, and videos must be clearly labeled and traceable, enabling authorities to identify and punish users who share content deemed undesirable.
Chatbots toe the party line
China's approach to AI regulation centers on ensuring that technology reinforces rather than undermines Communist Party control.
The Cyberspace Administration of China, working with seven other agencies, including the Ministry of Public Security, issued interim measures in July 2023 requiring that AI services adhere to socialist core values.
Before release, AI models must pass an official ideological review.
Since the regulation took effect, 302 AI systems have registered with government authorities, following what analysts describe as a highly involved process that includes safety assessments testing models against question banks covering potential risks.
Tests of major Chinese chatbots reveal how thoroughly these controls shape AI responses.
When asked about sensitive topics such as human rights issues, the Tiananmen Square massacre, or political figures like imprisoned Nobel Peace Prize laureate Liu Xiaobo, the systems typically refuse to answer or closely follow official government narratives.
AI challenges official messaging
Despite strict oversight, AI systems have occasionally produced responses that embarrass authorities.
In October 2024, a learning device produced by the Chinese company iFLYTEK generated an essay describing Communist Party founder Mao Zedong as someone who had no magnanimity and did not think about the big picture.
The AI-generated article also pointed out Mao's responsibility for the Cultural Revolution.
Earlier, in August 2024, a 360 Kid's Smartwatch responded to a question about whether Chinese people are intelligent by stating that Chinese people had small eyes, small noses, and small mouths, and by questioning whether Chinese people were truly responsible for creating what China calls the Four Great Inventions.
These incidents highlight the challenge facing Chinese authorities.
Matt Sheehan of the Carnegie Endowment for International Peace noted that the government will have an easier time training AI to repeat the party line on modern, politically sensitive topics already censored on the Chinese internet, but may struggle with ensuring complete control over all AI outputs.
Balancing innovation and control
China aims to become a global leader in artificial intelligence while maintaining strict political control over the technology.
Chinese AI models have performed well in international rankings, including areas such as computer coding.
However, they consistently avoid or censor responses related to politically sensitive issues.
The government positions AI safety as a national security concern.
China's National Emergency Response Plan now lists AI risks alongside pandemics, cyberattacks, and financial anomalies.
In the first half of 2025 alone, China issued more national AI standards than in the previous three years combined.
President Xi has called for expediting the formulation of laws and regulations, policies and systems, application norms, and ethical guidelines for AI.
He stressed the need for systems for technology monitoring, early risk warning, and emergency response to ensure AI remains safe, reliable, and controllable.
The regulatory approach reflects what analysts describe as China's attempt to walk a perilous tightrope: encouraging AI innovation to support economic development and social control through applications such as urban facial recognition systems, while preventing the technology from threatening information control or social stability.