Key Takeaways
China's major social media platforms have rapidly implemented new content labeling systems to comply with groundbreaking legislation requiring clear identification of all artificial intelligence-generated material.
The law, which took effect on September 1, 2025, represents one of the world's most comprehensive approaches to AI content transparency, but also raises significant technical and compliance challenges.
Comprehensive labeling requirements under the new law
The rules were issued on Friday by the country's top internet watchdog, the Cyberspace Administration of China (CAC), alongside the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration.
The directive says there must be explicit and implicit labels for AI-generated text, images, audio, video, and virtual content. Explicit markings must be clearly visible to users, while implicit identifiers, such as digital watermarks, should be embedded in the metadata.
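To make the two label types concrete, here is a minimal sketch of how a publishing tool might attach both to a piece of AI-generated text. The field names ("AIGC", "generator") and the banner wording are illustrative assumptions; the regulation does not prescribe this schema.

```python
import json

# Hypothetical user-facing banner; the law requires a visible marker
# but does not dictate its exact wording.
EXPLICIT_BANNER = "[AI-generated content]"

def label_content(text: str, generator: str) -> dict:
    """Attach both label types to a piece of AI-generated text.

    Explicit label: a banner clearly visible to users.
    Implicit label: a machine-readable marker carried in metadata.
    """
    return {
        "body": f"{EXPLICIT_BANNER} {text}",  # explicit, user-facing
        "metadata": json.dumps({              # implicit, embedded
            "AIGC": True,                     # assumed field name
            "generator": generator,
        }),
    }

post = label_content("A scenic view of West Lake.", "example-model")
print(post["body"])
```

For images, audio, and video, the implicit marker would instead live in format-specific containers (e.g., embedded watermarks or file metadata), but the split between a visible label and a machine-readable one is the same.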
The regulations extend beyond traditional content to include virtual scenes and synthetic environments, potentially affecting gaming and metaverse applications. Online service providers involved in AI content generation must also ensure compliance with China's cybersecurity and deep synthesis management rules.
Platforms must verify AI-generated content before putting it online and add labels where required. If the metadata lacks AI markers but the content shows signs of AI-generation, this must be flagged accordingly.
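The platform-side rule above can be sketched as a simple pre-publication check: trust a metadata marker first, and fall back to flagging content that merely looks AI-generated. The "AIGC" key and the detector are assumptions for illustration; real platforms would use classifiers or watermark detectors rather than a string match.

```python
import json

def looks_ai_generated(body: str) -> bool:
    """Stand-in for a real detector (classifier, watermark check, etc.)."""
    return "as an ai model" in body.lower()

def moderate(post: dict) -> str:
    """Return the label a platform would apply before publication.

    Mirrors the rule in the text: if the metadata carries an AI marker,
    label the post as declared; if the marker is absent but the content
    shows signs of AI generation, flag it anyway.
    """
    meta = json.loads(post.get("metadata") or "{}")
    if meta.get("AIGC"):
        return "ai-generated"            # declared via the implicit label
    if looks_ai_generated(post["body"]):
        return "suspected-ai-generated"  # flagged by the platform itself
    return "unlabeled"

print(moderate({"body": "As an AI model, I cannot...", "metadata": "{}"}))
# prints "suspected-ai-generated"
```

The two-tier outcome ("declared" versus "suspected") matches how WeChat and Douyin describe their systems below: user declarations come first, with automated detection as a backstop.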
WeChat, China's dominant messaging platform with over 1.4 billion monthly active users globally, has implemented strict compliance measures. Known as Weixin on the mainland, the platform said content creators must proactively declare all AI-generated content upon publication.
In a post today, WeChat said it "strictly prohibits" any attempt to delete, tamper with, forge, or conceal AI labels added by its own automated tools, which are designed to catch AI-generated content that uploaders fail to flag. It also warned users against using AI to spread false information or for any other "illegal activities."
ByteDance's Douyin, the Chinese version of TikTok, has similarly updated its systems, urging users to apply a label to every post that includes AI-generated material and noting that it can use metadata to detect where a piece of content came from.
Weibo, meanwhile, has added the option for users to report "unlabeled AI content" when they see something that should carry such a label.
Technical challenges and industry skepticism
Despite the comprehensive framework, implementation faces hurdles.
Given the growing sophistication of anti-labeling techniques and the sheer breadth of AI use, many industry experts doubt that current technology can produce labels reliable enough to meet such stringent requirements.
Legal experts have raised concerns about the burden placed on platforms. "If WeChat or Douyin needs to examine every single photo uploaded to the platform and check if they are generated by AI, that will become a huge burden in terms of workload and technical capabilities for the company," says Jay Si, a Shanghai-based partner at Zhong Lun Law Firm.
The AI Labeling Measures stipulate that when reviewing an app for release or launch, the internet application distribution platform must verify whether the service provider has implemented the required content labeling function.
The logic behind this obligation is that app distribution platforms should be capable of and responsible for accurately identifying whether an app provides such labeling functions in accordance with the AI Labeling Measures.
However, under current technological conditions, it remains to be seen whether various platforms, especially small and medium-sized ones, possess sufficient technical means to comprehensively verify whether service providers have implemented the required labeling functionalities.
International perspective and expert analysis
The regulations have drawn international attention from AI governance experts.
Professor Zhang Linghan from China University of Political Science and Law, who serves on the UN High-Level Advisory Body on AI, argues that "labeling can effectively distinguish AI-generated synthetic information and prevent the spread and misuse of false information; on the other hand, it helps users quickly understand the attributes or parameters of generative AI products or services; finally, labeling assists regulatory authorities in evaluating and tracing AI-generated synthetic content, thereby promoting the legal and compliant development of such content."
Constellation Research analyst Holger Mueller noted the global importance: "One critical aspect will be how visible the tags are. Are they aimed at users, or more for the government and its branches?" he asked. "It will also be interesting to see if and how other countries and governments try and deal with the challenge of authenticating content posted online."
The labeling regime also aligns with a broader push to tighten AI oversight, a key focus of the CAC's 2025 Qinglang, or "clear and bright," campaign, an annual initiative aimed at cleaning up China's cyberspace. Deepfake technology, which uses AI to manipulate images, audio, and video, threatens both individual and national security, according to Chinese regulators.
The law, issued earlier this year, was drafted jointly by the CAC and the three other agencies named above, and is being enforced to help oversee the tidal wave of genAI content.
Global implications and enforcement concerns
China's approach sets new international standards for AI content regulation, going further than the European Union's AI Act: while the EU law also requires content labeling, it focuses on explicit disclosure and machine-readable formats.
However, China's regulation adds the responsibility of screening user-uploaded content for AI, something unique to China's context and unlikely to be replicated in other countries.
The European Union is set to implement its own AI content labeling requirements in August 2026, as part of the EU AI Act, which mandates that any content "significantly generated" by AI must be labeled to ensure transparency.
The U.S. has not yet mandated AI content labels, but a number of social media companies, such as Meta Platforms Inc., are implementing their own policies for tagging AI-generated media.
However, enforcement questions remain. As is common in normative documents like this one, the punishments for non-compliance are only alluded to with reference to other laws and regulations.
Liability is, however, limited to situations where (1) AI-generated content goes unlabeled and (2) serious consequences result. China has strict restrictions on speech, including on spreading false content through any means, and those laws can still be applied where there is active misinformation.
As for non-malicious failures to label, the emphasis is likely to fall on compliance rather than deterrence, with authorities ordering corrections rather than imposing punishments.
Despite the comprehensive scope of China's framework, industry experts remain cautious about its practical implementation. Concerns include the technical feasibility and durability of metadata labeling, as well as the compliance burden placed on platforms and app stores.
There is also potential for over-labeling or false positives (i.e., mistakenly attributing human-generated content to AI), which may undermine the effectiveness of the regulatory regime.
The implementation challenges are significant, given the enormous volume of content uploaded daily to Chinese social media platforms.
China's new AI content labeling regime positions it as a global frontrunner in AI governance, contrasting with the more fragmented or risk-focused approaches emerging in the EU and US.