Chinese Platforms Implement Labels for AI-Generated Content

Key Points
- WeChat, Douyin, Weibo and RedNote now require AI‑generated content to be labeled.
- The law was drafted by multiple Chinese government agencies overseeing cyberspace and media.
- Users must label their own AI‑created posts and cannot remove platform‑applied labels.
- Weibo added a feature for reporting posts that lack proper AI labels.
- Platforms embed identifiers in metadata and may use detection tools to verify AI origin.
- Similar labeling practices are emerging among some U.S. technology companies.

Major Chinese social media services—including WeChat, Douyin, Weibo and RedNote—have begun applying mandatory labels to posts that contain AI-generated text, images, audio or video. The move follows new legislation drafted by several government agencies to improve transparency around generative AI material. Users are required to label their own AI-created content and are prohibited from removing or tampering with platform-applied labels. The platforms also offer tools for reporting unlabeled AI material. Some U.S. technology companies are adopting similar labeling in a parallel development.
Regulatory Background
Chinese authorities have introduced a law that requires clear identification of generative AI content on major internet services. The regulation was prepared by a coalition of agencies responsible for cyberspace oversight, industry and information technology, public security, and broadcasting. Its purpose is to help monitor the rapid growth of AI‑produced material and to curb misinformation and illegal use.
Platform Implementation
Leading platforms such as WeChat, Douyin (the Chinese version of TikTok), Weibo and RedNote (also known as Xiaohongshu) have rolled out labeling features. Each post that includes AI‑generated text, images, audio or video must display a label indicating its origin. The platforms embed identifiers in metadata and, in some cases, use internal detection tools to verify the source of the content.
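To illustrate the idea of a metadata identifier that users cannot silently remove, here is a minimal sketch of how a platform might attach a tamper-evident AI label to a post's metadata using an HMAC signature. The key, field names, and signing scheme are illustrative assumptions, not the actual mechanism any of these platforms uses.

```python
import hmac
import hashlib
import json

# Hypothetical platform signing key; a real service would manage keys securely.
PLATFORM_KEY = b"example-secret-key"

def apply_ai_label(metadata: dict) -> dict:
    """Attach an AI-origin label plus an HMAC so tampering is detectable."""
    labeled = dict(metadata, ai_generated=True)
    payload = json.dumps(labeled, sort_keys=True).encode()
    labeled["label_signature"] = hmac.new(
        PLATFORM_KEY, payload, hashlib.sha256
    ).hexdigest()
    return labeled

def verify_ai_label(metadata: dict) -> bool:
    """Return True if the AI label is present and its signature is intact."""
    if "label_signature" not in metadata or not metadata.get("ai_generated"):
        return False
    stripped = {k: v for k, v in metadata.items() if k != "label_signature"}
    payload = json.dumps(stripped, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(metadata["label_signature"], expected)

post = apply_ai_label({"post_id": "12345", "author": "user42"})
print(verify_ai_label(post))   # True: label intact
post["ai_generated"] = False   # simulated tampering
print(verify_ai_label(post))   # False: signature no longer matches
```

Because the signature covers the whole metadata record, stripping or editing the label invalidates it, which is the property the regulation's no-tampering rule aims for.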
User Obligations
Users are now required to proactively apply labels to any content they create with generative AI tools. The platforms prohibit users from removing, altering, or concealing any label that the service applies automatically. Additionally, users may be held responsible for employing AI-generated material to spread false information, infringe on others' rights, or engage in illegal activities.
Enforcement Measures
Weibo has introduced a reporting option that lets users flag content that lacks the required AI label. The platforms are also required to prevent the removal or tampering of labels they themselves generate. These steps are intended to ensure consistent compliance across the ecosystem.
International Context
The initiative mirrors efforts by some U.S. companies that provide generative AI tools, which are beginning to incorporate similar labeling mechanisms. For example, a recent hardware release from a major technology firm includes built‑in content provenance credentials to help identify AI‑generated media.