On August 25, 2023, China’s National Information Security Standardization Technical Committee (“TC260”) released the final version of the Practical Guidelines for Cybersecurity Standards – Method for Tagging Content in Generative Artificial Intelligence Services (《网络安全标准实践指南——生成式人工智能服务内容标识方法》) (“Tagging Standard”) (Chinese version available here), following a draft version circulated earlier this month.

Under China’s Interim Administrative Measures for Generative Artificial Intelligence Services (“genAI Regulation”), which took effect on August 15, 2023 and is discussed here, service providers must add tags to images, videos, and other content generated by generative artificial intelligence (“genAI”) services, in accordance with the Provisions on the Administration of Deep Synthesis of Internet Information Services.  To implement these tagging requirements, the Tagging Standard provides detailed technical guidance on how to tag content, including “texts, images, audio, and videos,” generated by genAI services.  The Tagging Standard applies to genAI service providers, which are defined broadly in the genAI Regulation as “an entity or individual that utilizes generative AI technologies to provide generative AI services, including providing generative AI services through application programming interface (API) or other methods.”  While it is not certain how the tagging requirements will be enforced against different players in the ecosystem in practice, companies interested in deploying their genAI services in the Chinese market may need to take these requirements into account at the development stage.

Requirement of Explicit Watermark or Prompt Text

  • Output Areas and User Input Areas

For areas displaying AI-generated content, which may be below the output area or below the user input area, there should be a prompt text or an “explicit watermark” spread uniformly across the background.

The prompt text should serve as a constant reminder, saying “Generated by AI” or something similar.

An explicit watermark is defined as semi-transparent text displayed in the interactive interface or added to the background.  While the watermark should be clear enough to be discernible, it should not affect the user experience.  For instance, setting its transparency to 90% keeps it visible but not intrusive.
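As a rough illustration, the sketch below (in Python, assuming the Pillow library and treating the interface background as an image) tiles a semi-transparent “Generated by AI” label across the background at roughly 90% transparency.  The function name, tiling spacing, and color are assumptions made for the example, not requirements of the Tagging Standard.

```python
from PIL import Image, ImageDraw, ImageFont

def add_explicit_watermark(image_path: str, output_path: str,
                           text: str = "Generated by AI") -> None:
    base = Image.open(image_path).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()

    # 90% transparency corresponds to roughly 10% opacity (alpha ~ 26 of 255),
    # keeping the text discernible without disrupting the user experience.
    alpha = int(255 * 0.10)

    # Tile the prompt text uniformly across the background (spacing is illustrative).
    step_x, step_y = 200, 120
    for y in range(0, base.height, step_y):
        for x in range(0, base.width, step_x):
            draw.text((x, y), text, font=font, fill=(128, 128, 128, alpha))

    Image.alpha_composite(base, overlay).convert("RGB").save(output_path)
```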

  • Images and Videos

For visual content like images or videos, a prompt text should be placed in the picture, preferably in the corners.  It should occupy at least 0.3% of the screen or be at least 20 pixels tall.  Again, the message should be as clear as “Generated by AI” or its equivalent.
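A minimal sketch of the corner prompt follows, again in Python with Pillow (version 10.1 or later for the sized default font).  It enforces the 20-pixel minimum height and, as an assumption, reads the 0.3% figure as a share of the image area; the helper name, margin, and color are illustrative choices rather than anything specified in the Tagging Standard.

```python
from PIL import Image, ImageDraw, ImageFont

def add_corner_prompt(image_path: str, output_path: str,
                      text: str = "Generated by AI") -> None:
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)

    # Start at the 20-pixel minimum height and grow until the text box
    # also covers at least 0.3% of the image area (capped for safety).
    size = 20
    min_area = 0.003 * img.width * img.height
    while True:
        font = ImageFont.load_default(size=size)
        left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
        if (right - left) * (bottom - top) >= min_area or size > img.height // 4:
            break
        size += 2

    # Place the label in the bottom-right corner with a small margin.
    margin = 10
    x = img.width - (right - left) - margin
    y = img.height - (bottom - top) - margin
    draw.text((x, y), text, font=font, fill=(255, 255, 255))
    img.save(output_path)
```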

  • Transitioning from Human to AI

In scenarios where services transition from human-provided to AI-driven, and where there is potential for user confusion, there should be a clear text or voice prompt.  This prompt should clearly state “Service provided by AI” or provide a similar message.

Requirement of Implicit Watermark

For AI-generated images, videos, and audio, implicit watermarks are also required.  These are marks added by genAI services to the content of images, audio, or videos that are imperceptible to humans but can be detected by technical means.

The implicit watermark should at least include the name of the service provider and should be detectable through an interface or other tools.

  • Image watermarks should use spatial- or transform-domain methods.  For original generated images with implicit watermarks, any area covering more than 50% of the image and with a resolution of at least 384×384 pixels should contain the full watermark.
  • Video watermarks should use spatiotemporal or transform domain methods, ensuring any continuous 5-second segment of an original generated video has the full watermark.
  • Audio watermarks should use time or transform domain methods, ensuring any continuous 10-second segment of original generated audio contains the full watermark.

The service provider should have an interface or tool to extract implicit watermarks from the content generated by its service.
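To make the spatial-domain idea concrete, the sketch below (Python, assuming NumPy and Pillow) hides the service provider’s name in the least significant bits of an image’s blue channel and recovers it again, standing in for the extraction interface described above.  A plain least-significant-bit scheme like this would not survive cropping or re-encoding, so it is only an illustration of the concept, not a compliant implementation; the function names and encoding choices are assumptions.

```python
import numpy as np
from PIL import Image

def embed_provider_name(image_path: str, output_path: str, provider: str) -> None:
    pixels = np.array(Image.open(image_path).convert("RGB"))
    payload = provider.encode("utf-8")
    # Prefix the payload with its length (2 bytes) so it can be recovered later.
    message = len(payload).to_bytes(2, "big") + payload
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    blue = pixels[:, :, 2].flatten()
    # Overwrite the least significant bits (assumes the image has enough pixels).
    blue[:bits.size] = (blue[:bits.size] & 0xFE) | bits
    pixels[:, :, 2] = blue.reshape(pixels.shape[:2])
    Image.fromarray(pixels).save(output_path, format="PNG")  # lossless container

def extract_provider_name(image_path: str) -> str:
    pixels = np.array(Image.open(image_path).convert("RGB"))
    bits = pixels[:, :, 2].flatten() & 1
    length = int.from_bytes(np.packbits(bits[:16]).tobytes(), "big")
    payload_bits = bits[16:16 + length * 8]
    return np.packbits(payload_bits).tobytes().decode("utf-8")
```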

Requirement of Metadata for AI Files

For AI-generated content saved as files, metadata should be added for identification.  This metadata should be in the format AIGC: {“ServiceProvider”: value1, “Time”: value2, “ContentID”: value3}, with specific guidelines on the length and format of each value.
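As an example of how such a field might be attached in practice, the sketch below (Python with Pillow, assuming a PNG container) writes the AIGC metadata as a text chunk.  The field names follow the format quoted above, while the helper name, the choice of container, and the value formats are assumptions for the example.

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_aigc_metadata(image: Image.Image, output_path: str,
                            provider: str, timestamp: str, content_id: str) -> None:
    metadata = PngInfo()
    # Store the AIGC field as a JSON string in a PNG text chunk.
    metadata.add_text("AIGC", json.dumps({
        "ServiceProvider": provider,
        "Time": timestamp,
        "ContentID": content_id,
    }))
    image.save(output_path, format="PNG", pnginfo=metadata)

# Reading it back:
# Image.open(output_path).text["AIGC"]
```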

Xuezi Dan is an associate in the Beijing office of Covington and Burling LLP. Her practice focuses on data privacy and cybersecurity. Xuezi helps clients understand and navigate the increasingly complex privacy regulatory issues in China. She has worked closely with many leading international companies on matters ranging from cross-border data transfer, data localization, data protection program, and cybersecurity regulatory compliance.

Yan Luo advises clients on a broad range of regulatory matters in connection with data privacy and cybersecurity, antitrust and competition, as well as international trade laws in the United States, EU, and China.

Yan has significant experience assisting multinational companies navigating the rapidly-evolving Chinese cybersecurity and data privacy rules. Her work includes high-stakes compliance advice on strategic issues such as data localization and cross border data transfer, as well as data protection advice in the context of strategic transactions. She also advises leading Chinese technology companies on global data governance issues and on compliance matters in major jurisdictions such as the European Union and the United States.

Yan regularly contributes to the development of data privacy and cybersecurity rules and standards in China. She chairs Covington’s membership in two working groups of China’s National Information Security Standardization Technical Committee (“TC260”), and serves as an expert in China’s standard-setting group for Artificial Intelligence and Ethics.