On July 13, 2023, the Cyberspace Administration of China (“CAC”), together with six other agencies, issued the Interim Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能服务管理暂行办法》) (“Generative AI Measures” or “Measures”) (official Chinese version here).  The Generative AI Measures are set to take effect on August 15, 2023. 

As the first comprehensive AI regulation in China, the Measures cover a wide range of topics touching upon how Generative AI Services are developed and how such services may be offered.  These topics range from AI governance, training data, and tagging and labeling to data protection and user rights.  In this blog post, we spotlight a few of the most important points that could affect a company’s decision to develop and deploy Generative AI Services in China.

This final version follows a draft released for public consultation in April 2023 (see our previous post here).  Several requirements in the April 2023 draft were removed, including, for example, the prohibition on user profiling, the user real-name verification requirement, and the requirement to take measures within three months, through model optimization training, to prevent illegal content from being generated again.  However, several provisions in the final version remain vague (potentially by design) and leave room for future regulatory guidance as the generative AI landscape continues to evolve.

Scope and Key Definitions

Article 2 of the Measures sets out the scope of the regulation, which applies to the “provision of services of generating content in the form of text(s), picture(s), audio and video(s) to the public within China through the use of Generative AI Technologies.”  In this context, Article 22 offers the following definitions:

  • “Generative AI Technologies” are defined as models and related technologies that are capable of generating content in the form of text(s), picture(s), audio and video(s).
  • “Generative AI Services” refer to services that are offered to the public and generate content in the form of text(s), picture(s), audio and video(s) through the use of Generative AI Technologies.
  • “Generative AI Service Provider” (“Provider”) refers to “an entity or individual that utilizes Generative AI Technologies to provide Generative AI Services, including providing Generative AI Services through application programming interface (API) or other methods.”  

Note that “Provider” is broadly defined: in theory, every entity in the ecosystem involved in providing Generative AI Services to the Chinese public could be covered.  In practice, both developers of Generative AI Technologies that make their services available for other entities to deploy in China, and entities that actually use Generative AI Technologies to offer services to Chinese consumers, could be subject to the Measures, so long as the Generative AI Technologies are used by the public within China.  The term “used by the public in China” is not defined and will most likely be interpreted on a case-by-case basis until further regulatory guidance is issued.

Notably, the Measures exclude from their scope the development and use of Generative AI Services by enterprises, research and academic institutions, and other public institutions.  Nevertheless, as discussed above, the lines around this exception could blur if the term “used by the domestic public” is broadly interpreted. 

Finally, no provision in the Measures explicitly prohibits Chinese enterprises or consumers from using Generative AI Services provided by offshore Providers.  The Measures, however, state that Chinese regulators may take “technical measures and other necessary measures” to “deal with” offshore Generative AI Services that fail to “comply with the Chinese laws, regulations and the Generative AI Measures.”  (Article 20).  While this provision does not grant Chinese regulators the authority to regulate offshore Providers per se (for example, to audit their services), in practice it may pressure these Providers to comply with the Measures if they wish to remain in the market; otherwise, access to their Generative AI Services from China could be blocked.

Key Requirements for Generative AI Service Providers

The Generative AI Measures impose a wide range of obligations on Providers of Generative AI Services.  Some relate to the governance of such services, including algorithm training and product development; others relate more specifically to the manner in which the services are offered.  We highlight a few examples below:

  • Content Moderation:  Providers bear the responsibilities of “content producers” under the Measures.  (Article 9).  This means that if a Provider identifies that a user of its Generative AI Service is generating “illegal content” (a term not clearly defined by the Measures or other Chinese regulations), it must promptly take measures such as suspending the generation and transmission of the content and taking the content down.  In addition, the Provider must rectify the issue, including through model optimization, and must report the issue to regulators.  (Article 14).
  • Training Data:  The Measures impose several requirements on Providers related to training data.  For instance, data and “foundation models” used for training and optimization must be obtained from “legitimate sources.”  Providers are prohibited from infringing the intellectual property rights of others, and must process personal information with consent or another legal basis under Chinese law.  The Measures also state, at a high level, that Providers must improve the quality of training data and enhance its “authenticity, accuracy, objectivity and diversity.”  (Article 7).  It is less clear how these requirements should be implemented at the development stage and what types of supporting documents Chinese regulators would accept to substantiate Providers’ claims.
  • Labeling of Training Data:  At the development stage, if a Provider labels training data, it must formulate “clear, specific and practical” labeling rules.  The Provider must also undertake a quality assessment of its data labeling and conduct sample verification to check the accuracy of the labeled content.  (Article 8).
  • Tagging of Generated Content:  Consistent with the requirements under the Provisions on the Management of Deep Synthesis in Internet Information Services, Providers must add tags to content generated by Generative AI Services.  (Article 12).
  • User Protection:  The Measures reflect several requirements for the Providers regarding user rights and protections, including:
    • Personal Information Protection:  Providers must not collect unnecessary personal information, must not illegally retain input information and usage records that can identify users, and must not illegally provide users’ input information and usage records to others.  (Article 11). 
    • Complaints:  Providers must establish a mechanism for receiving and handling complaints from users.  Additionally, requests for access, copies, correction, or deletion of personal information from users should be handled in a timely manner.  (Articles 11 and 15).
  • Contracting:  Providers must implement a service agreement with the entity deploying its Generative AI Services.  The service agreement must specify the rights and obligations of the parties.  (Article 9).  There is no further guidance in the Measures on what needs to be included in such an agreement.
  • Security Assessment and Filing:  While the Measures do not specifically identify any high-risk services, Generative AI Services “with the attributes of public opinion or the capacity for social mobilization” are subject to the requirements to carry out a security assessment and conduct an algorithm filing.  (Article 17).  The precise scope of services subject to these requirements is not defined in the Measures, although based on other Chinese regulations, information services that “provide channels for the public to express their opinions or are capable of mobilizing the public to engage in specific activities” could be the regulators’ focus.  These services could include, for example, operating Internet forums, blogs, or chat rooms, or distributing information through public accounts, short videos, or webcasts.

Enforcement

The penalty provisions in the Measures are in line with existing Chinese laws such as the Cybersecurity Law, the Data Security Law and the Personal Information Protection Law.  In addition, Providers are required to cooperate with the “supervision and inspection” of regulators, including by “explaining the source, scale and types of training data; labeling rules and algorithmic mechanism,” and by providing necessary support and assistance to regulators.  (Article 19).

Yan Luo

Yan Luo advises clients on a broad range of regulatory matters in connection with data privacy and cybersecurity, antitrust and competition, as well as international trade laws in the United States, EU, and China.

Yan has significant experience assisting multinational companies navigating the rapidly-evolving Chinese cybersecurity and data privacy rules. Her work includes high-stakes compliance advice on strategic issues such as data localization and cross border data transfer, as well as data protection advice in the context of strategic transactions. She also advises leading Chinese technology companies on global data governance issues and on compliance matters in major jurisdictions such as the European Union and the United States.

Yan regularly contributes to the development of data privacy and cybersecurity rules and standards in China. She chairs Covington’s membership in two working groups of China’s National Information Security Standardization Technical Committee (“TC260”), and serves as an expert in China’s standard-setting group for Artificial Intelligence and Ethics.

Yan was named to Global Data Review’s “40 under 40” in 2018 and is frequently quoted by leading media outlets including the Wall Street Journal and the Financial Times.

Prior to joining the firm, Yan completed an internship with the Office of International Affairs of the U.S. Federal Trade Commission in Washington, DC. Her experiences in Brussels include representing major Chinese companies in trade, competition and public procurement matters before the European Commission and national authorities in EU Member States.

Yan is a Certified Information Privacy Professional (CIPP/Asia) by the International Association of Privacy Professionals and an active member of the American Bar Association’s Section of Antitrust Law.

Xuezi Dan

Xuezi Dan is an associate in the firm’s Beijing office. Her practice focuses on regulatory compliance, with a particular focus on data privacy and cybersecurity. Xuezi helps clients understand and navigate the increasingly complex privacy regulatory issues in China.

She also has experience advising clients on general corporate and antitrust matters.

Jayne Ponder

Jayne Ponder counsels national and multinational companies across industries on data privacy, cybersecurity, and emerging technologies, including Artificial Intelligence and Internet of Things.

In particular, Jayne advises clients on compliance with federal, state, and global privacy frameworks, and counsels clients on navigating the rapidly evolving legal landscape. Her practice includes partnering with clients on the design of new products and services, drafting and negotiating privacy terms with vendors and third parties, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of Artificial Intelligence and Internet of Things technologies.

Jayne routinely represents clients in privacy and consumer protection enforcement actions brought by the Federal Trade Commission and state attorneys general, including related to data privacy and advertising topics. She also helps clients articulate their perspectives through the rulemaking processes led by state regulators and privacy agencies.

As part of her practice, Jayne advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.