On April 11, 2023, the Cyberspace Administration of China (“CAC”) released draft Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能服务管理办法(征求意见稿)》) (“draft Measures”) for public consultation.  The deadline for submitting comments is May 10, 2023.

The draft Measures would regulate generative Artificial Intelligence (“AI”) services that are “provided to the public in mainland China.”  These requirements cover a wide range of issues that are frequently debated in relation to the governance of generative AI globally, such as data protection, non-discrimination, bias and the quality of training data.  The draft Measures also highlight issues arising from the use of generative AI that are of particular concern to the Chinese government, such as content moderation, the completion of a security assessment for new technologies, and algorithmic transparency.  The draft Measures thus reflect the Chinese government’s objective to craft its own governance model for new technologies such as generative AI.

Further, and notwithstanding the requirements introduced by the draft Measures (described in greater detail below), the text states that the government encourages the indigenous development of, and international cooperation in relation to, generative AI technology, and encourages companies to adopt “secure and trustworthy software, tools, computing and data resources” to that end.

Notably, the draft Measures do not distinguish between generative AI services offered to individual consumers and those offered to enterprise customers, although certain requirements appear to be directed more at consumer-facing services than at enterprise services.

This blog post identifies a few highlights of the draft Measures.

Definition and Scope

The draft Measures apply to “research and development into, as well as the use of, generative AI” that is offered to “the public” within the territory of China.  Generative AI is defined as technology that “generates content in the form of text(s), picture(s), audio, video(s) and code(s) based on algorithms, models, and rules.” (Article 2)

It is unclear from the wording of the draft Measures whether “the public” refers to consumers in China, which would exclude generative AI services offered to enterprises from their scope.  It is also unclear whether providers of generative AI located outside of China that do not specifically target the Chinese market will be subject to these rules.

The draft Measures define “a provider of generative AI” as an entity or individual that uses generative AI products to provide services such as chat and the generation of text, images, and audio.  Note that this definition includes service providers that enable others to generate such content through APIs or other means. (Article 5)  The draft Measures do not distinguish between providers of generative AI offering back-end technologies and those that build services at the application level.  Both are responsible for content produced by generative AI products and are required to protect personal information in accordance with China’s Personal Information Protection Law (“PIPL”).

Content Moderation

Article 4 of the draft Measures requires providers of generative AI to adhere to the following principles:

  1. ensure that content created by generative AI is consistent with the “social order and societal morals,” and does not endanger national security;
  2. adopt measures to avoid discrimination when designing algorithms, training data sets, or providing services that incorporate generative AI;
  3. ensure that content created by generative AI is true, accurate, and free of fraudulent information; and
  4. respect intellectual property and comply with all other applicable laws and regulations.

Providers of generative AI are required to adopt measures to filter any inappropriate content created by generative AI, and to optimize their algorithms within three months to prevent the generation of such content. (Article 15)  Providers of generative AI are also required to tag content (such as images and video) created by generative AI in accordance with the Provisions on the Management of Deep Synthesis of Internet Information Services (《互联网信息服务深度合成管理规定》). (Article 16)

Security Assessment and Filing

Before offering a generative AI service to the public at large, under the draft Measures a provider must apply to the CAC for a security assessment in accordance with the Provisions on the Security Assessment of Internet Information Services with Characteristics of Opinions or Capable of Social Mobilization (《具有舆论属性或社会动员能力的互联网信息服务安全评估规定》) (“Assessment Provisions”). (Article 6)

The Assessment Provisions were released in 2018 with the objective to govern Internet information services such as public forums, live streaming, and other types of information-sharing activities online.  Under the Assessment Provisions, in-scope service providers are required to carry out a self-assessment or engage a third-party agency to carry out the assessment.  Factors to be considered in the assessment largely overlap with the requirements provided under the draft Measures, including, for instance: (1) verification of the real identity of users; (2) technical measures adopted to protect personal information; and (3) internal mechanisms for content review.

Providers of generative AI are also required to file certain information regarding their use of algorithms with the CAC in accordance with the requirements provided under the Provisions on the Management of Algorithm Recommendation of Internet Information Services (《互联网信息服务算法推荐管理规定》) (Article 6) – including, for instance, the name of the service provider, the form of service, the algorithm type, and an algorithm self-assessment report.

Protection of the Rights and Interests of End Users

Providers of generative AI are required to ask end users to provide real identity information. (Article 9)  Further, such providers must specify the targeted end users and use cases of the services provided, and adopt measures to prevent end users from becoming addicted to the services. (Article 10)

Providers of generative AI are also required to disclose information that might impact users’ choices, including a “description of the source, scale, type, quality, and other details of pre-training and optimized-training data, rules for manual labeling, the scale and types of manually-labeled data, as well as fundamental algorithms and technical systems.”  At present, it is unclear how such information should be disclosed or to whom such information needs to be disclosed. (Article 17)

Providers of generative AI are further required to protect data submitted by end users, as well as the activity logs of end users.  Providers are prohibited from conducting user profiling or sharing information related to end users with third parties. (Article 11)

Providers of generative AI must also establish a mechanism to intake and review complaints from end users. (Article 13)

Finally, providers of generative AI should “guide” end users to properly utilize generative AI and not to use it to “damage the image, reputation, or other legitimate rights and interests of others, and do not engage in commercial hype or improper marketing.” (Article 18) If a provider of generative AI discovers improper use of the technology by its users, it should suspend or terminate the services provided to such end users. (Article 19) In addition, an end user can report a provider of generative AI to the CAC if the generated content does not comply with the requirements of the draft Measures. (Article 18)

Discrimination and Training Data

Article 7 of the draft Measures also imposes obligations on the research and development of generative AI.  Specifically, providers of generative AI must ensure that data used for training and optimization is obtained through legal means, and such data must:

  1. comply with requirements stipulated by the Cybersecurity Law;
  2. not contain content that infringes intellectual property;
  3. if it constitutes personal information, be obtained on the basis of consent from data subjects, or otherwise comply with the requirements provided under applicable Chinese laws and regulations;
  4. be accurate, objective and sufficiently diverse; and
  5. comply with other regulatory requirements related to generative AI released by the CAC.

Providers of generative AI must also define clear rules for data annotation and train employees involved in such annotation. (Article 8)  Further, providers of generative AI should not generate discriminatory content based on the race, nationality, gender or other characteristics of the user. (Article 12)

Penalties

Article 20 of the draft Measures specifies that a provider of generative AI that violates the requirements provided under the draft Measures will be penalized in accordance with the Personal Information Protection Law, the Cybersecurity Law, the Data Security Law, or other relevant regulations.  If these laws and regulations do not specify a particular penalty, a violator may receive a warning, be ordered to take corrective action or to suspend services, be fined, or be held criminally liable.

Yan Luo

Yan Luo advises clients on a broad range of regulatory matters in connection with data privacy and cybersecurity, antitrust and competition, as well as international trade laws in the United States, EU, and China.

Yan has significant experience assisting multinational companies navigating the rapidly-evolving Chinese cybersecurity and data privacy rules. Her work includes high-stakes compliance advice on strategic issues such as data localization and cross border data transfer, as well as data protection advice in the context of strategic transactions. She also advises leading Chinese technology companies on global data governance issues and on compliance matters in major jurisdictions such as the European Union and the United States.

Yan regularly contributes to the development of data privacy and cybersecurity rules and standards in China. She chairs Covington’s membership in two working groups of China’s National Information Security Standardization Technical Committee (“TC260”), and serves as an expert in China’s standard-setting group for Artificial Intelligence and Ethics.

Yan was named one of Global Data Review’s “40 under 40” in 2018 and is frequently quoted by leading media outlets, including the Wall Street Journal and the Financial Times.

Prior to joining the firm, Yan completed an internship with the Office of International Affairs of the U.S. Federal Trade Commission in Washington, DC. Her experiences in Brussels include representing major Chinese companies in trade, competition and public procurement matters before the European Commission and national authorities in EU Member States.

Yan is a Certified Information Privacy Professional (CIPP/Asia) by the International Association of Privacy Professionals and an active member of the American Bar Association’s Section of Antitrust Law.

Xuezi Dan

Xuezi Dan is an associate in the firm’s Beijing office. Her practice focuses on regulatory compliance, with a particular focus on data privacy and cybersecurity. Xuezi helps clients understand and navigate the increasingly complex privacy regulatory issues in China.

She also has experience advising clients on general corporate and antitrust matters.

Nicholas Shepherd

Nicholas Shepherd is an associate in Covington’s Washington, DC office, where he is a member of the Data Privacy and Cybersecurity Practice Group, advising clients on compliance with all aspects of the European General Data Protection Regulation (GDPR), ePrivacy Directive, European direct marketing laws, and other privacy and cybersecurity laws worldwide. Nick counsels on topics that include adtech, anonymization, children’s privacy, cross-border transfer restrictions, and much more, providing advice tailored to product- and service-specific contexts to help clients apply a risk-based approach in addressing requirements in relation to transparency, consent, lawful processing, data sharing, and others.

A U.S.-trained and qualified lawyer with 7 years of working experience in Europe, Nick leverages his multi-faceted legal background and international experience to provide clear and pragmatic advice to help organizations address their privacy compliance obligations across jurisdictions.