Artificial Intelligence (AI)

On April 11, 2023, the Cyberspace Administration of China (“CAC”) released draft Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能服务管理办法(征求意见稿)》) (“draft Measures”) (official Chinese version available here) for public consultation.  The deadline for submitting comments is May 10, 2023.

The draft Measures would regulate generative Artificial Intelligence (“AI”) services that are “provided to the public in mainland China.”  These requirements cover a wide range of issues that are frequently debated in relation to the governance of generative AI globally, such as data protection, non-discrimination, bias, and the quality of training data.  The draft Measures also highlight issues arising from the use of generative AI that are of particular concern to the Chinese government, such as content moderation, the completion of a security assessment for new technologies, and algorithmic transparency.  The draft Measures thus reflect the Chinese government’s objective to craft its own governance model for new technologies such as generative AI.

Further, and notwithstanding the requirements introduced by the draft Measures (as described in greater detail below), the text states that the government encourages the indigenous development of, and international cooperation in relation to, generative AI technology, and encourages companies to adopt “secure and trustworthy software, tools, computing and data resources” to that end.

Notably, the draft Measures do not distinguish between generative AI services offered to individual consumers and those offered to enterprise customers, although certain requirements appear to be directed more at consumer-facing services than at enterprise services.

This blog post identifies a few highlights of the draft Measures.

2023 is set to be an important year for developments in AI regulation and policy in the EU. At the end of last year, on December 6, 2022, the Council of the EU (the “Council”) adopted its general approach and compromise text on the proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “AI Act”), bringing the AI Act one step closer to being adopted. The European Parliament is currently developing its own position on the AI Act, which is expected to be finalized by March 2023. Following this, the Council, Parliament, and European Commission (“Commission”) will enter into trilogue discussions to finalize the Act. Once adopted, the AI Act will be directly applicable across all EU Member States, and its obligations are likely to apply three years after the Act’s entry into force (according to the Council’s compromise text).

In 2022, the Commission also put forward new liability rules for AI systems via the proposed AI Liability Directive (“AILD”) and updates to the Product Liability Directive (“PLD”). The AILD establishes rules for non-contractual, fault-based civil claims involving AI systems. Specifically, the proposal establishes rules that would govern the preservation and disclosure of evidence in cases involving high-risk AI, as well as rules on the burden of proof and corresponding rebuttable presumptions. Meanwhile, the revised PLD harmonizes rules that apply to no-fault liability claims brought by persons who suffer physical injury or damage to property caused by defective products. Software, including AI systems, is explicitly named as a “product” under the proposal, meaning that an injured person can claim compensation for damage caused by AI (see our previous blog post for further details on the proposed AILD and PLD). Both pieces of legislation will be reviewed, and potentially amended, by the Council and the European Parliament in 2023.

On December 1, 2022, a committee of the Brazilian Senate presented a report (currently available only in Portuguese) with research on the regulation of artificial intelligence (“AI”) and a draft AI law (see pages 15-58) (“Draft AI Law”) that will serve as the starting point for deliberations by the Senate on new AI legislation.  When preparing the 900+ page report and Draft AI Law, the Senate committee drew inspiration from earlier proposals for regulating AI in Brazil and its research into how OECD countries are regulating (or planning to regulate) in this area, as well as inputs received during a public hearing and in the form of written comments from stakeholders.  This blog post highlights 13 key aspects of the Draft AI Law.

On October 13, 2022, the European Data Protection Supervisor (“EDPS”) released its Opinion 20/2022 on a Recommendation issued by the European Commission in August 2022 calling for a Council Decision authorising the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law.

Many employers and employment agencies have turned to artificial intelligence (“AI”) tools to assist them in making better and faster employment decisions, including in the hiring and promotion processes.  The use of AI for these purposes has been scrutinized and will now be regulated in New York City.  The New York City Department of Consumer and Worker Protection (“DCWP”) recently issued a Notice of Public Hearing and Opportunity to Comment on Proposed Rules relating to the implementation of New York City’s law regulating the use of automated employment decision tools (“AEDT”) by NYC employers and employment agencies.  As detailed further below, the comment period is open until October 24, 2022.

With the growing use of AI systems and the increasing complexity of the legal framework relating to such use, the need for appropriate methods and tools to audit AI systems is becoming more pressing, both for professionals and for regulators. The French Supervisory Authority (“CNIL”) has recently tested tools that could potentially help its auditors.

The UK Government recently published its AI Governance and Regulation: Policy Statement (the “AI Statement”), setting out its proposed approach to regulating Artificial Intelligence (“AI”) in the UK. The AI Statement was published alongside the draft Data Protection and Digital Information Bill (see our blog post here for further details on the Bill).

On July 5, 2022, the Cybersecurity and Infrastructure Security Agency (“CISA”) and the National Institute of Standards and Technology (“NIST”) strongly recommended that organizations begin preparing to transition to a post-quantum cryptographic standard.  Post-quantum cryptography, often referred to as “quantum-resistant cryptography,” includes “cryptographic algorithms or methods that are assessed not to be specifically vulnerable to attack by” a cryptanalytically relevant quantum computer (“CRQC”) or a classical computer.  NIST “has announced that a new post-quantum cryptographic standard will replace current public-key cryptography, which is vulnerable to quantum-based attacks.”  NIST does not intend to publish the new post-quantum cryptographic standard for commercial products until 2024, but urges companies to begin preparing now by following the Post-Quantum Cryptography Roadmap.

The National Institute of Standards and Technology (“NIST”) issued its initial draft of the “AI Risk Management Framework” (“AI RMF”), which aims to provide voluntary, risk-based guidance on the design, development, and deployment of AI systems.  NIST is seeking public comments on this draft via email, at AIframework@nist.gov, through April 29, 2022.  Feedback received on this draft will be incorporated into the second draft of the framework, which will be issued this summer or fall.

This quarterly update summarizes key federal legislative and regulatory developments in the first quarter of 2022 related to artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), and data privacy, and highlights a few particularly notable developments in the States.  In the first quarter of 2022, Congress and the Administration focused on required assessments and funding for AI, restrictions on targeted advertising using personal data collected from individuals and connected devices, creating rules to enhance CAV safety, and children’s privacy topics.