Emerging Technologies

2023 is set to be an important year for developments in AI regulation and policy in the EU. At the end of last year, on December 6, 2022, the Council of the EU (the “Council”) adopted its general approach and compromise text on the proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “AI Act”), bringing the AI Act one step closer to being adopted. The European Parliament is currently developing its own position on the AI Act which is expected to be finalized by March 2023. Following this, the Council, Parliament and European Commission (“Commission”) will enter into trilogue discussions to finalize the Act. Once adopted, it will be directly applicable across all EU Member States and its obligations are likely to apply three years after the AI Act’s entry into force (according to the Council’s compromise text).  

In 2022, the Commission also put forward new liability rules for AI systems via the proposed AI Liability Directive (“AILD”) and updates to the Product Liability Directive (“PLD”). The AILD establishes rules for non-contractual, fault-based civil claims involving AI systems. Specifically, the proposal establishes rules that would govern the preservation and disclosure of evidence in cases involving high-risk AI, as well as rules on the burden of proof and corresponding rebuttable presumptions. Meanwhile, the revised PLD harmonizes rules that apply to no-fault liability claims brought by persons who suffer physical injury or damage to property caused by defective products. Software, including AI systems, is explicitly named as a “product” under the proposal, meaning that an injured person can claim compensation for damage caused by AI (see our previous blog post for further details on the proposed AILD and PLD). Both pieces of legislation will be reviewed, and potentially amended, by the Council and the European Parliament in 2023.
Continue Reading EU AI Policy and Regulation: What to look out for in 2023

On December 1, 2022, a committee of the Brazilian Senate presented a report (currently available only in Portuguese) with research on the regulation of artificial intelligence (“AI”) and a draft AI law (see pages 15-58) (“Draft AI Law”) that will serve as the starting point for deliberations by the Senate on new AI legislation.  When preparing the 900+ page report and Draft AI Law, the Senate committee drew inspiration from earlier proposals for regulating AI in Brazil and its research into how OECD countries are regulating (or planning to regulate) in this area, as well as inputs received during a public hearing and in the form of written comments from stakeholders.  This blog post highlights 13 key aspects of the Draft AI Law.
Continue Reading Brazil’s Senate Committee Publishes AI Report and Draft AI Law

Many employers and employment agencies have turned to artificial intelligence (“AI”) tools to assist them in making better and faster employment decisions, including in the hiring and promotion processes.  The use of AI for these purposes has been scrutinized and will now be regulated in New York City.  The New York City Department of Consumer and Worker Protection (“DCWP”) recently issued a Notice of Public Hearing and Opportunity to Comment on Proposed Rules relating to the implementation of New York City’s law regulating the use of automated employment decision tools (“AEDT”) by NYC employers and employment agencies.  As detailed further below, the comment period is open until October 24, 2022.
Continue Reading Artificial Intelligence & NYC Employers:  New York City Seeks Public Comment on Proposed Rules That Would Regulate the Use of AI Tools in the Employment Context

Today, the Federal Trade Commission (FTC) announced that it anticipates proposing a privacy rulemaking this month, with comments closing in August.  This announcement follows the agency’s statement in December that it planned to begin a rulemaking to “curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does

Continue Reading FTC Announces Plans to Begin Privacy Rulemaking In June

In January 2022, China released two regulations (one in draft form) that touch on hot topics in technological development – algorithmic recommendations and deep synthesis – making it one of the first countries in the world to directly tackle these cutting edge areas.  In this post, we provide an overview
Continue Reading China Takes the Lead on Regulating Novel Technologies: New Regulations on Algorithmic Recommendations and Deep Synthesis Technologies

On December 10th, the Federal Trade Commission (FTC) published a Statement of Regulatory Priorities that announced the agency’s intent to initiate rulemakings on issues such as privacy, security, algorithmic decision-making, and unfair methods of competition.
Continue Reading FTC Announces Regulatory Priorities for Both Privacy and Competition

In April 2021, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “Regulation”), which would establish rules on the development, placing on the market, and use of artificial intelligence systems (“AI systems”) across the EU. The proposal, comprising 85 articles and nine annexes, is part of a wider package of Commission initiatives aimed at positioning the EU as a world leader in trustworthy and ethical AI and technological innovation.

The Commission’s objectives with the Regulation are twofold: to promote the development of AI technologies and harness their potential benefits, while also protecting individuals against potential threats to their health, safety, and fundamental rights posed by AI systems. To that end, the Commission proposal focuses primarily on AI systems identified as “high-risk,” but also prohibits three AI practices and imposes transparency obligations on providers of certain non-high-risk AI systems as well. Notably, it would impose significant administrative costs on providers of high-risk AI systems, estimated at around 10 percent of the AI system’s underlying value, arising from compliance, oversight, and verification obligations. This blog highlights several key aspects of the proposal.
Continue Reading European Commission Proposes New Artificial Intelligence Regulation

Last week, the Ninth Circuit ruled in Lemmon v. Snap, Inc., No. 20-55295 (May 4, 2021), that 47 U.S.C. § 230 (“Section 230”) did not bar a claim of negligent product design against Snap, Inc., reversing and remanding a lower court ruling.
Continue Reading Ninth Circuit Denies Section 230 Defense in Products Liability Case

A number of legislative proposals to amend Section 230 of the 1996 Communications Decency Act (“Section 230”) have already been introduced in the new Congress.  Section 230 provides immunity to an owner or user of an “interactive computer service” — generally understood to encompass internet platforms and websites — from liability for content posted by a third party.

On February 8, 2021, Senator Mark Warner (D-VA) introduced the Safeguarding Against Fraud, Exploitation, Threats, Extremism, and Consumer Harms Act (“SAFE TECH Act”), cosponsored by Senators Amy Klobuchar (D-MN) and Mazie Hirono (D-HI).  The bill would narrow the scope of immunity that has been applied to online platforms.  Specifically, the SAFE TECH Act would amend Section 230 in the following ways:
Continue Reading SAFE TECH Act Would Limit Scope and Redesign Framework of Section 230 Immunity

On October 9, 2020, the French Supervisory Authority (“CNIL”) issued guidance on the use of facial recognition technology for identity checks at airports (available here, in French).  The CNIL indicates that it has issued this guidance in response to a request from several operators and service providers of airports in France who are planning to deploy this technology on an experimental basis.  In this blog post, we summarize the main principles that the CNIL says airports should observe when deploying biometric technology.
Continue Reading French Supervisory Authority Releases Strict Guidance on the Use of Facial Recognition Technology at Airports