AI

On December 19, 2023, the Federal Trade Commission (“FTC”) announced that it reached a settlement with Rite Aid Corporation and Rite Aid Headquarters Corporation (collectively, “Rite Aid”) to resolve allegations that the companies violated Section 5 of the FTC Act (as well as a prior settlement with the agency) by failing to implement reasonable procedures to prevent harm to consumers while using facial recognition technology.  As part of the settlement, Rite Aid agreed to cease using “Facial Recognition or Analysis Systems” (defined below) for five years and to establish a monitoring program to address certain risks if it seeks to use such systems for certain purposes in the future. Continue Reading Rite Aid Settles FTC Allegations Regarding Use of Facial Recognition Technology

On December 9, 2023, the European Parliament, the Council of the European Union and the European Commission reached a political agreement on the EU Artificial Intelligence Act (“AI Act”) (see here for the Parliament’s press statement, here for the Council’s statement, and here for the Commission’s statement). Following three days of intense negotiations, during the fifth “trilogue” discussions amongst the EU institutions, negotiators reached an agreement on key topics, including: (i) the scope of the AI Act; (ii) AI systems classified as “high-risk” under the Act; and (iii) law enforcement exemptions.

As described in our previous blog posts on the AI Act (see here, here, and here), the Act will establish a comprehensive, horizontal framework governing the development, import, deployment and use of AI systems in the EU. In this blog post, we provide a high-level summary of the main points EU legislators appear to have agreed upon, based on the press releases linked above and a further Q&A published by the Commission. However, the text of the political agreement is not yet publicly available. Further, although a political agreement has been reached, a number of details remain to be finalized in follow-up technical working meetings over the coming weeks. Continue Reading EU Artificial Intelligence Act: Nearing the Finish Line

Ahead of its December 8 board meeting, the California Privacy Protection Agency (CPPA) has issued draft risk assessment regulations.  The CPPA has yet to initiate the formal rulemaking process and has stated that it expects to begin formal rulemaking next year, at which time it will also consider draft regulations covering “automated decisionmaking technology” (ADMT), cybersecurity audits, and revisions to existing regulations.  Accordingly, the draft risk assessment regulations are subject to change.  Below are the key takeaways: Continue Reading CPPA Releases Draft Risk Assessment Regulations

Ahead of its December 8 board meeting, the California Privacy Protection Agency (CPPA) has issued draft “automated decisionmaking technology” (ADMT) regulations.  The CPPA has yet to initiate the formal rulemaking process and has stated that it expects to begin formal rulemaking next year.  Accordingly, the draft ADMT regulations are subject to change.  Below are the key takeaways: Continue Reading CPPA Releases Draft Automated Decisionmaking Technology Regulations

On October 30, 2023, days ahead of government leaders convening in the UK for an international AI Safety Summit, the White House issued an Executive Order (“EO”) outlining an expansive strategy to support the development and deployment of safe and secure AI technologies (for further details on the EO, see our blog here). As readers will be aware, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the EU “AI Act”) in 2021 (see our blog here). EU lawmakers are currently negotiating changes to the Commission text, with hopes of finalizing the text by the end of this year, although many of its obligations would only begin to apply to regulated entities in 2026 or later.

The EO and the AI Act stand as two important developments shaping the future of global AI governance and regulation. This blog post discusses key similarities and differences between the two. Continue Reading From Washington to Brussels: A Comparative Look at the Biden Administration’s Executive Order and the EU’s AI Act

Earlier today, the White House issued a Fact Sheet summarizing its Executive Order on a comprehensive strategy to support the development of safe and secure artificial intelligence (“AI”).  The Executive Order follows a number of actions by the Biden Administration on AI, including its Blueprint for an AI Bill of Rights and voluntary commitments from certain developers of AI systems.  According to the Administration, the Executive Order establishes new AI safety and security standards, protects privacy, advances equity and civil rights, protects workers, consumers, and patients, promotes innovation and competition, and advances American leadership.  This blog post summarizes these key components. Continue Reading Biden Administration Announces Artificial Intelligence Executive Order

On October 3, the Federal Trade Commission (“FTC”) released a blog post titled Consumers Are Voicing Concerns About AI, which discusses consumer concerns about artificial intelligence (“AI”) that the FTC received via its Consumer Sentinel Network, as well as priority areas the agency is watching.  Although the FTC’s blog post acknowledged that it did not investigate…

This quarterly update summarizes key legislative and regulatory developments in the third quarter of 2023 involving Artificial Intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity. Continue Reading U.S. Tech Legislative & Regulatory Update – Third Quarter 2023

On September 8, 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), Chair and Ranking Member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, announced a new bipartisan framework for artificial intelligence (“AI”) legislation.  Senator Blumenthal said, “This bipartisan framework is a milestone – the first tough, comprehensive legislative blueprint for real, enforceable AI protections. It should put us on a path to addressing the promise and peril AI portends.” He also told CTInsider that he hopes to have a “detailed legislative proposal” ready for Congress by the end of this year. Continue Reading Senators Release Bipartisan Framework for AI Legislation

On July 13, 2023, the Cyberspace Administration of China (“CAC”), in conjunction with six other agencies, jointly issued the Interim Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能服务管理暂行办法》) (“Generative AI Measures” or “Measures”) (official Chinese version here).  The Generative AI Measures are set to take effect on August 15, 2023.

As the first comprehensive regulation of generative AI in China, the Measures cover a wide range of topics touching upon how Generative AI Services are developed and how such services can be offered.  These topics range from AI governance, training data, and tagging and labeling to data protection and user rights.  In this blog post, we spotlight a few of the most important points that could potentially affect a company’s decision to develop and deploy its Generative AI Services in China.

This final version follows an initial draft that was released for public consultation in April 2023 (see our previous post here). Several requirements were removed from the April 2023 draft, including, for example, the prohibition on user profiling, the requirement for user real-name verification, and the requirement to take measures within three months, through model optimization training, to prevent illegal content from being generated again.  However, several provisions in the final version remain vague (potentially by design) and leave room for future regulatory guidance as the generative AI landscape continues to evolve. Continue Reading Key Takeaways from China’s Finalized Generative Artificial Intelligence Measures