On October 30, 2023, days before government leaders convened in the UK for an international AI Safety Summit, the White House issued an Executive Order (“EO”) outlining an expansive strategy to support the development and deployment of safe and secure AI technologies (for further details on the EO, see our blog here). As readers will be aware, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the EU “AI Act”) in 2021 (see our blog here). EU lawmakers are currently negotiating changes to the Commission text, with hopes of finalizing it by the end of this year, although many of its obligations would only begin to apply to regulated entities in 2026 or later.
The EO and the AI Act stand as two important developments shaping the future of global AI governance and regulation. This blog post discusses key similarities and differences between the two.
Comparison of Approaches
Through the AI Act, the EU is seeking to implement a new regulation, modeled on EU product-safety legislation, that would impose a detailed set of technical and organizational requirements on “providers” and “users” of AI systems. Providers of “high-risk” AI systems would bear the bulk of the obligations, ranging from data governance, training, testing, and validation to conformity assessments, risk management systems, and post-market monitoring. The Act would also prohibit some uses of AI systems altogether and impose transparency obligations on others.
In contrast, the EO does not create new legislative obligations. Rather, it issues a series of directives to government agencies, including instructing the Department of Commerce to develop rules requiring disclosures from companies that develop or provide infrastructure for AI models in certain circumstances. The EO is also broader in scope than the AI Act in some respects: for instance, it covers social issues such as advancing equity and civil rights and protecting workers; sets out requirements related to attracting and retaining highly skilled AI workers; and directs the State Department to lead an effort to establish international frameworks governing AI.
Another difference between the AI Act and the EO relates to enforcement. The proposed AI Act includes a complex oversight and enforcement regime, and infringements of the Act could incur penalties of up to EUR 30 million or 2-6% of global annual turnover (whichever is higher), depending on the violation. The EO, by contrast, does not contain enforcement provisions.
Areas of Common Ground
- Focus on High-Risk AI Systems
The proposed AI Act adopts a risk-based approach and imposes the most significant compliance requirements on providers of AI systems that it classifies as “high-risk”. As noted above, high-risk systems are subject to a number of obligations, including requirements that they be designed to enable record-keeping; allow for human oversight; and achieve an appropriate level of accuracy, robustness, and cybersecurity. Notably, the EU Parliament’s version of the AI Act (see our blog post here) proposes introducing specific obligations for “foundation models” in addition to those for high-risk AI systems. The EU Parliament text defines “foundation models” as AI models that are “trained on broad data at scale, … designed for generality of output, and can be adapted to a wide range of distinctive tasks”.
The EO similarly focuses on high-risk AI systems: it requires developers of certain dual-use foundation models to share safety test results, including from red-team safety tests, and other critical information with the U.S. government. These red-teaming and reporting requirements are scoped to models that present a “serious risk” to security, national economic security, or national public health and safety, and that also meet certain technical conditions outlined in Section 4.2(b)(i) of the EO.
- Transparency and Labeling Requirements
The proposed AI Act requires providers of AI systems intended to interact with natural persons to design those systems so that people know they are interacting with an AI system. Similarly, users of AI systems that perform “emotion recognition” or “biometric categorisation” must inform the people exposed to them, and users of AI systems that generate or manipulate image, audio, or video “deepfakes” must disclose that the content is not authentic.
The EO also addresses transparency requirements for AI-generated content: it requires the Secretary of Commerce, together with other relevant agencies, to submit a report within 240 days identifying standards, tools, methods, and practices for (i) authenticating content and tracking its provenance; (ii) labeling synthetic content, such as by watermarking; (iii) detecting synthetic content; (iv) preventing generative AI from producing child sexual abuse material (“CSAM”) or non-consensual intimate imagery of real individuals; (v) testing software for such purposes; and (vi) auditing and maintaining synthetic content. Following the report, the Director of the Office of Management and Budget (“OMB”), in coordination with the heads of other relevant agencies, must issue guidance to agencies for labeling and authenticating synthetic content.
- AI Standards and Sandboxes
The proposed AI Act provides for the creation of AI regulatory “sandboxes” (controlled environments intended to encourage developers to test new technologies for a limited period of time, with a view to complying with the Act) and the development of harmonized technical standards for the design, development, and deployment of AI systems. The EO similarly calls for the creation of new standards: it directs the U.S. National Institute of Standards and Technology to issue guidelines for AI development with the aim of promoting consensus industry standards, and requires the Secretary of Energy to implement a plan for the development of AI model evaluation tools and AI testbeds. As part of its efforts related to international leadership, the EO also directs the Secretary of State to establish a plan for global engagement on promoting and developing AI standards. Collaboration in this area between the U.S. and the EU will undoubtedly be facilitated through the U.S.-EU Trade and Technology Council’s joint Roadmap for Trustworthy AI and Risk Management of December 2022, which aims to advance collaborative approaches in international standards bodies related to AI (for further details, see our blog).
The Covington team continues to monitor regulatory developments on AI, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about AI regulation or other tech regulatory matters, we would be happy to assist.