On October 30, 2023, days ahead of government leaders convening in the UK for an international AI Safety Summit, the White House issued an Executive Order (“EO”) outlining an expansive strategy to support the development and deployment of safe and secure AI technologies (for further details on the EO, see our blog here). As readers will be aware, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the EU “AI Act”) in 2021 (see our blog here). EU lawmakers are currently negotiating changes to the Commission’s text, with hopes of finalizing it by the end of this year, although many of its obligations would only begin to apply to regulated entities in 2026 or later.

The EO and the AI Act stand as two important developments shaping the future of global AI governance and regulation. This blog post discusses key similarities and differences between the two.

Comparison in Approach

Through the AI Act, the EU is seeking to implement a new regulation, modeled on EU product-safety legislation, that would impose a detailed set of technical and organizational requirements on “providers” and “users” of AI systems. Providers of “high-risk” AI systems would bear the bulk of obligations, from data governance, training, testing and validation, to conformity assessments, risk management systems, and post-market monitoring. The Act would also prohibit some uses of AI systems altogether, and impose transparency obligations on others.

In contrast, the EO does not create new legislative obligations. Rather, it introduces a number of directions for government agencies, including instructing the Department of Commerce to develop rules requiring disclosures from companies that develop or provide infrastructure for AI models under certain circumstances. The EO is also broader in scope than the AI Act in some respects; for instance, it covers social issues such as advancing equity and civil rights and protecting workers; outlines requirements related to attracting and retaining highly skilled AI workers; and directs the State Department to lead an effort to establish international frameworks governing AI.

Another difference between the AI Act and the EO relates to enforcement. The proposed AI Act includes a complex oversight and enforcement regime, and infringements of the Act could incur penalties of up to EUR 30 million or 2% to 6% of global annual turnover, depending on the violation. The EO, by contrast, does not contain enforcement provisions.

Areas of Common Ground

  • Focus on High-Risk AI Systems

The proposed AI Act adopts a risk-based approach and imposes the most significant compliance requirements on providers of AI systems that it classifies as “high-risk”. As noted above, high-risk systems are subject to a number of obligations, including requirements that they be designed to enable record-keeping; allow for human oversight; and achieve an appropriate level of accuracy, robustness and cybersecurity. Notably, the EU Parliament’s version of the AI Act (see our blog post here) proposes introducing specific obligations for “foundation models” in addition to high-risk AI systems. The EU Parliament text defines “foundation models” as AI models that are “trained on broad data at scale, … designed for generality of output, and can be adapted to a wide range of distinctive tasks”.

The EO similarly focuses on high-risk AI systems by requiring developers of certain dual-use foundation models to share safety test results, including from red-team safety tests, and other critical information with the U.S. government. The red-teaming and reporting requirements are scoped to models that present a “serious risk” to national security and related interests and that also meet certain technical thresholds outlined in Section 4.2(b)(i) of the EO.

  • Transparency and Labeling Requirements

The proposed AI Act requires providers of AI systems intended to interact with natural persons to develop them in such a way that people know they are interacting with the system. Similarly, users of AI systems that engage in “emotion recognition” and “biometric categorisation” must inform people who are exposed to them, and users of AI systems that generate or manipulate images, audio, or video “deepfakes” must disclose that the content is not authentic.

The EO also addresses transparency requirements for AI-generated content by requiring the Secretary of Commerce, together with other relevant agencies, to submit a report within 240 days identifying standards, tools, methods, and practices for (i) authenticating content and tracking its provenance, (ii) labeling synthetic content, such as by watermarking, (iii) detecting synthetic content, (iv) preventing generative AI from producing child sexual abuse material (“CSAM”) or non-consensual intimate imagery of real individuals, (v) testing software for such purposes, and (vi) auditing and maintaining synthetic content. Following the report, the Director of the Office of Management and Budget (“OMB”), in coordination with the heads of other relevant agencies, must issue guidance to agencies on the labeling and authentication of synthetic content.

  • AI Standards and Sandboxes

The proposed AI Act provides for the creation of AI regulatory “sandboxes” (controlled environments intended to encourage developers to test new technologies for a limited period of time, with a view to complying with the Act) and the development of harmonized technical standards for the design, development and deployment of AI systems. The EO similarly requires the creation of new standards by directing the U.S. National Institute of Standards and Technology (“NIST”) to issue guidelines for AI development with the aim of promoting consensus with industry standards, and requiring the Secretary of Energy to implement a plan for the development of AI model evaluation tools and AI testbeds. As part of efforts related to international leadership, the EO also directs the Secretary of State to establish a plan for global engagement on promoting and developing AI standards. Collaboration in this area between the U.S. and EU will undoubtedly be facilitated through the U.S.-EU Trade and Technology Council’s joint Roadmap for Trustworthy AI and Risk Management of December 2022, which aims to advance collaborative approaches in international standards bodies related to AI (for further details, see our blog).


The Covington team continues to monitor regulatory developments on AI, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about AI regulation, or other tech regulatory matters, we are happy to assist.

Marianna Drake

Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.

Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under EU and U.S. law, UK law, the World Trade Organization agreements, and other trade agreements.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory compliance and investigations alongside legislative advocacy. In this context, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”

Will Capstick

Will Capstick is a Trainee who attended BPP Law School.

Jayne Ponder

Jayne Ponder counsels national and multinational companies across industries on data privacy, cybersecurity, and emerging technologies, including Artificial Intelligence and Internet of Things.

In particular, Jayne advises clients on compliance with federal, state, and global privacy frameworks, and counsels clients on navigating the rapidly evolving legal landscape. Her practice includes partnering with clients on the design of new products and services, drafting and negotiating privacy terms with vendors and third parties, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of Artificial Intelligence and Internet of Things technologies.

Jayne routinely represents clients in privacy and consumer protection enforcement actions brought by the Federal Trade Commission and state attorneys general, including related to data privacy and advertising topics. She also helps clients articulate their perspectives through the rulemaking processes led by state regulators and privacy agencies.

As part of her practice, Jayne advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Yaron Dori

Yaron Dori has over 25 years of experience advising technology, telecommunications, media, life sciences, and other types of companies on their most pressing business challenges. He is a former chair of the firm’s technology, communications and media practices and currently serves on the firm’s eight-person Management Committee.

Yaron’s practice advises clients on strategic planning, policy development, transactions, investigations and enforcement, and regulatory compliance.

Early in his career, Yaron advised telecommunications companies and investors on regulatory policy and frameworks that led to the development of broadband networks. When those networks became bidirectional and enabled companies to collect consumer data, he advised those companies on their data privacy and consumer protection obligations. Today, as new technologies such as Artificial Intelligence (AI) are being used to enhance the applications and services offered by such companies, he advises them on associated legal and regulatory obligations and risks. It is this varied background – which tracks the evolution of the technology industry – that enables Yaron to provide clients with a holistic, 360-degree view of technology policy, regulation, compliance, and enforcement.

Yaron represents clients before federal regulatory agencies—including the Federal Communications Commission (FCC), the Federal Trade Commission (FTC), and the Department of Commerce (DOC)—and the U.S. Congress in connection with a range of issues under the Communications Act, the Federal Trade Commission Act, and similar statutes. He also represents clients on state regulatory and enforcement matters, including those that pertain to telecommunications, data privacy, and consumer protection regulation. His deep experience in each of these areas enables him to advise clients on a wide range of technology regulations and key business issues in which these areas intersect.

With respect to technology and telecommunications matters, Yaron advises clients on a broad range of business, policy and consumer-facing issues, including:

  • Artificial Intelligence and the Internet of Things;
  • Broadband deployment and regulation;
  • IP-enabled applications, services and content;
  • Section 230 and digital safety considerations;
  • Equipment and device authorization procedures;
  • The Communications Assistance for Law Enforcement Act (CALEA);
  • Customer Proprietary Network Information (CPNI) requirements;
  • The Cable Privacy Act;
  • Net Neutrality; and
  • Local competition, universal service, and intercarrier compensation.

Yaron also has extensive experience in structuring transactions and securing regulatory approvals at both the federal and state levels for mergers, asset acquisitions and similar transactions involving large and small FCC and state communication licensees.

With respect to privacy and consumer protection matters, Yaron advises clients on a range of business, strategic, policy and compliance issues, including those that pertain to:

  • The FTC Act and related agency guidance and regulations;
  • State privacy laws, such as the California Consumer Privacy Act (CCPA) and California Privacy Rights Act, the Colorado Privacy Act, the Connecticut Data Privacy Act, the Virginia Consumer Data Protection Act, and the Utah Consumer Privacy Act;
  • The Electronic Communications Privacy Act (ECPA);
  • Location-based services that use WiFi, beacons or similar technologies;
  • Digital advertising practices, including native advertising and endorsements and testimonials; and
  • The application of federal and state telemarketing, commercial fax, and other consumer protection laws, such as the Telephone Consumer Protection Act (TCPA), to voice, text, and video transmissions.

Yaron also has experience advising companies on congressional, FCC, FTC and state attorney general investigations into various consumer protection and communications matters, including those pertaining to social media influencers, digital disclosures, product discontinuance, and advertising claims.