On December 9, 2023, the European Parliament, the Council of the European Union and the European Commission reached a political agreement on the EU Artificial Intelligence Act (“AI Act”) (see here for the Parliament’s press statement, here for the Council’s statement, and here for the Commission’s statement). Following three days of intense negotiations during the fifth “trilogue” between the EU institutions, negotiators reached agreement on key topics, including: (i) the scope of the AI Act; (ii) the AI systems classified as “high-risk” under the Act; and (iii) law enforcement exemptions.

As described in our previous blog posts on the AI Act (see here, here, and here), the Act will establish a comprehensive and horizontal law governing the development, import, deployment and use of AI systems in the EU. In this blog post, we provide a high-level summary of the main points EU legislators appear to have agreed upon, based on the press releases linked above and a further Q&A published by the Commission. However, the text of the political agreement is not yet publicly available. Further, although a political agreement has been reached, a number of details remain to be finalized in follow-up technical working meetings over the coming weeks.

  • Prohibited AI practices. Each EU institution’s press release confirms that the AI Act prohibits certain AI practices. According to the Parliament’s press release, banned AI practices include (among others): (i) untargeted scraping of facial images from the internet or CCTV to create facial recognition databases; (ii) biometric categorization of natural persons that uses sensitive characteristics; (iii) emotion recognition in the workplace and educational institutions; and (iv) social scoring based on social behaviour or personal characteristics.
  • High-risk AI systems. Each EU institution’s press release confirms that AI systems classified as “high-risk” will have to comply with additional obligations. The Commission’s Q&A highlights the following obligations (among others): (i) conformity assessments; (ii) quality and risk management systems; and (iii) fundamental rights impact assessments that will apply to “deployers that are bodies governed by public law or private operators providing public services, and operators providing high-risk systems”.
  • Right to lodge complaints. According to the Parliament’s press release, the AI Act will grant citizens the “right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.”
  • General-purpose AI systems. Each EU institution’s press release confirms their agreement on rules covering general-purpose AI models (“GPAI”). The Parliament’s press release indicates that GPAI “will have to adhere to transparency requirements” including “drawing up technical documentation, complying with EU copyright law, and disseminating detailed summaries about the content used for training”.
  • Oversight. The Commission’s Q&A states that the Commission will establish a new European AI Office and AI Board. According to the Q&A, the AI Office will “enforce and supervise the new rules for general purpose AI models. This includes drawing up codes of practice to detail out rules, its role in classifying models with systemic risks and monitoring the effective implementation and compliance” with the AI Act.  
  • Sanctions. The Commission’s press release states that non-compliance with the AI Act could lead to fines of up to (i) €35 million or 7% of global turnover, whichever is higher, for violations of banned AI practices; (ii) €15 million or 3% of global turnover for violations of other obligations; and (iii) €7.5 million or 1.5% of global turnover for supplying incorrect information required by the AI Act.
  • Entry into force. The Commission’s Q&A confirms that the AI Act will enter into force 20 days after its publication in the EU Official Journal and will become generally applicable two years after its entry into force, subject to certain exceptions: prohibitions on certain AI practices will apply after six months, and the rules on GPAI will apply after 12 months.

To bridge the transitional period before the AI Act becomes generally applicable, the Commission will launch an AI Pact, under which AI developers may voluntarily commit to complying with the Act’s key obligations before they become legally binding.

The Covington team has deep experience advising clients on European data-related and privacy regulations, including the AI Act, and is closely monitoring developments relating to the AI Act and Member State initiatives on AI. If you have any questions on how the AI Act and other upcoming EU legislation will affect your business, our team is happy to assist.

Dan Cooper

Daniel Cooper is co-chair of Covington’s Data Privacy and Cyber Security Practice, and advises clients on information technology regulatory and policy issues, particularly data protection, consumer protection, AI, and data security matters. He has over 20 years of experience in the field, representing clients in regulatory proceedings before privacy authorities in Europe and counseling them on their global compliance and government affairs strategies. Dan regularly lectures on the topic, and was instrumental in drafting the privacy standards applied in professional sport.

According to Chambers UK, his “level of expertise is second to none, but it’s also equally paired with a keen understanding of our business and direction.” It was noted that “he is very good at calibrating and helping to gauge risk.”

Dan is qualified to practice law in the United States, the United Kingdom, Ireland and Belgium. He has also been appointed to the advisory and expert boards of privacy NGOs and agencies, such as the IAPP’s European Advisory Board, Privacy International and the European security agency, ENISA.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice encompasses regulatory compliance and investigations alongside legislative advocacy. For more than two decades, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, artificial intelligence, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services.

Lisa also supports Covington’s disputes team in litigation involving technology providers.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”

Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues, including related to artificial intelligence. Martin has extensive experience in advising clients on matters arising under EU and U.S. law, UK law, the World Trade Organization agreements, and other trade agreements.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising on regulatory, compliance and policy issues affecting leading companies in the technology, life sciences and gaming sectors on laws relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Marianna Drake

Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.

Diane Valat

Diane Valat is a trainee who attended IE University.