The EU’s AI Act Proposal continues to make its way through the ordinary legislative procedure.  In December 2022, the Council published its sixth and final compromise text (see our previous blog post), and over the last few months, the European Parliament has been negotiating its own amendments to the AI Act Proposal.  The European Parliament is expected to finalize its position in the coming weeks, before entering into trilogue negotiations with the Commission and the Council, which could begin as early as April 2023.  The AI Act is expected to be adopted before the end of 2023, during the Spanish presidency of the Council, and ahead of the European elections in 2024.

During negotiations between the Council and the European Parliament, we can expect further changes to the Commission’s AI Act proposal, as the institutions attempt to iron out their differences and agree on a final version of the Act.  Below, we outline the key amendments proposed by the European Parliament in the course of finalizing its negotiating position.

Definition of an AI system

The European Parliament has sought to amend the definition of an AI system to align it with the OECD’s definition of AI systems.  As it currently stands, the European Parliament’s text defines an AI system as:

“a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments.” (AI Act Proposal, Compromise Proposal, Draft dated March 3, 2023).

According to reports in the trade press, the European Parliament plans to narrow the definition by removing the reference to “machine-based” and by emphasizing the autonomy of AI systems.

Scope of AI Act

The European Parliament has changed the scope of the AI Act proposal, clarifying that the Act applies to providers and users of AI systems, regardless of where they are located, as long as the output produced by the system is intended to be used in the EU.  This differs from the Council’s approach, under which the AI Act applies where the output produced by the AI system is used in the EU, regardless of the provider’s intention.

Further, in contrast with the proposals of the Commission and the Council, the European Parliament proposes to prohibit providers located within the EU from placing prohibited AI systems on the market, or putting them into service, outside of the EU.

The European Parliament has excluded from the scope of the AI Act “research, testing and development activities regarding an AI system prior to this system being placed on the market or put into service.”  In addition, the European Parliament introduced a clause whereby the AI Act will not apply to “open source” AI systems until those systems “are put into service or made available on the market in return for payment”.  (AI Act Proposal, Compromise Amendments, Draft dated February 15, 2023).

Classification of high-risk AI systems

Under the Parliament’s proposed amendments, AI systems listed in Annex III will be considered high-risk only if they pose a significant risk of harm to the health, safety, or fundamental rights of natural persons.  The European Parliament proposes to expand the list in Annex III, classifying the following AI systems (among others) as high-risk:

  • Biometrics AI. “AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics based data, including emotion recognition systems”;
  • Insurance AI. “AI systems intended to be used for making decisions or assisting in making decisions on the eligibility of natural persons for health and life insurance”;
  • AI used by Children. “AI systems intended to be used by children in a way that may seriously affect a child’s personal development”;
  • Generative AI. “AI systems intended to be used to generate, on the basis of limited human input, complex text content that would falsely appear to a person to be human generated and authentic, such as news articles, opinion articles, novels, scripts, and scientific articles, with the exceptions of AI systems used exclusively for content that undergoes human review and for the publication of which a natural or legal person is liable or holds editorial responsibility”;
  • Deep Fake AI. “AI systems intended to be used to generate or manipulate audio or video content that features existing natural persons appearing to say or do something they have never said or done in a manner that would falsely appear to be authentic, with the exception of AI systems used exclusively for content that forms part of an evidently artistic, creative or fictional cinematographic and analogous work”;
  • Subliminal AI. “AI systems that deploy subliminal techniques for scientific research and for therapeutical purposes” (AI Act Proposal, Compromise Amendments, Draft dated February 15, 2023); and
  • General Purpose AI. The European Parliament has suggested that general purpose AI systems that are integrated into a high-risk AI system, as well as general purpose AI systems provided as standalone systems, should be subject to some of the requirements for high-risk AI systems.  Previously, general purpose AI systems were regulated only if they were integrated into high-risk AI systems.

Obligations of users of high-risk AI systems

The European Parliament amended the obligations for users of high-risk AI systems.  Among other things, the European Parliament proposed the following amendments:

  • requiring users, prior to putting a high-risk AI system into service or use in an employment setting, to “consult workers representatives, inform the affected employees that they will be subject to the system, and obtain their consent”;
  • requiring users of high-risk AI systems that make decisions, or assist in making decisions, related to natural persons to “inform the natural persons that they are subject to the use of the high-risk AI system”; and
  • requiring users of high-risk AI systems to conduct a fundamental rights impact assessment in “the specific context of use prior to putting the AI system into use.”  (AI Act Proposal, Compromise Amendments, Draft dated January 16, 2023).

Prohibited AI systems

The European Parliament has expanded the list of prohibited AI systems to include:

  • AI systems capable of “assess[ing] the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of a natural person or on assessing personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of natural persons”; and
  • AI systems “that create or expand facial recognition databases through the untargeted scraping of facial images from social media profiles or CCTV footage.” (AI Act Proposal, Compromise Amendments, Draft dated February 15, 2023).

General Principles

The European Parliament also added an article setting out general principles applicable to all AI systems (i.e., not only high-risk AI systems).  This is similar to the approach taken by the Brazilian legislature, whose draft AI law enshrines a list of principles, based on the OECD’s AI Principles, that apply to all AI systems.

*****

The Covington team continues to monitor developments on the AI Act, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets.  If you have questions about the AI Act, or other tech regulatory matters, we are happy to assist with any queries.

Dan Cooper

Daniel Cooper is co-chair of Covington’s Data Privacy and Cyber Security Practice, and advises clients on information technology regulatory and policy issues, particularly data protection, consumer protection, AI, and data security matters. He has over 20 years of experience in the field, representing clients in regulatory proceedings before privacy authorities in Europe and counseling them on their global compliance and government affairs strategies. Dan regularly lectures on the topic, and was instrumental in drafting the privacy standards applied in professional sport.

According to Chambers UK, his “level of expertise is second to none, but it’s also equally paired with a keen understanding of our business and direction.” It was noted that “he is very good at calibrating and helping to gauge risk.”

Dan is qualified to practice law in the United States, the United Kingdom, Ireland and Belgium. He has also been appointed to the advisory and expert boards of privacy NGOs and agencies, such as the IAPP’s European Advisory Board, Privacy International and the EU cybersecurity agency, ENISA.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory compliance and investigations alongside legislative advocacy. In this context, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”

Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under EU and U.S. law, UK law, the World Trade Organization agreements, and other trade agreements.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising on regulatory, compliance and policy issues affecting leading companies in the technology, life sciences and gaming sectors on laws relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Marianna Drake

Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.

Anna Oberschelp de Meneses

Anna Sophia Oberschelp de Meneses is an associate in the Data Privacy and Cybersecurity Practice Group.

Anna is a qualified Portuguese lawyer and a native speaker of both Portuguese and German.

Anna advises companies on European data protection law and helps clients coordinate international data protection law projects.

She has obtained a certificate as a “corporate data protection officer” from the German Association for Data Protection and Data Security (“Gesellschaft für Datenschutz und Datensicherheit e.V.”). She is also a Certified Information Privacy Professional/Europe (CIPP/E) with the International Association of Privacy Professionals (IAPP).

Anna also advises companies in the field of EU consumer law and has been closely tracking the developments in this area.

Her extensive language skills allow her to monitor developments and help clients tackle EU data privacy, cybersecurity and consumer law issues in various EU and rest-of-world jurisdictions.