The EU’s AI Act Proposal is continuing to make its way through the ordinary legislative procedure. In December 2022, the Council published its sixth and final compromise text (see our previous blog post), and over the last few months, the European Parliament has been negotiating its own amendments to the AI Act Proposal. The European Parliament is expected to finalize its position in the coming weeks, before entering trilogue negotiations with the Commission and the Council, which could begin as early as April 2023. The AI Act is expected to be adopted before the end of 2023, during the Spanish presidency of the Council, and ahead of the European elections in 2024.
During negotiations between the Council and the European Parliament, we can expect further changes to the Commission’s AI Act proposal as the institutions attempt to iron out their differences and agree on a final version of the Act. Below, we outline the key amendments proposed by the European Parliament in the course of its internal negotiations.
Definition of an AI system
The European Parliament has sought to amend the definition of an AI system, aligning with the OECD’s definition of AI systems. As it currently stands, the European Parliament’s text defines an AI system as:
“a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments.” (AI Act Proposal, Compromise Proposal, Draft dated March 3, 2023).
According to reports in the trade press, the European Parliament plans to narrow the definition by removing the reference to “machine-based” and by emphasizing the autonomy of AI systems.
Scope of the AI Act
The European Parliament has changed the scope of the AI Act proposal, clarifying that the Act applies to providers and users of AI systems, regardless of where they are located, as long as the output produced by the system is intended to be used in the EU. This differs from the Council’s approach, under which the AI Act applies where the output produced by the AI system is used in the EU, irrespective of the provider’s intention.
Further, in contrast with the proposals of the Commission and the Council, the European Parliament proposes to prohibit providers located within the EU from placing prohibited AI systems on the market or putting them into service outside the EU.
The European Parliament has excluded from the scope of the AI Act “research, testing and development activities regarding an AI system prior to this system being placed on the market or put into service.” In addition, the European Parliament introduced a clause whereby the AI Act will not apply to “open source” AI systems until those systems “are put into service or made available on the market in return for payment”. (AI Act Proposal, Compromise Amendments, Draft dated February 15, 2023).
Classification of high-risk AI systems
Under the Parliament’s proposed amendments, AI systems listed in Annex III will be considered high-risk only if they pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. The European Parliament proposes to expand the list in Annex III, classifying the following AI systems (among others) as high-risk:
- Biometrics AI. “AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics based data, including emotion recognition systems”;
- Insurance AI. “AI systems intended to be used for making decisions or assisting in making decisions on the eligibility of natural persons for health and life insurance”;
- AI Used by Children. “AI systems intended to be used by children in a way that may seriously affect a child’s personal development”;
- Generative AI. “AI systems intended to be used to generate, on the basis of limited human input, complex text content that would falsely appear to a person to be human generated and authentic, such as news articles, opinion articles, novels, scripts, and scientific articles, with the exceptions of AI systems used exclusively for content that undergoes human review and for the publication of which a natural or legal person is liable or holds editorial responsibility”;
- Deep Fake AI. “AI systems intended to be used to generate or manipulate audio or video content that features existing natural persons appearing to say or do something they have never said or done in a manner that would falsely appear to be authentic, with the exception of AI systems used exclusively for content that forms part of an evidently artistic, creative or fictional cinematographic and analogous work”; and
- Subliminal AI. “AI systems that deploy subliminal techniques for scientific research and for therapeutical purposes”. (AI Act Proposal, Compromise Amendments, Draft dated February 15, 2023).
- General Purpose AI. The European Parliament has suggested that general purpose AI systems that are integrated into a high-risk AI system, as well as general purpose AI systems provided as standalone systems, should be subject to some of the requirements for high-risk AI systems. Previously, general purpose AI systems were regulated only if they were integrated into high-risk AI systems.
Obligations of users of high-risk AI systems
The European Parliament amended the obligations for users of high-risk AI systems. Among other things, the European Parliament proposed the following amendments:
- requiring users, prior to putting a high-risk AI system into service or use in an employment setting, to “consult workers representatives, inform the affected employees that they will be subject to the system, and obtain their consent”;
- requiring users of high-risk AI systems that make decisions, or assist in making decisions, related to natural persons to “inform the natural persons that they are subject to the use of the high-risk AI system”;
- requiring users of high-risk AI systems to conduct an assessment of the system’s impact on fundamental rights in “the specific context of use prior to putting the AI system into use.” (AI Act Proposal, Compromise Amendments, Draft dated January 16, 2023).
Prohibited AI systems
The European Parliament has expanded the scope of prohibited AI systems, to include:
- AI systems capable of “assess[ing] the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of a natural person or on assessing personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of natural persons”; and
- AI systems “that create or expand facial recognition databases through the untargeted scraping of facial images from social media profiles or CCTV footage.” (AI Act Proposal, Compromise Amendments, Draft dated February 15, 2023).
General Principles
The European Parliament also added an article with general principles applicable to all AI systems (i.e., not only high-risk AI systems). This is similar to the approach taken by the Brazilian legislature, which has enshrined a list of principles, based on the OECD’s AI Principles, applicable to all AI systems in the Brazilian draft AI law.
*****
The Covington team continues to monitor developments on the AI Act, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about the AI Act, or other tech regulatory matters, we are happy to assist with any queries.