On February 13, 2024, the European Parliament’s Committee on Internal Market and Consumer Protection and its Committee on Civil Liberties, Justice and Home Affairs (the “Parliament Committees”) voted overwhelmingly to adopt the EU’s proposed AI Act. This follows a vote to approve the text earlier this month by the Council of Ministers’ Permanent Representatives Committee (“Coreper”). These votes bring the Act closer to finalization; the last step in the legislative process is a vote by the full European Parliament, currently scheduled to take place in April 2024.
The compromise text approved by Coreper and the Parliament Committees includes a number of significant changes as compared to earlier drafts. In this blog post, we set out some key takeaways.
- General-purpose AI models: After much debate, it appears that the final Act will regulate general-purpose AI (“GPAI”) models. Among other requirements, providers of GPAI models must create and maintain technical documentation of the model that includes certain minimum elements, provide detailed information and documentation to providers that integrate these models into their AI systems, adopt a policy to respect EU copyright law, and make publicly available a “sufficiently detailed” summary of the content used to train the GPAI model.
- General-purpose AI models with systemic risk: The Act imposes heightened obligations on providers of GPAI models “with systemic risk”. These include requirements to perform model evaluations (including adversarial testing of the model), to assess and mitigate possible EU-level systemic risks, and to ensure adequate cybersecurity protection.
- Exception to the qualification of high-risk AI systems: Consistent with prior versions of the text, the Act’s most sweeping obligations apply to “high-risk” AI systems. The Act identifies two types of high-risk AI systems: (1) AI systems intended to be used as products (or safety components of products) covered by specific EU legislation listed in Annex II of the Act, and (2) AI systems used for the purposes listed in Annex III of the Act, such as certain uses of remote biometric identification systems and certain AI systems used for law enforcement. The compromise text, however, also includes an exception to this classification: if an AI system falling within the scope of Annex III does “not pose a significant risk of harm to the health, safety or fundamental rights of natural persons”, a provider can document this and, on that basis, exempt the system from the obligations the Act imposes on high-risk AI systems. Market surveillance authorities are empowered to evaluate systems that they have reason to consider misclassified and to order corrective measures. Providers will also be subject to fines if a market surveillance authority determines that the provider misclassified its AI system in order to circumvent the obligations on high-risk AI systems.
- Fundamental rights impact assessment for banks, insurers, and governments: Deployers that are bodies governed by public law, private operators providing public services, and (with some exceptions) operators deploying high-risk AI systems to evaluate a natural person’s creditworthiness, establish a natural person’s credit score, or assess risk and set prices in relation to a natural person’s life or health insurance must perform a fundamental rights impact assessment before deploying a high-risk AI system listed in Annex III. The assessment must cover the following elements (a hypothetical documentation structure is sketched after this list):
- the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;
- a description of the period of time within which, and the frequency with which, the high-risk AI system is intended to be used;
- the categories of natural persons and groups likely to be affected by its use in the specific context;
- the specific risks of harm likely to impact the categories of persons or groups of persons identified as likely to be affected, taking into account the information provided by the provider pursuant to its transparency obligations under Article 13;
- a description of the implementation of human oversight measures, in accordance with the instructions for use; and
- the measures to be taken if these risks materialize, including the deployer’s arrangements for internal governance and complaint mechanisms.
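The Act prescribes the substance of this assessment but not its form. Purely as an illustration of how a deployer’s internal compliance tooling might capture these six elements, here is a minimal sketch; the class and field names are our own assumptions, not anything the Act mandates.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the six assessment elements listed above.
# The AI Act prescribes what must be assessed, not any format or tooling;
# every name in this sketch is illustrative only.
@dataclass
class FundamentalRightsImpactAssessment:
    deployer_processes: str             # processes in which the system is used, per its intended purpose
    usage_period_and_frequency: str     # period of time and frequency of intended use
    affected_groups: list[str]          # categories of natural persons/groups likely to be affected
    specific_risks_of_harm: list[str]   # risks to those groups, drawing on Article 13 information
    human_oversight_measures: str       # oversight measures per the instructions for use
    risk_materialization_measures: str  # measures if risks materialize, incl. governance and complaints
```

A template like this simply mirrors the statutory list; it carries no legal weight of its own.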
- Transparency obligations: The AI Act imposes transparency obligations on providers and deployers of certain AI systems and GPAI models, including (1) providers of AI and GPAI systems generating synthetic audio, image, video, or text content, (2) deployers of emotion recognition or biometric categorization systems, (3) deployers of AI systems that generate or manipulate image, audio, or video content constituting a deepfake, and (4) deployers of systems that generate or manipulate text published with the purpose of informing the public on matters of public interest. (The Act imposes additional transparency obligations on deployers of certain Annex III high-risk AI systems.) In some cases, content will have to be labeled in a machine-readable format so that it can be identified as artificially generated or manipulated. The AI Act provides exceptions in some circumstances, including when the AI system is used for artistic, satirical, creative, or similar purposes.
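The Act does not specify a labeling standard for machine-readable marking. Purely as a sketch of what such a label could look like, the snippet below writes a hypothetical JSON “sidecar” file alongside a generated media file; the function, field names, and sidecar approach are our assumptions, not anything mandated by the Act.

```python
import json
from datetime import datetime, timezone

def write_provenance_sidecar(media_path: str, generator: str) -> str:
    """Write a hypothetical machine-readable label for AI-generated media.

    Illustrative only: the AI Act requires machine-readable marking of
    synthetic content but does not mandate this (or any) particular format.
    """
    label = {
        "content": media_path,
        "ai_generated": True,      # the core disclosure
        "generator": generator,    # e.g., the model or system used
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    sidecar_path = media_path + ".provenance.json"
    with open(sidecar_path, "w", encoding="utf-8") as f:
        json.dump(label, f, indent=2)
    return sidecar_path

# Example: write_provenance_sidecar("output/portrait.png", "example-image-model")
```

In practice, providers may instead rely on emerging provenance standards or watermarking techniques; the point is only that the disclosure must be readable by machines, not just by people.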
- Entry into force: The AI Act will enter into force 20 days after its publication in the EU Official Journal and generally will start applying to organizations two years after its entry into force, with certain exceptions: prohibitions on certain AI practices will apply after 6 months; rules on GPAI models will apply after 12 months (providers of GPAI models placed on the market before that date will have an additional 24 months to comply); and rules applicable to Annex II high-risk AI systems will apply after 36 months.
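To make this staggered timeline concrete, the sketch below computes the application dates from an assumed Official Journal publication date. The publication date is an assumption (it had not been fixed at the time of writing), and add_months is a naive helper adequate for this illustration.

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Naive month arithmetic, adequate for this illustration."""
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

# Assumed publication date in the EU Official Journal -- illustrative only;
# the actual date was not fixed when this post was written.
publication = date(2024, 7, 1)
entry_into_force = publication + timedelta(days=20)

for label, months in [
    ("Prohibitions on certain AI practices", 6),
    ("GPAI model rules", 12),
    ("General application", 24),
    ("Annex II high-risk AI system rules", 36),
]:
    print(f"{label}: apply from {add_months(entry_into_force, months)}")
```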
The Covington team continues to monitor developments on the AI Act, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about the AI Act or other tech regulatory matters, we are happy to assist.