On February 13, 2024, the European Parliament’s Committee on Internal Market and Consumer Protection and its Committee on Civil Liberties, Justice and Home Affairs (the “Parliament Committees”) voted overwhelmingly to adopt the EU’s proposed AI Act. This follows the approval of the text earlier this month by the Council of Ministers’ Permanent Representatives Committee (“Coreper”). The Act is now one step closer to finalization: the last step in the legislative process is a vote by the full European Parliament, currently scheduled to take place in April 2024.

The compromise text approved by Coreper and the Parliament Committees includes a number of significant changes as compared to earlier drafts. In this blog post, we set out some key takeaways.

  • General-purpose AI models: After much debate, it appears that the final Act will regulate general-purpose AI (“GPAI”) models. Among other requirements, providers of GPAI models must create and maintain technical documentation of the model that includes certain minimum elements, provide detailed information and documentation to providers that integrate these models into their AI systems, adopt a policy to respect EU copyright law, and make publicly available a “sufficiently detailed” summary of the content used for training the GPAI model.
  • General-purpose AI models with systemic risk: The Act imposes heightened obligations on providers of GPAI models “with systemic risks”. These include requirements to perform model evaluations (including adversarial testing of the model), to assess and mitigate possible EU-level systemic risks, and to ensure adequate cybersecurity protection.
  • Exception to the qualification of high-risk AI systems: Consistent with prior versions of the text, the Act’s most sweeping obligations apply to “high-risk” AI systems. The Act identifies two types of AI systems that are high-risk: (1) AI systems intended to be used as products (or safety components of products) that are covered by specific EU legislation listed in Annex II of the Act, and (2) AI systems used for the purposes listed in Annex III of the Act, such as certain uses of remote biometric identification systems and certain AI systems used for law enforcement. The compromise text, however, also includes an exception to this classification: if an AI system falling within the scope of Annex III does “not pose a significant risk of harm to the health, safety or fundamental rights of natural persons”, a provider can document this and on that basis exclude the system from the Act’s obligations on high-risk AI systems. Market surveillance authorities are empowered to evaluate systems that they have reason to consider have been misclassified and to order corrective measures. Providers will also be subject to fines should a market surveillance authority determine that the provider misclassified its AI system in order to circumvent the application of those obligations.
  • Fundamental rights impact assessment for banks, insurers, and governments: Deployers that are bodies governed by public law, private operators providing public services, and (with some exceptions) operators deploying high-risk AI systems to evaluate a natural person’s creditworthiness, establish a natural person’s credit score, or assess risk and prices in relation to a natural person’s life or health insurance must perform a fundamental rights impact assessment prior to deploying a high-risk AI system listed in Annex III. The assessment must include:
    • a description of the deployer’s processes in which the high-risk AI system will be used, in line with its intended purpose;
    • a description of the period of time and frequency in which the high-risk AI system is intended to be used;
    • the categories of natural persons and groups likely to be affected by its use in the specific context;
    • the specific risks of harm likely to impact the categories of persons or groups of persons identified as likely to be affected, taking into account the information provided by the provider pursuant to its transparency obligations under Article 13;
    • a description of the implementation of human oversight measures, in accordance with the instructions for use; and
    • the measures to be taken if these risks materialize, including the arrangements for internal governance and complaint mechanisms.
  • Transparency obligations: The AI Act imposes transparency obligations on providers and deployers of certain AI systems and GPAI models, including (1) providers of AI and GPAI systems generating synthetic audio, image, video, or text content, (2) deployers of emotion recognition or biometric categorization systems, (3) deployers of AI systems that generate or manipulate image, audio, or video content constituting a deepfake, and (4) deployers of systems that generate or manipulate text published with the purpose of informing the public on matters of public interest (the Act imposes additional transparency obligations on deployers of certain Annex III high-risk AI systems). In some cases, content will have to be labelled in a machine-readable way so that it can be identified as artificially generated or manipulated; a minimal illustrative sketch of such labelling appears after this list. The AI Act provides exceptions in some circumstances, including when the AI system is used for artistic, satirical, creative, or similar purposes.
  • Entry into force: The AI Act will enter into force 20 days after its publication in the EU Official Journal and generally will start applying to organizations two years after its entry into force, with certain exceptions: prohibitions on certain AI practices will apply after 6 months, rules on GPAI models will apply after 12 months (except for GPAI models placed on the market before that date, which will have an additional 24 months to comply), and rules applicable to Annex II high-risk AI systems will apply after 36 months. The staggered timeline is illustrated in the date arithmetic sketch below.
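
The Act does not prescribe a particular labelling standard for AI-generated content. As a minimal sketch of what machine-readable labelling could look like, the Python example below embeds a provenance tag in a PNG’s metadata using the Pillow library. The field names ai_generated and generator are hypothetical placeholders, not terms drawn from the Act or from any established standard; production systems would more likely rely on a scheme such as C2PA content credentials.

```python
# Minimal sketch: embed a machine-readable "AI-generated" tag in PNG metadata
# using Pillow. The field names below are hypothetical placeholders -- the AI
# Act does not prescribe a labelling format.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Re-save the image at src_path to dst_path with provenance metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # hypothetical field name
    metadata.add_text("generator", generator)  # e.g., the model that produced the image
    image.save(dst_path, pnginfo=metadata)

def read_labels(path: str) -> dict:
    """Return the PNG text chunks, including any provenance tags."""
    return dict(Image.open(path).text)

# Usage:
#   label_as_ai_generated("output.png", "output_labelled.png", "example-model-v1")
#   read_labels("output_labelled.png")
#   # -> {'ai_generated': 'true', 'generator': 'example-model-v1'}
```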
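
By way of illustration only, the sketch below computes the staggered application dates from a hypothetical Official Journal publication date; the actual dates will depend on when the Act is published.

```python
# Illustrative date arithmetic for the AI Act's staggered timeline. The
# publication date below is a hypothetical placeholder.
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months, clamping the day."""
    index = d.month - 1 + months
    year, month = d.year + index // 12, index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

publication = date(2024, 7, 1)  # hypothetical publication date
entry_into_force = publication + timedelta(days=20)

milestones = {
    "Prohibitions on certain AI practices": add_months(entry_into_force, 6),
    "Rules on GPAI models": add_months(entry_into_force, 12),
    "General application": add_months(entry_into_force, 24),
    "Pre-existing GPAI models (12 + 24 months)": add_months(entry_into_force, 36),
    "Annex II high-risk AI systems": add_months(entry_into_force, 36),
}
for rule, applies_from in sorted(milestones.items(), key=lambda kv: kv[1]):
    print(f"{rule}: applies from {applies_from.isoformat()}")
```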

The Covington team continues to monitor developments on the AI Act, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about the AI Act or other tech regulatory matters, we would be happy to assist.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice encompasses regulatory compliance and investigations alongside legislative advocacy. For more than two decades, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, artificial intelligence, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services.

Lisa also supports Covington’s disputes team in litigation involving technology providers.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”

Marianna Drake

Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.

Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues, including related to artificial intelligence. Martin has extensive experience in advising clients on matters arising under EU and U.S. law, UK law, the World Trade Organization agreements, and other trade agreements.