In April 2021, the European Commission released its proposed Regulation Laying Down Harmonised Rules on Artificial Intelligence (the “Regulation”), which would establish rules on the development, placing on the market, and use of artificial intelligence systems (“AI systems”) across the EU. The proposal, comprising 85 articles and nine annexes, is part of a wider package of Commission initiatives aimed at positioning the EU as a world leader in trustworthy and ethical AI and technological innovation.

The Commission’s objectives with the Regulation are twofold: to promote the development of AI technologies and harness their potential benefits, while also protecting individuals against potential threats to their health, safety, and fundamental rights posed by AI systems. To that end, the Commission proposal focuses primarily on AI systems identified as “high-risk,” but also prohibits certain AI practices outright and imposes transparency obligations on providers and users of certain non-high-risk AI systems. Notably, the proposal would impose significant administrative costs on providers of high-risk AI systems, estimated at around 10 percent of a system’s underlying value and reflecting compliance, oversight, and verification costs. This blog highlights several key aspects of the proposal.
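To put that estimate in concrete terms: a high-risk AI system with an underlying value of, say, €100,000 would carry administrative costs in the region of €10,000 (the figure is hypothetical and purely illustrative).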

Definition of AI systems (Article 3)

The Regulation defines AI systems as software using one or more “techniques and approaches” and which “generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.” These techniques and approaches, set out in Annex I of the Regulation, include machine learning approaches; logic- and knowledge-based approaches; and “statistical approaches, Bayesian estimation, [and] search and optimisation methods.” Given the breadth of these terms, a wide range of technologies could fall within the scope of the Regulation’s definition of AI.

Territorial scope (Article 2)

The Regulation would apply not only to AI systems placed on the market, put into service, or used in the EU, but also to systems, wherever marketed or used, “where the output produced by the system is used in the Union.” The latter requirement could raise compliance challenges for providers of AI systems, who may not always know, or be able to control, where their customers will use the outputs generated by their systems.

Prohibited AI practices (Article 5)

The Regulation prohibits certain AI practices that are deemed to pose an unacceptable level of risk and to contravene EU values. These practices include the provision or use of AI systems that either deploy subliminal techniques (beyond a person’s consciousness) to materially distort a person’s behaviour, or exploit the vulnerabilities of specific groups (such as children or persons with disabilities), in both cases where physical or psychological harm is likely to occur. The Regulation also prohibits public authorities from using AI systems for “social scoring,” where this leads to detrimental or unfavourable treatment in social contexts unrelated to those in which the data was generated, or is otherwise unjustified or disproportionate. Finally, the Regulation bans law enforcement from using “real-time” remote biometric identification systems in publicly accessible spaces, subject to certain limited exceptions (such as searching for crime victims, preventing a threat to life or safety, or enforcing criminal law in relation to significant offences).

Classification of high-risk AI systems (Article 6)

The Regulation classifies certain AI systems as inherently high-risk. These systems, enumerated exhaustively in Annexes II and III of the Regulation, include AI systems that are, or are safety components of, products already subject to EU harmonised safety regimes (e.g., machinery, toys, elevators, and medical devices); products covered by other EU legislation (e.g., motor vehicles, civil aviation, and marine equipment); and AI systems used in certain specific contexts or for specific purposes (e.g., for biometric identification, or for educational or vocational training).

Obligations on providers of high-risk AI (Title III, Chapters 2-3)

The Regulation imposes a range of obligations on providers of high-risk AI systems. In particular, providers must design their high-risk AI systems to enable record-keeping; allow for human oversight; and achieve an appropriate level of accuracy, robustness and cybersecurity. Data used to train, validate, or test such systems must meet quality criteria and be subject to specified data governance practices. Providers must prepare detailed technical documentation, provide specific information to users, and adopt comprehensive risk management and quality management systems.

Prior to placing a high-risk AI system on the EU market or putting it into service, providers must subject their systems to the applicable conformity assessment procedure, either self-assessment or third-party assessment. To demonstrate compliance, providers must draw up an EU declaration of conformity and affix the CE marking of conformity. Providers of certain high-risk AI systems must register their systems in a database maintained by the Commission and accessible to the public.

After the AI system has been placed on the market or put into service, providers must engage in post-market monitoring. This requires providers to take corrective action when their system is in breach of the Regulation (e.g., by bringing it into conformity, withdrawing it, or recalling it); inform the relevant national authority where the high-risk AI system presents a risk; report serious incidents or malfunctioning of systems to that authority; and cooperate with authorities to demonstrate conformity. Providers must also retain certain documentation for 10 years after the AI system is placed on the market or put into service.

Obligations on third parties (Articles 24, 26-29)

Third parties other than the provider are also subject to obligations. Importers and distributors must ensure that the high-risk AI system has been subject to the relevant conformity assessment procedure and bears the proper conformity marking before placing it on the market. Users must follow the instructions that accompany high-risk AI systems and must monitor the operation of the system based on those instructions. If the system presents certain risks or malfunctions, users must inform the provider or distributor and suspend use. The Regulation also identifies circumstances where third parties may be subject to the obligations of the provider, including where they attach their own name or trademark to the high-risk AI system, or where they modify the intended purpose or make a substantial modification to the system itself.

Requirements on non-high-risk AI systems (Article 52)

The Regulation also imposes transparency obligations on certain non-high-risk AI systems. Specifically, providers of AI systems intended to interact with natural persons must develop them in such a way that people know they are interacting with an AI system. Similarly, users of “emotion recognition” and “biometric categorisation” systems must inform the people exposed to them, and users of AI systems that generate or manipulate images, audio, or video content must disclose that the content has been artificially generated or manipulated.

Delegated acts (Articles 73-74)

The Regulation empowers the Commission to adopt delegated acts (a streamlined procedure that allows the Commission to update non-essential elements of the Regulation without a full legislative process) in order to keep abreast of emerging technologies and changing market demands. For instance, the Commission can update the definition of AI systems and designate new systems as high-risk, provided they are intended to be used in any of the areas listed in Annex III and pose an equivalent or greater level of risk of harm than those already listed.

Measures in support of innovation (Title V, Articles 53-55)

The Regulation proposes the creation of “AI regulatory sandboxes,” which are controlled environments intended to encourage developers to test new technologies for a limited period, with a view to complying with the Regulation. Among other things, these regulatory sandboxes would allow personal data lawfully collected for a separate purpose to be used to develop or test innovative AI systems. The Regulation would also require Member States to adopt measures benefiting small-scale providers and start-ups, such as priority access to the regulatory sandboxes and support in complying with the Regulation and other EU rules.

Enforcement and penalties (Articles 56-59, 61-68, 70-72)

The Regulation establishes a detailed regulatory oversight and enforcement regime. Broadly speaking, national supervisory authorities designated by individual Member States would oversee and enforce the Regulation, with support from a new European Artificial Intelligence Board. National authorities would carry out market surveillance activities and report to the Commission. If a national authority believes that an AI system presents a risk to health, safety, or fundamental rights, it could take corrective action even if the system otherwise complies with the Regulation.

Violations of the Regulation attract potentially significant administrative fines of up to 2%, 4%, or 6% of total worldwide annual turnover (or fixed amounts of up to €10 million, €20 million, or €30 million, whichever is higher), depending on the violation. Various factors will be taken into account in assessing fines, including the nature, gravity, and duration of the infringement and its consequences; whether a fine has already been imposed by another authority; and the size and market share of the infringing entity.
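To illustrate with hypothetical figures: a company with total worldwide annual turnover of €1 billion that engaged in a prohibited AI practice could face a fine of up to €60 million (6% of turnover), as that amount exceeds the €30 million fixed ceiling applicable to the most serious infringements.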

Next steps in the legislative process

With its broad scope, detailed rules on prohibited practices, obligations in relation to high-risk systems, and comprehensive oversight and enforcement regime, the proposed Regulation has potentially far-reaching impacts across a wide range of sectors. The Regulation will now be reviewed by the Council of the EU and the European Parliament, both of which may propose amendments. Once adopted, the Regulation will be directly applicable across all EU Member States. It will enter into force 20 days after its publication in the Official Journal of the European Union, and its obligations on providers of AI systems will apply two years after the Regulation’s entry into force.

*  *  *  *  *

Covington regularly advises the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about the Commission’s proposed AI Regulation, or other tech regulatory matters, please feel free to reach out to any of the following:

Dan Cooper

Marty Hansen

Lisa Peets

Mark Young

Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues, including related to artificial intelligence. Martin has extensive experience in advising clients on matters arising under EU and U.S. law, UK law, the World Trade Organization agreements, and other trade agreements.

Vicky Ling

Vicky Ling is an associate in Covington’s competition team. She advises on all aspects of EU and UK competition law, including merger control, abuse of dominance, antitrust litigation, regulatory investigations and enforcement.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice encompasses regulatory compliance and investigations alongside legislative advocacy. For more than two decades, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, artificial intelligence, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services.

Lisa also supports Covington’s disputes team in litigation involving technology providers.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”