In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. Both sets of guidelines address the AI Act obligations that started to apply on February 2, 2025, which include the definitions section of the AI Act, obligations relating to AI literacy, and the prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on prohibited AI practices (“Guidelines”). Please see our blog on the guidelines on the definition of AI systems here, and our blog on the AI literacy requirements under the AI Act here.

The Guidelines are well over 100 pages long and provide detailed guidance on how to interpret and apply each of the eight prohibited AI practices listed in Article 5 of the AI Act. As a reminder, Article 5 sets out the AI practices that are prohibited outright under the Act. The Guidelines also address the relationship between prohibited practices and high-risk AI systems (which are regulated, rather than prohibited, under the Act), explaining that in some cases the use of a high-risk AI system may amount to a prohibited practice, while in other cases an AI system that falls under an exception to Article 5 may still qualify as high-risk under Article 6.

Key takeaways from the Guidelines include the following:

  • Personalised ads. Article 5(1)(a) prohibits the “placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons . . .”. The Guidelines indicate that the use of AI to personalise ads based on user preferences is “not inherently manipulative” (emphasis added) so long as it does “not deploy subliminal, purposefully manipulative or deceptive techniques that subvert individual autonomy or exploit vulnerabilities” (para. 133, example box) and suggest that compliance with the GDPR in this context should mitigate the risks of manipulation.
  • Lawful persuasion. Article 5(1)(b) prohibits “the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons . . .”, and, as under Article 5(1)(a), the practice must have the objective or effect of “materially distorting the behaviour of that person or a person belonging to that group”. However, to qualify as a prohibited AI practice, such distortion must go beyond “lawful persuasion” (para. 78). The Guidelines explain that an AI system is likely to engage in “lawful persuasion” where it operates transparently, facilitates free and informed consent, and complies with relevant legal and regulatory frameworks (paras. 129-131).
  • Vulnerability and addiction, scams, and predatory targeting. Article 5(1)(b) specifically prohibits the exploitation of a person’s or group’s vulnerability based on “age, disability or a specific socio-economic situation” (para. 101). As examples of such exploitation, the Guidelines cite AI systems that “[create] personalised and unpredictable rewards through addictive reinforcement schedules and dopamine-like loops to encourage excessive play and compulsive usage” (para. 105, example box); “target older people with deceptive personalised offers or scams” (para. 106, example box); and “target with advertisements for predatory financial products people who live in low-income post-codes and are in a dire financial situation” (para. 110, example box), among others.
  • Profiling and social scoring. Article 5(1)(c) prohibits “the placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to” certain detrimental outcomes. The Guidelines explain that profiling (as the term is used in the GDPR) is a “form of evaluation” (para. 154), and therefore certain kinds of profiling conducted through AI systems might fall within the scope of Article 5(1)(c). The Guidelines cite an insurance company’s use of individuals’ spending and financial information to determine eligibility for life insurance as an example of an “unacceptable social scoring” practice (para. 170, example box). That said, “legitimate scoring practices in line with Union and national law” are not prohibited by Article 5(1)(c); these include certain “[f]inancial credit scoring systems” and “AI-enabled targeted commercial advertising” subject to certain conditions (para. 177, example box).
  • Predictive policing and private actors. Article 5(1)(d), often referred to as the prohibition on AI systems that engage in “predictive policing” (para. 213), prohibits the “placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics”. The Guidelines explain that this prohibition applies to law enforcement, but it can also apply to private actors requested to act on behalf of law enforcement. As an example, the Guidelines highlight that a “private company providing advanced AI-based crime analytic software” might fall within scope of Article 5(1)(d) if requested to “analyse a large amount of data from multiple sources and databases, such as national registers, banking transactions, communication data, geo-spatial data, etc., to predict or assess the risk of individuals as potential offenders of human trafficking offences”, assuming all of the criteria set out in Article 5(1)(d) were met (para. 208, example box). That said, the Guidelines also state that where a private actor engages in profiling to protect its business operations and safety or financial interests “without the purpose of assessing or predicting the risk of the customer committing a specific criminal offence,” its activities would fall outside the scope of Article 5(1)(d) (para. 210).
  • Facial image scraping and AI model training. Article 5(1)(e) prohibits “the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage”. The Guidelines indicate that this prohibition does not extend to databases that “are not used for the recognition of persons” (para. 234). This means “facial image databases used for AI model training or testing purposes” are out of scope “where the persons are not identified” (id.).
  • Emotion recognition in the workplace. Article 5(1)(f) prohibits “the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions,” subject to certain exceptions. The Guidelines explain that this prohibition applies to the inferring of workers’ emotions, but not of customers’ emotions. The use of a voice recognition system in a call center to track customer emotions, for example, would not be within scope (para. 254, example box). The Guidelines note that physical states, such as pain and fatigue, are not considered emotions (Recital 18), so using an AI system “to infer a professional pilot’s or driver’s fatigue to alert them and suggest when to take breaks to avoid accidents” is not emotion recognition (para. 249, example box).

In addition to providing detailed guidance on each prohibited AI practice, the Guidelines briefly consider how to interpret the scope of the AI Act and its exceptions:

  • Defining “placing on the market”, “putting into service”, and “use”. With one exception, each of the Article 5 prohibitions covers “the placing on the market, the putting into service [or in some cases, the ‘putting into service for this specific purpose’] or the use” of the AI system for the relevant practice. (The exception is Article 5(1)(h), which prohibits only the “use” of certain “real-time” remote biometric identification systems.)
    • The Guidelines state that “placing on the market” means the making available of a system “regardless of the means of supply” (para. 12). This includes making the system available through an API, via the cloud, or by embedding it in other products, among other mechanisms.
    • Regarding “putting into service”, the Guidelines underscore that the definition encompasses supplying the system to third parties for their use and “in-house development and deployment” (para. 13).
    • The Guidelines also explain that the term “use” should be understood broadly and covers use or deployment “at any moment [in the AI system’s] lifecycle” after being placed on the market or put into service (para. 14). The Guidelines also state that for the purpose of Article 5, “use” should “include any misuse of an AI system (‘reasonably foreseeable’ or not) that may amount to a prohibited practice” (id. (citing Recital 28)).
  • Research and development (“R&D”) exclusions. Article 2(8) of the AI Act states that the Act does not apply to “any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market” (para. 30). This means, according to the Guidelines, that “AI developers have the freedom to experiment and test new functionalities which might involve techniques that could be seen as manipulative and covered by Article 5(1)(a) AI Act, if used in consumer-facing applications” (para. 30, example box). That said, the Guidelines explain that, once an AI system is placed on the market or put into service as a result of R&D, the Act will apply (see id.). Article 2(6) states that the Act does not apply to “AI systems or AI models, including their outputs, specifically developed and put into service for the sole purpose of scientific research and development.” The Guidelines explain that, for example, AI systems or AI models specifically developed and put into service for the sole purpose of “research[ing] into cognitive and behavioural responses to AI-driven subliminal or deceptive stimuli” are outside the scope of the AI Act (para. 31, example box).
  • Application to general-purpose AI systems. The Guidelines make clear that the Article 5 prohibitions apply to both general-purpose AI systems and AI systems with an intended purpose. They explain that “[w]hile the harm often arises from the way the AI systems are used in practice, providers also have a responsibility not to place on the market or put into service AI systems, including general-purpose AI systems, that are reasonably likely to behave or be directly used in a manner prohibited by Article 5 AI Act” (para. 40). Accordingly, providers of general-purpose AI systems are expected to build in safeguards to prevent misuse; prohibit deployers from using the general-purpose AI system for any of the prohibited practices; and provide deployers with instructions regarding human oversight.

The Covington team continues to monitor regulatory developments on AI, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about AI regulation or other tech regulatory matters, we would be happy to assist.

Madelaine Harrington

Madelaine Harrington is an associate in the technology and media group. Her practice covers a wide range of regulatory and policy matters at the cross-section of privacy, content moderation, artificial intelligence, and free expression. Madelaine has deep experience with regulatory investigations, and has counseled multi-national companies on complex cross-jurisdictional fact-gathering exercises and responses to alleged non-compliance. She routinely counsels clients on compliance within the EU regulatory framework, including the General Data Protection Regulation (GDPR), among other EU laws and legislative proposals.

Madelaine’s representative matters include:

  • coordinating responses to investigations into the handling of personal information under the GDPR,
  • counseling major technology companies on the use of artificial intelligence, specifically facial recognition technology in public spaces,
  • advising a major technology company on the legality of hacking defense tactics,
  • advising a content company on compliance obligations under the DSA, including rules regarding recommender systems.

Madelaine’s work has previously involved representing U.S.-based clients on a wide range of First Amendment issues, including defamation lawsuits, access to courts, and FOIA. She maintains an active pro-bono practice representing journalists with various news-gathering needs.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising leading companies in the technology, life sciences and gaming sectors on regulatory, compliance and policy issues relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice encompasses regulatory compliance and investigations alongside legislative advocacy. For more than two decades, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, artificial intelligence, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services.

Lisa also supports Covington’s disputes team in litigation involving technology providers.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”