There is an ongoing debate in Brussels about the circumstances under which AI-based safety components integrated into radio equipment are subject to the requirements for high-risk AI systems of the EU Artificial Intelligence Act 2024/1689 (the “AI Act”). The debate is particularly relevant because, if AI-based safety components are considered high-risk under the AI Act, they will be subject to a comprehensive set of regulatory requirements under the AI Act as of August 2, 2027. These requirements include risk management, data quality measures, transparency towards users, human oversight, as well as obligations relating to accuracy, robustness, and cybersecurity.

The discussion affects devices such as smartphones with AI-driven emergency call features, smart home safety systems and appliances, and drones using AI for obstacle avoidance and emergency landing. In effect, many, if not all, of the AI-based safety components of internet-connected radio equipment could be subject to the AI Act’s requirements for high-risk AI systems.

Below we briefly outline the framework of the current debate.

The AI Act classifies a safety component as a high-risk AI system when the following three conditions are cumulatively met:

  1. the safety component is an AI system;
  2. the safety component is integrated (or intended to be integrated) into a product covered by EU harmonization legislation listed in Annex I to the AI Act; and
  3. that EU harmonization legislation requires the product to undergo a third-party conformity assessment.

For completeness, an AI-based safety component is also classified as high-risk under the AI Act if it is used in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity, regardless of whether it meets these conditions (Annex III(2) to the AI Act). This classification does not apply, however, where the component does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. This blog post does not discuss that scenario. It covers only safety components that are not used for these purposes, but that may be high-risk AI systems because they fulfil the three cumulative conditions outlined above.

1. What is an AI-based “safety component”?

The AI Act defines a “safety component” as “a component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property” (emphasis added).

A safety component is classified as an “AI system” if it is a “machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” See our blog post on the European Commission’s guidelines on the definition of an AI system.

2. What is an AI-based safety component that is integrated (or intended to be integrated) into radio equipment?

As indicated above, one of the conditions for an AI-based safety component to be a high-risk AI system is that it be integrated (or intended to be integrated) into a product covered by EU harmonization legislation listed in Annex I to the AI Act. That list includes Directive 2014/53/EU (“Radio Equipment Directive” or “RED”).

The RED applies to “radio equipment,” which it defines as an electrical or electronic product that intentionally emits and/or receives radio waves for the purpose of radio communication or radiodetermination. This also includes products that require accessories, such as antennas, to perform these functions. The concept of radio equipment includes devices such as mobile phones, laptops, radars, broadcasting devices, fitness trackers, smartwatches, routers, and smart appliances—essentially any conventional goods that can send and/or receive radio signals.

In line with the definition outlined in Section 1 above, AI-based safety components of radio equipment are those intended to ensure the safety of the equipment and of those using it—in other words, those that fulfil a “safety function.” In practical terms, this could arguably mean that the component must be intended to ensure compliance with specific “safety” requirements that form part of the “essential requirements” listed in Article 3 of the RED. These would include components intended to:

  • protect the health and safety of persons and of domestic animals, and protect property, including the objectives with respect to safety requirements set out in the EU Low Voltage Directive 2014/35/EU, but with no voltage limit applying; (Article 3(1)(a))
    • For example, an AI-powered thermal sensor system in a smart radio device that detects overheating or electrical faults and automatically shuts down the device to prevent fire or electric shock, even if the device operates outside traditional voltage limits.
  • prevent harm to the network or its functioning, or the misuse of network resources, which would cause an unacceptable degradation of service; (Article 3(3)(d))
    • For example, an AI-driven firewall integrated into a smartphone that continuously monitors network traffic in real time to detect and block abnormal patterns–such as denial-of-service attacks or unauthorized access attempts–and automatically adjusts device behaviour to prevent service degradation and protect network integrity.
  • incorporate safeguards to ensure that the personal data and privacy of the user and of the subscriber are protected; (Article 3(3)(e))
    • For example, an AI-based anomaly detection system in a radio device that monitors data flows to identify and prevent unauthorized data transmissions or privacy breaches, such as detecting malware attempting to exfiltrate user data or preventing unintentional recording/transmission of sensitive information.
  • support certain features ensuring protection from fraud; (Article 3(3)(f)) and
    • For example, an AI-driven authentication system integrated into a wireless router analyses user behaviour and device usage patterns to detect fraudulent activities–such as SIM swapping, connection to fake base stations, or unauthorized device cloning–and automatically blocks suspicious transactions or access attempts to protect users and network security.
  • support certain features ensuring access to emergency services. (Article 3(3)(g))
    • For example, an AI-enhanced emergency call system integrated into a smartphone prioritizes emergency communications, automatically recognizes distress signals, and reroutes calls to the nearest emergency centre during network congestion or failure, ensuring reliable access to emergency services even under adverse conditions.

3. When is radio equipment required to undergo a third-party conformity assessment under the RED?

The RED defines “conformity assessment” as “the process demonstrating whether the essential requirements of [the RED] […] have been fulfilled.” The third party performing this conformity assessment is in effect a “Notified Body”–a conformity assessment body that meets the RED’s requirements and has been officially designated and notified by Member States to the European Commission and other Member States.

With respect to the safety essential requirements outlined above, radio equipment must undergo a third-party conformity assessment only for the essential requirements listed in Article 3(3)(d) to (g) of the RED. This also means that AI-based safety components of radio equipment intended to protect the health and safety of persons and of domestic animals and to protect property (Article 3(1)(a) of the RED) will not be high-risk AI systems.

However, the RED states that a third-party conformity assessment is not required if the manufacturer applies in full an applicable harmonized standard published by the European Commission in the EU Official Journal. In contrast, where an applicable harmonized standard has not been published in the Official Journal or the manufacturer does not follow it in full, the conformity assessment of the radio equipment must be made by a third party (Notified Body).

The current debate centers on two conflicting interpretations regarding third-party conformity assessment under the RED. The question is whether the AI Act’s condition that the radio equipment be subject to a third-party conformity assessment is met: (i) whenever the RED provides for such an assessment, even though manufacturers can avoid it by applying published harmonized standards in full; or (ii) only where there are no applicable published harmonized standards, or the manufacturer chooses not to apply them in full.

The first approach would result in a very broad range of AI-based safety components of radio equipment being classified as high-risk AI systems. All AI-based safety components integrated (or intended to be integrated) into radio equipment subject to the safety-related essential requirements listed in Article 3(3)(d) to (g) of the RED would risk being classified as high-risk AI systems under the AI Act. More specifically, all or most AI-based safety components of “internet-connected radio equipment,” subject to the Commission’s Delegated Regulation 2022/30 of 29 October 2021, intended to address cybersecurity or related risks would be considered high-risk AI systems. In practice, this could mean that, for example, an AI-driven intrusion detection system embedded in a smart home router, or an AI-based authentication mechanism in a connected walkie-talkie or wireless communication device, would both fall under the high-risk classification. The rationale behind this broader, first interpretation may stem from concerns that existing harmonized standards do not fully address all relevant risks arising from AI.

In contrast, the second interpretation would classify as high-risk only those AI-based safety components of radio equipment subject to essential requirements for which there are no published harmonized standards, or which the manufacturer chooses not to apply in full. Its scope would be narrower but, arguably, much more subjective.

The outcome of this ongoing discussion will be highly significant for manufacturers of AI-based safety components and radio equipment, as it will determine whether they must comply with the stringent requirements for high-risk AI systems under the AI Act, including obligations related to risk management, data governance, and post-market monitoring.


Anna Sophia Oberschelp de Meneses is special counsel in the Data Privacy and Cybersecurity Practice Group.

Anna is a qualified Portuguese lawyer, but is both a native Portuguese and German speaker.

Anna advises companies on European data protection law and helps clients coordinate international data protection law projects.

She has obtained a certificate for “corporate data protection officer” by the German Association for Data Protection and Data Security (“Gesellschaft für Datenschutz und Datensicherheit e.V.”). She is also Certified Information Privacy Professional Europe (CIPPE/EU) by the International Association of Privacy Professionals (IAPP).

Anna also advises companies in the field of EU consumer law and has been closely tracking the developments in this area.

Her extensive language skills allow her to monitor developments and help clients tackle EU Data Privacy, Cybersecurity and Consumer Law issues in various EU and ROW jurisdictions.

Lasse Luecke advises clients on EU regulatory and policy matters with a focus on environmental, technology, and product safety legislation. He has particular expertise in radio equipment legislation, including radiofrequency (RF) spectrum use and availability, data center regulation, and ESG reporting frameworks, where he supports companies in meeting complex and rapidly evolving compliance obligations. Lasse also helps clients anticipate legislative developments, shape regulatory strategy, and engage constructively with EU institutions and policymakers.

Cándido García Molyneux provides clients with regulatory, policy and strategic advice on EU environmental and product safety legislation. He helps clients influence EU legislation and guidance and comply with requirements in an efficient manner, representing them before the EU Courts and institutions.

Cándido co-chairs the firm’s Environmental Practice Group.

Cándido has a deep knowledge of EU requirements on chemicals, circular economy and waste management, climate change, energy efficiency, renewable energies as well as their interrelationship with specific product categories and industries, such as electronics, cosmetics, healthcare products, and more general consumer products. He has worked on energy consumption and energy efficiency requirements of AI models under the EU AI Act.

In addition, Cándido has particular expertise on EU institutional and trade law, and the import of food products into the EU. Cándido also regularly advises clients on Spanish food and drug law.

Cándido is described by Chambers Europe as being “creative and frighteningly smart.” His clients note that “he has a very measured, considered, deliberative manner,” and that “he has superb analytical and writing skills.”

Daniel Cooper is co-chair of Covington’s Data Privacy and Cyber Security Practice, and advises clients on information technology regulatory and policy issues, particularly data protection, consumer protection, AI, and data security matters. He has over 20 years of experience in the field, representing clients in regulatory proceedings before privacy authorities in Europe and counseling them on their global compliance and government affairs strategies. Dan regularly lectures on the topic, and was instrumental in drafting the privacy standards applied in professional sport.

According to Chambers UK, his “level of expertise is second to none, but it’s also equally paired with a keen understanding of our business and direction.” It was noted that “he is very good at calibrating and helping to gauge risk.”

Dan is qualified to practice law in the United States, the United Kingdom, Ireland and Belgium. He has also been appointed to the advisory and expert boards of privacy NGOs and agencies, such as the IAPP’s European Advisory Board, Privacy International and the European security agency, ENISA.