On January 20, 2026, the European Data Protection Board (“EDPB”) and the European Data Protection Supervisor (“EDPS”) (together, the “Authorities”) adopted Joint Opinion 1/2026 on the European Commission’s proposal to amend the EU AI Act (hereafter the “Proposal”, summarized in our previous blog). Overall, the Authorities acknowledge the complexity of the AI Act and agree that targeted simplifications can support legal certainty and efficient administration. However, they warn that simplification should not result in lowering the protection of fundamental rights, including data protection rights. This blog outlines some of the Authorities’ main recommendations as expressed in their Joint Opinion.

Clear boundaries for the processing of sensitive data for bias mitigation

The Proposal envisages expanding the legal basis under the AI Act to allow the processing of special categories of personal data for bias detection and correction in AI models and systems. The Authorities accept that some bias mitigation may require sensitive personal data, but insist such processing should meet a “strict necessity” threshold (not just a “necessary” threshold as proposed). They further recommend clearly circumscribing the use of this legal basis, and limiting such processing activity to cases where it would be justified in light of the risk of adverse effects arising from the processing of such data.

Maintaining registration and training obligations for certain AI providers and deployers

Although the Authorities generally support easing administrative burdens for companies subject to the AI Act, they object to the Proposal’s suggestion to remove registration requirements for AI systems that providers have determined fall outside the high‑risk classification (as per Article 6(3) AI Act). The Authorities flag the risks of differing interpretations and incorrect assessments, and express concerns that such a measure would reduce the visibility of competent authorities and bodies over potentially high-risk AI systems, thereby undermining effective supervision and redress.

Similarly, the Proposal would replace the AI literacy obligation on providers and deployers with a duty on the Commission and Member States to “encourage” providers and deployers to ensure AI literacy among staff and other persons dealing with the operation and use of AI systems on their behalf. The Authorities consider that this change would undermine the effectiveness of the requirement.

Reinforcing institutional coordination for EU‑level AI regulatory sandboxes

Although broadly supportive of EU-level AI regulatory sandboxes maintained by the Commission, the Authorities recommend clarifying that competent national data protection authorities should be involved in the operation of such EU-level sandboxes, along with the EDPB, which, according to the Authorities, should also be granted observer status on the European Artificial Intelligence Board. The Authorities also call for a clear distinction to be made between AI sandboxes for EU bodies set up by the EDPS, and EU-level AI sandboxes established by the Commission’s AI Office.

Clarifying rules on supervision and enforcement

With regard to supervision and enforcement mechanisms under the AI Act, the Authorities underline the need for strong cooperation among the various authorities and bodies that may be involved, including the AI Office, national market surveillance authorities, and national data protection authorities. They further highlight the need to clarify the competence and powers of these various stakeholders. Some of the requested clarifications concern, for instance, (i) the types of general-purpose AI systems that would trigger the AI Office’s exclusive competence, or (ii) the role of market surveillance authorities as a mere administrative point of contact.

Warning against postponing high‑risk obligations

Finally, the Authorities express unease regarding the proposed postponement of certain high‑risk AI system obligations. They point out that such delays would result in more high-risk AI systems remaining out of scope of the Act’s high-risk requirements (per Article 111(2) of the AI Act), as they would have been put on the market prior to entry into force of these provisions and thus exempted. While acknowledging implementation pressures, they invite legislators to consider:

  • maintaining the original timelines for obligations with direct rights‑protective effects, such as transparency;
  • limiting any delays in timelines to what is strictly necessary; and
  • avoiding prolonged legal uncertainty that could undermine both compliance planning and public trust.

*                            *                                  *

The Covington team regularly advises the world’s top companies on their most challenging technology regulatory, compliance, and public policy issues in the EU and other major markets. Please reach out to a member of the team if you need any assistance.

Dan Cooper

Daniel Cooper is co-chair of Covington’s Data Privacy and Cyber Security Practice, and advises clients on information technology regulatory and policy issues, particularly data protection, consumer protection, AI, and data security matters. He has over 20 years of experience in the field, representing clients in regulatory proceedings before privacy authorities in Europe and counseling them on their global compliance and government affairs strategies. Dan regularly lectures on the topic, and was instrumental in drafting the privacy standards applied in professional sport.

According to Chambers UK, his “level of expertise is second to none, but it’s also equally paired with a keen understanding of our business and direction.” It was noted that “he is very good at calibrating and helping to gauge risk.”

Dan is qualified to practice law in the United States, the United Kingdom, Ireland and Belgium. He has also been appointed to the advisory and expert boards of privacy NGOs and agencies, such as the IAPP’s European Advisory Board, Privacy International and the European security agency, ENISA.

Alix Bertrand

Alix advises clients on EU data protection and technology law, with a particular focus on French privacy and data protection requirements. She regularly assists clients in relation to international data transfers, direct marketing rules as well as IT and data protection contracts. Alix is a member of the Paris and Brussels Bars.