On 18 March 2026, the European Parliament’s Committee on the Internal Market and Consumer Protection (“IMCO”) and the Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) adopted their joint negotiating position on the European Commission’s proposed Digital Omnibus on AI (which we previously analysed here). The position will now proceed to a plenary vote, expected on 26 March 2026. The Council of the EU had previously adopted its negotiating position on 13 March 2026. This sets up trilogue negotiations between the Parliament, Council, and Commission.

Ahead of those negotiations, some key areas in play include:

1. Extended Timelines for High-Risk AI Systems

The Parliament and Council propose fixed deadlines for the delayed application of high-risk AI system obligations. Requirements for Annex III high-risk systems (e.g., biometrics, education, employment) would apply from 2 December 2027, and requirements for Annex I systems (AI embedded in regulated products) from 2 August 2028. This reflects a shared concern that the Commission’s original proposal would create uncertainty as to when the rules would take effect: the Commission’s draft would have allowed it to bring the application dates forward from a backstop date if it determined that sufficient harmonized EU standards or other support tools were in place. The Parliament and Council instead propose to fix the backstop dates as the application dates themselves, removing that flexibility for the Commission.

2. New Prohibition on Non-Consensual AI-Generated Intimate Imagery

Both the Parliament and Council introduce a new prohibited practice under Article 5 of the EU AI Act (“AI Act”), targeting AI systems capable of generating realistic sexually explicit images or videos of identifiable individuals without their consent (and, in the Council’s position, child pornography). Both approaches seek to limit the applicability of this prohibition to systems that lack adequate technical and other safeguards to reliably prevent such generation.

3. Shortened Transparency Grace Period

The Parliament’s compromise shortens the transitional period for providers to comply with Article 50(2) marking obligations (for AI systems placed on the market before 2 August 2026). The Commission had proposed a six-month grace period until 2 February 2027, but the Parliament’s text reduces this to approximately three months, until 2 November 2026.

4. Rollback of Proposed Simplification Measures: Art. 6(4) Registration, Sensitive Data, and AI Literacy

Several of the Commission’s proposed simplifications have been struck out or narrowed by both co-legislators, in line with concerns raised by the EDPB and EDPS in their Joint Opinion 1/2026 (which we described here).

Article 6(4) registration requirements. The Commission had proposed removing the obligation for providers to register AI systems in the EU database where they self-assess those systems as non-high-risk under Article 6(3). Both the Council and Parliament have rejected this proposal, instead reinstating the registration requirement while agreeing to streamline the content requirements for the registration entry (simplifying Section B of Annex VIII). This alignment suggests the final text is very likely to preserve some form of registration requirement, reflecting the EDPB/EDPS’s concern that removing registration would reduce regulatory visibility over potentially high-risk AI systems.

Processing of special categories of personal data. The Commission had proposed lowering the threshold for processing special categories of personal data for bias detection from “strictly necessary” to “necessary,” and extending this legal basis from high-risk AI systems to all AI systems and models. Both the Council and Parliament proposals reinstate the “strict necessity” standard, and only permit extension to other AI systems and models on an “exceptional” basis, subject to a requirement that the processing be “necessary and proportionate” and limited to cases where bias is likely to affect health and safety, fundamental rights, or lead to discrimination prohibited under Union law. They also clarify that this provision does not create any obligation to conduct bias detection and correction with special categories of personal data.

AI literacy. The Commission’s and Council’s proposals soften the AI literacy obligation under Article 4, shifting it from a binding requirement on providers and deployers to an encouragement framework led by the Commission and Member States. Parliament’s compromise reinstates the mandatory obligation on providers and deployers, but lowers the standard from ensuring “a sufficient level of AI literacy” of staff to “support[ing] the improvement of AI literacy” (emphasis added). Parliament’s compromise also adds a requirement for the Commission to issue practical implementation guidance and encourages public-private partnerships to support broader literacy efforts.

5. Other Notable Changes

Restructuring the Relationship Between the AI Act and Sectoral Product Legislation. The Parliament’s compromise includes one of the most structurally significant changes in the Omnibus: the deletion of Section A of Annex I, and the incorporation of the New Legislative Framework (“NLF”) legislation listed there into Section B instead. Under the existing AI Act, Section A legislation (including the Medical Devices Regulation, In Vitro Diagnostic Medical Devices Regulation, and the new Machinery Regulation) triggers a combined conformity assessment in which both AI Act and sectoral requirements are assessed together. Moving this legislation to Section B would make sectoral conformity assessment procedures the primary compliance pathway, with the AI Act’s requirements integrated into the relevant sectoral legislation and only a limited set of AI Act provisions applying directly in the interim. Notably, the Commission had also proposed various amendments intended to alleviate friction between the AI Act and the Section A frameworks—indicating that there is a shared recognition of potential areas of overlap and confusion.

AI Office Powers and Resourcing. Both the Parliament and Council positions reinforce the AI Office’s supervisory role over AI systems based on general-purpose AI (“GPAI”) models developed by the same provider, as well as AI systems integrated into very large online platforms or search engines under the Digital Services Act (“DSA”). In addition, both drafts would give the AI Office supervisory authority over AI systems based on GPAI models developed by providers belonging to the same undertaking. The Parliament’s compromise adds a provision requiring adequate human, financial, and technical resourcing of the AI Office. In various ways, both the Parliament and Council drafts also introduce limitations to the AI Office’s exclusive competence over GPAI models by providing new exceptions where national authorities remain competent.

What’s Next

Once Parliament adopts the draft in plenary—in a vote expected on 26 March 2026—trilogue negotiations between the Council, Parliament, and Commission are expected to begin in April. The Cypriot Presidency of the Council has made the Digital Omnibus on AI a priority among its digital files, with the aim of reaching an agreed text in May 2026.

This pace is driven by intense pressure to finalize amendments before the AI Act’s general application date of 2 August 2026. If the Omnibus on AI is not adopted by then, the original high-risk obligations timeline would apply as currently written—likely before the necessary supporting standards and guidance are in place. The broad alignment between the Council and Parliament—both proposing to roll back many of the Commission’s simplifications—suggests that this ambitious timetable is achievable. For businesses, the Omnibus on AI looks likely to bring near-term benefit, providing more time and a clearer fixed date for key obligations to apply. In the longer term, however, its impact on the substance of those obligations may be more limited than the Commission’s original, more ambitious simplification proposals would have delivered.

Jadzia Pierce

Jadzia Pierce advises clients developing and deploying technology on a range of regulatory matters, including the intersection of AI governance and data protection. Jadzia draws on her experience in senior in-house leadership roles and extensive, hands-on engagement with regulators worldwide. Prior to rejoining Covington in 2026, Jadzia served as Global Data Protection Officer at Microsoft, where she oversaw and advised on the company’s GDPR/UK GDPR program and acted as a primary point of contact for supervisory authorities on matters including AI, children’s data, advertising, and data subject rights.

Jadzia previously was Director of Microsoft’s Global Privacy Policy function and served as Associate General Counsel for Cybersecurity at McKinsey & Company. She began her career at Covington, advising Fortune 100 companies on privacy, cybersecurity, incident preparedness and response, investigations, and data-driven transactions.

At Covington, Jadzia helps clients operationalize defensible, scalable approaches to AI-enabled products and services, aligning privacy and security obligations with rapidly evolving regulatory frameworks across jurisdictions—with a particular focus on anticipating enforcement trends and navigating inter-regulator dynamics.

Dan Cooper

Daniel Cooper is co-chair of Covington’s Data Privacy and Cyber Security Practice, and advises clients on information technology regulatory and policy issues, particularly data protection, consumer protection, AI, and data security matters. He has over 20 years of experience in the field, representing clients in regulatory proceedings before privacy authorities in Europe and counseling them on their global compliance and government affairs strategies. Dan regularly lectures on the topic, and was instrumental in drafting the privacy standards applied in professional sport.

According to Chambers UK, his “level of expertise is second to none, but it’s also equally paired with a keen understanding of our business and direction.” It was noted that “he is very good at calibrating and helping to gauge risk.”

Dan is qualified to practice law in the United States, the United Kingdom, Ireland, and Belgium. He has also been appointed to the advisory and expert boards of privacy NGOs and agencies, such as the IAPP’s European Advisory Board, Privacy International, and the EU’s cybersecurity agency, ENISA.

Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues, including issues related to artificial intelligence. Martin has extensive experience in advising clients on matters arising under EU, U.S., and UK law, the World Trade Organization agreements, and other trade agreements.

Atli Stannard

Atli Stannard advises clients on EU trade law and policy, technology regulation, and the governance of strategically significant industrial sectors, with a particular focus on the geoeconomic forces shaping European regulation, industrial policy, and the transatlantic relationship. Clients describe him as providing “exceptional levels of insight.”

Atli guides clients in highly regulated industries through complex EU policymaking processes, protecting and advancing their core business and regulatory priorities. He is a member of the firm’s Public Policy, International Trade, Sustainability, and Business & Human Rights practices.

Atli’s trade practice covers the full suite of EU trade instruments, including the EU Anti‑Coercion Instrument, trade defence investigations, customs classification and market‑access issues, investment-related tools (FDI and the foreign subsidies regulation), and environmental-related trade tools such as CBAM. He frequently advises on regulatory issues at the intersection of trade and technology—covering platform, data, AI, and competition policy—where digital and geoeconomic considerations converge.

His work also encompasses the EU frameworks governing medical technologies and other strategically important industrial sectors—such as automotive, and food and beverage—and includes supporting clients on environmental and EU ESG policymaking. Across these domains, he helps clients identify regulatory risks early, anticipate institutional dynamics, and build clear, actionable strategies—working closely with them to engage effectively with the European Commission, European Parliament, Council of the EU, and Member State and UK governments.