EMEA Tech Regulation

Last month, the European Commission published a draft Implementing Regulation (“IR”) under the EU’s revised Network and Information Systems Directive (“NIS2”). The draft IR applies to entities in the digital infrastructure, ICT service management, and digital provider sectors (e.g., cloud computing providers, online marketplaces, and online social networks). It sets out further detail on (i) the specific cybersecurity risk-management measures those entities must implement; and (ii) when an incident affecting those entities is considered “significant”. Once finalized, the IR will apply from October 18, 2024.

Many companies may be taken aback by the granularity of some of the listed technical measures and of the criteria for determining whether an incident is significant and reportable, especially coming so close to the October deadline for Member States to begin applying their national transpositions of NIS2.

The IR is open for feedback via the Commission’s Have Your Say portal until July 25.
Continue Reading NIS2: Commission Publishes Long-Awaited Draft Implementing Regulation On Technical And Methodological Requirements And Significant Incidents

Earlier this week, Members of the European Parliament (MEPs) cast their votes in favor of the much-anticipated AI Act. With 523 votes in favor, 46 against, and 49 abstentions, the vote marks the culmination of an effort that began in April 2021, when the European Commission first published its proposal for the Act.

Here’s what lies ahead:
Continue Reading EU Parliament Adopts AI Act

From February 17, 2024, the Digital Services Act (“DSA”) will apply to providers of intermediary services (e.g., cloud services, file-sharing services, search engines, social networks, and online marketplaces). These entities will be required to comply with a number of obligations, including implementing notice-and-action mechanisms, complying with detailed rules on terms and conditions, and publishing transparency reports on content moderation practices. For more information on the DSA, see our previous blog posts here and here.

Under the DSA, the European Commission is empowered to adopt delegated and implementing acts on certain aspects of the DSA’s implementation and enforcement. In 2023, the Commission adopted one delegated act on supervisory fees to be paid by very large online platforms and very large online search engines (“VLOPs” and “VLOSEs”, respectively), and one implementing act on procedural matters relating to the Commission’s enforcement powers. The Commission has proposed several other delegated and implementing acts, which we set out below. The consultation period for these draft acts has now passed, and we anticipate that they will be adopted in the coming months.
Continue Reading Draft Delegated and Implementing Acts Pursuant to the Digital Services Act

On August 22, 2023, the Spanish Council of Ministers approved the Statute of the Spanish Agency for the Supervision of Artificial Intelligence (“AESIA”), thus creating the first AI regulatory body in the EU. AESIA will start operating in December 2023, in anticipation of the upcoming EU AI Act (for a summary of the AI Act, see our EMEA Tech Regulation Toolkit). In line with its National Artificial Intelligence Strategy, Spain has been playing an active role in the development of AI initiatives, including a pilot for the EU’s first AI Regulatory Sandbox and guidelines on AI transparency.
Continue Reading Spain Creates AI Regulator to Enforce the AI Act

On June 27, 2023, the European Parliament and the Council of the EU reached a political agreement on the Data Act (see our previous blog post here), after 18 months of negotiations following the Commission’s proposal, tabled in February 2022 (see our previous blog post here). EU lawmakers bridged their differences on a number of topics, including governance matters, territorial scope, protection of trade secrets, and certain defined terms.

The Data Act is a key component of the European strategy for data. Its objective is to remove barriers to the use and re-use of non-personal data, particularly as it relates to data generated by connected products and related services, including virtual assistants. It also seeks to facilitate the ability of customers to switch between providers of data processing services.

We’ve outlined below some key aspects of the new legislation.
Continue Reading European Parliament and Council Release Agreed Text on Data Act

Late yesterday, the EU institutions reached political agreement on the European Data Act (see the European Commission’s press release here and the Council’s press release here). The proposal for a Data Act was first tabled by the European Commission in February 2022 as a key piece of the European Strategy for Data (see our previous blog post here). The Data Act will sit alongside the EU’s General Data Protection Regulation (“GDPR”), Data Governance Act, Digital Services Act, and the Digital Markets Act.
Continue Reading Political Agreement Reached on the European Data Act

On 11 May 2023, members of the European Parliament’s internal market (IMCO) and civil liberties (LIBE) committees agreed their final text on the EU’s proposed AI Act. After MEPs formalize their position through a plenary vote (expected this summer), the AI Act will enter the last stage of the legislative process: “trilogue” negotiations between the European Parliament, the European Commission, and the Council, which adopted its own amendments in late 2022 (see our blog post here for further details). European lawmakers hope to adopt the final AI Act before the end of 2023, ahead of the European Parliament elections in 2024.

In perhaps the most significant change from the Commission and Council drafts, under MEPs’ proposals, providers of foundation models – a term defined as an AI model that is “trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks” (Article 3(1c)) – would be subject to a series of obligations. For example, providers would be under a duty to “demonstrate through appropriate design, testing and analysis that the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior and throughout development” (Article 28b(2)(a)), as well as to draw up “extensive technical documentation and intelligible instructions for use” to help those that build AI systems using the foundation model (Article 28b(2)(e)).
Continue Reading EU Parliament’s AI Act Proposals Introduce New Obligations for Foundation Models and Generative AI

On 29 March 2023, the UK Information Commissioner’s Office (“ICO”) published updated Guidance on AI and data protection (the “Guidance”) following “requests from UK industry to clarify requirements for fairness in AI”. AI has been a strategic priority for the ICO for several years. In 2020, the ICO published its first set of guidance on AI (as discussed in our blog post here), which it complemented in 2022 with supplementary recommendations on Explaining Decisions Made with AI and an AI and Data Protection risk toolkit. The updated Guidance forms part of the UK’s wider efforts to adopt a “pro-innovation” approach to AI regulation, which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s approach to AI regulation, see our blog post here).

The updated Guidance covers the ICO’s view of best practice for data protection-compliant AI, as well as how the ICO interprets data protection law in the context of AI systems that process personal data. The Guidance has been restructured in line with the UK GDPR’s data protection principles, and features new content, including guidance on fairness, transparency, lawfulness and accountability when using AI systems.
Continue Reading UK ICO Updates Guidance on Artificial Intelligence and Data Protection

On 29 March 2023, the UK Government published a White Paper entitled “A pro-innovation approach to AI regulation” (the “White Paper”). The White Paper elaborates on the approach to AI set out by the Government in its 2022 AI Governance and Regulation Policy Statement (the “Policy Statement”, covered in our blog post here). The announcement follows the Government’s commitments, in the Spring Budget 2023, to build an expert taskforce to develop the UK’s capabilities in AI foundation models and to produce guidance on the relationship between intellectual property law and generative AI (for more details of these initiatives, see here).

In its White Paper, the UK Government confirms that, unlike the EU, it does not plan to adopt new legislation to regulate AI, nor will it create a new regulator for AI (for further details on the EU’s proposed AI regulation see our blog posts here and here). Instead, the UK would require existing regulators, including the UK Information Commissioner’s Office (“ICO”), to take responsibility for the establishment, promotion, and oversight of responsible AI in their respective sectors. Regulators’ activities would be reinforced by the establishment of new support and oversight functions within central Government. This approach is already beginning to play out in certain regulated areas in the UK. For example, in October 2022, the Bank of England and Financial Conduct Authority (“FCA”) jointly released a Discussion Paper on Artificial Intelligence and Machine Learning considering how AI in financial services should be regulated and, in March 2023, the ICO updated its Guidance on AI and Data Protection.
Continue Reading UK Government Adopts a “Pro-Innovation” Approach to AI Regulation

2023 is set to be an important year for developments in AI regulation and policy in the EU. At the end of last year, on December 6, 2022, the Council of the EU (the “Council”) adopted its general approach and compromise text on the proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “AI Act”), bringing the AI Act one step closer to adoption. The European Parliament is currently developing its own position on the AI Act, which is expected to be finalized by March 2023. Following this, the Council, the Parliament, and the European Commission (the “Commission”) will enter into trilogue discussions to finalize the Act. Once adopted, the AI Act will be directly applicable across all EU Member States, and its obligations are likely to apply three years after its entry into force (according to the Council’s compromise text).

In 2022, the Commission also put forward new liability rules for AI systems via the proposed AI Liability Directive (“AILD”) and updates to the Product Liability Directive (“PLD”). The AILD establishes rules for non-contractual, fault-based civil claims involving AI systems. Specifically, the proposal sets out rules that would govern the preservation and disclosure of evidence in cases involving high-risk AI, as well as rules on the burden of proof and corresponding rebuttable presumptions. Meanwhile, the revised PLD harmonizes rules that apply to no-fault liability claims brought by persons who suffer physical injury or damage to property caused by defective products. Software, including AI systems, is explicitly named as a “product” under the proposal, meaning that an injured person can claim compensation for damage caused by AI (see our previous blog post for further details on the proposed AILD and PLD). Both pieces of legislation will be reviewed, and potentially amended, by the Council and the European Parliament in 2023.
Continue Reading EU AI Policy and Regulation: What to look out for in 2023