Artificial Intelligence (AI)

On 8 October 2025, the European Commission published its Apply AI Strategy (the “Strategy”), a comprehensive policy framework aimed at accelerating the adoption and integration of artificial intelligence (“AI”) across strategic industrial sectors and the public sector in the EU.

The Strategy is structured around three pillars: (1) introducing sectoral flagships to boost AI use in key industrial sectors; (2) addressing cross-cutting challenges; and (3) establishing a single governance mechanism to provide sectoral stakeholders a way to participate in AI policymaking.

The Apply AI Strategy is accompanied by the AI in Science Strategy, and it will be complemented by the Data Union Strategy (which is anticipated later this year).

Continue Reading European Commission Publishes Apply AI Strategy to Accelerate Sectoral AI Adoption Across the EU

On September 23, 2025, the Italian law on artificial intelligence (hereinafter, “Italian AI Law”) was signed into law, after receiving final approval by the Italian Senate on September 17, 2025. 

The law consists of varied provisions, including general principles and targeted sectoral rules in certain areas not covered by the EU AI Act.  The Italian AI Law will enter into force on October 10, 2025.  We provide below an overview of key aspects of the final text of the Italian AI Law.  For full detail, please see our previous blogpost here.

Continue Reading Italy Adopts Artificial Intelligence Law

The California Civil Rights Council and the California Privacy Protection Agency have recently passed regulations that impose requirements on employers who use “automated-decision systems” or “automated decisionmaking technology,” respectively, in employment decisions or certain HR processes. On the legislative side, the California Legislature passed SB 7, which would impose

Continue Reading Navigating California’s New and Emerging AI Employment Regulations

On September 16, 2025, the European Commission launched a call for evidence to collect feedback and best practices on simplifying several key areas of the EU digital rulebook, ahead of its planned Digital Omnibus package. This initiative targets legislation related to data, cybersecurity, and artificial intelligence, aiming to reduce administrative burdens and compliance costs for businesses while preserving high standards of fairness, security, and privacy online.

Continue Reading Commission Collects Feedback to Simplify Rules on Data, Cybersecurity and Artificial Intelligence in Upcoming Digital Omnibus

On July 30, 2025, the Italian Data Protection Authority (“Garante”) released a statement addressing the risks of using AI to interpret medical data.  In this statement, the Garante recognizes the growing trend of individuals uploading medical analyses, X-rays, and other reports onto generative artificial intelligence platforms to obtain interpretations and diagnoses.  It warns users of these AI services to carefully evaluate the implications of sharing health-related data with AI providers and relying on automatically generated responses.

Continue Reading Italian Garante Adopts Statement on Health Data and AI

On July 23, the White House released its AI Action Plan, outlining the key priorities of the Trump Administration’s AI policy agenda.  In parallel, President Trump signed three AI executive orders directing the Executive Branch to implement the AI Action Plan’s policies on “Preventing Woke AI in

Continue Reading Trump Administration Issues AI Action Plan and Series of AI Executive Orders

On June 26, 2025, the European Parliament’s Committee on Employment and Social Affairs published a draft report (“Draft Report”) recommending that the Commission initiate the legislative process for an EU Directive on algorithmic management in the workplace.  The Draft Report defines algorithmic management as the use of automated systems, including those involving artificial intelligence, to monitor, assess, or make decisions affecting workers and solo self-employed persons.

This Draft Report follows a Commission study published in March 2025 (“Commission Study”), which found that while existing EU legislation, such as the GDPR, addresses some risks to workers from algorithmic management, others remain.  The Commission Study also recognizes that the AI Act does not establish specific rights for workers in the context of AI use, which is noted as a concern.

The Draft Report encloses the proposed text for a new Directive on algorithmic management in the workplace (“Proposed Directive”).  The Draft Report has not yet been endorsed by the European Parliament.

Continue Reading European Parliament Committee Recommends Commission to Propose EU Directive on Algorithmic Management

There is an ongoing debate in Brussels about the circumstances under which AI-based safety components integrated into radio equipment are subject to the requirements for high-risk AI systems of the EU Artificial Intelligence Act 2024/1689 (the “AI Act”). The debate is particularly relevant because, if AI-based safety components are considered high-risk under the AI Act, they will be subject to a comprehensive set of regulatory requirements under the AI Act as of August 2, 2027. These requirements include risk management, data quality measures, transparency towards users, human oversight, as well as obligations relating to accuracy, robustness, and cybersecurity.

The discussion affects devices like smartphones with AI-driven emergency call features, smart home safety systems, smart home appliances, and drones using AI for obstacle avoidance and emergency landing. In effect, many, if not all, of the AI-based safety components of internet-connected radio equipment could be subject to the AI Act’s requirements for high-risk AI systems.

Below we briefly outline the framework of the current debate.

Continue Reading When is a Safety Component of Radio Equipment a High-Risk AI System Under the EU Artificial Intelligence Act?

On June 5, 2025, the UK’s Information Commissioner’s Office (“ICO”) launched its new AI and biometrics strategy. The strategy aims to increase the ICO’s scrutiny of AI and biometric technologies, focusing on three priority situations: where the stakes are high; where there is clear public concern about the technology; and where regulatory clarity can provide immediate impact.

The ICO identified three areas of focus in its strategy:

  1. Transparency and explainability, i.e., when and how the technologies affect people;
  2. Bias and discrimination, particularly where the technologies have been trained on “flawed, incomplete or unrepresentative information”; and
  3. Rights and redress, i.e., making sure that systems are accurate, appropriate safeguards are in place to protect people’s rights, and that there are ways to challenge and correct outcomes that result in harm.

Continue Reading The ICO’s AI and biometrics strategy

Federal legislation to “pause” state artificial intelligence regulations will not become law—for now—after the Senate stripped the measure from the budget reconciliation package, the One Big Beautiful Bill Act (H.R. 1).

The Senate voted 99–1 to strike the moratorium language from the bill during a marathon 27-hour “vote-a-rama” on July 1. The Senate then voted 51–50, with Vice President J.D. Vance breaking the tie, to pass the bill (without the moratorium) and send it back to the House.  The House passed the Senate-amended bill on July 3 by a vote of 218–214, with all Democrats and two Republicans voting against.  President Trump signed the bill into law on July 4.

Continue Reading Senate Nixes State AI Enforcement Moratorium, For Now