Artificial Intelligence (AI)

On June 5, 2025, the UK’s Information Commissioner’s Office (“ICO”) launched its new AI and biometrics strategy. Under the strategy, the ICO will increase its scrutiny of AI and biometric technologies, focusing on three priority situations: where the stakes are high; where there is clear public concern about the technology; and where regulatory clarity can have an immediate impact.

The ICO identified three areas of focus in its strategy:

  1. Transparency and explainability, i.e., when and how the technologies affect people;
  2. Bias and discrimination, particularly where the technologies have been trained on “flawed, incomplete or unrepresentative information”; and
  3. Rights and redress, i.e., making sure that systems are accurate, appropriate safeguards are in place to protect people’s rights, and that there are ways to challenge and correct outcomes that result in harm.

Continue Reading The ICO’s AI and biometrics strategy

Federal legislation to “pause” state artificial intelligence regulations will not become law—for now—after the Senate stripped the measure from the budget reconciliation package, the One Big Beautiful Bill Act (H.R. 1).

The Senate voted 99–1 to strike the moratorium language from the bill during a marathon 27-hour “vote-a-rama” on July 1. The Senate then voted 51–50, with Vice President J.D. Vance breaking the tie, to pass the bill (without the moratorium) and send it back to the House.  The House passed the Senate-amended bill on July 3 by a vote of 218–214, with all Democrats and two Republicans voting against.  President Trump signed the bill into law on July 4.

Continue Reading Senate Nixes State AI Enforcement Moratorium, For Now

On June 22, Texas Governor Greg Abbott (R) signed the Texas Responsible AI Governance Act (“TRAIGA”) (HB 149) into law.  The law, which takes effect on January 1, 2026, makes Texas the second state to enact comprehensive AI consumer protection legislation, following the 2024 enactment of the Colorado AI Act.

Continue Reading Texas Enacts AI Consumer Protection Law

This year, state lawmakers have introduced over a dozen bills to regulate “surveillance,” “personalized,” or “dynamic” pricing.  Although many of these proposals have failed as 2025 state legislative sessions come to a close, lawmakers in New York, California, and a handful of other states are moving forward with a range of proposals.

Continue Reading State Legislatures Advance Surveillance Pricing Regulations

On June 19, 2025, the French Data Protection Authority (“CNIL”) published two recommendations for AI developers.  The first recommendation covers reliance on the GDPR’s legitimate interest legal basis for developing an AI model.  It provides examples of legitimate interests that can justify the use of personal data for AI development.  The second recommendation discusses measures to implement when collecting personal data through “web scraping.”  It provides a list of measures that, if followed, will ensure compliance with the GDPR’s accountability principle.

Continue Reading CNIL Publishes Recommendations on Legitimate Interest as a Legal Basis for AI Training

On May 22, 2025, the Cybersecurity and Infrastructure Security Agency (“CISA”), which sits within the Department of Homeland Security (“DHS”), released guidance for AI system operators regarding managing data security risks.  The associated press release explains that the guidance provides “best practices for system operators to mitigate cyber risks through the artificial intelligence lifecycle, including consideration on securing the data supply chain and protecting data against unauthorized modification by threat actors.”  CISA published the guidance in conjunction with the National Security Agency, the Federal Bureau of Investigation, and cyber agencies from Australia, the United Kingdom, and New Zealand.  The guidance is intended for organizations using AI systems in their operations, including Defense Industrial Base companies, National Security Systems owners, federal agencies, and Critical Infrastructure owners and operators.  It builds on the Joint Guidance on Deploying AI Systems Securely released by CISA and several other U.S. and foreign agencies in April 2024.

Continue Reading CISA Releases AI Data Security Guidance

In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines are intended to provide guidance on the set of AI Act obligations that started to apply on February 2, 2025 – which includes the definitions section of the AI Act, obligations relating to AI literacy, and prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on the definition of AI systems (the “Guidelines”). Please see our blog on the guidelines on prohibited AI practices here, and our blog on AI literacy requirements under the AI Act here.

Continue Reading European Commission Guidelines on the Definition of an “AI System”

In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines are intended to provide guidance on the set of AI Act obligations that started to apply on February 2, 2025 – which includes the definitions section of the AI Act, obligations relating to AI literacy, and prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on prohibited AI practices (the “Guidelines”). Please see our blog on the guidelines on the definition of AI systems here, and our blog on AI literacy requirements under the AI Act here.

Continue Reading European Commission Guidelines on Prohibited AI Practices under the EU Artificial Intelligence Act

On April 3, 2025, the Budapest District Court made a request for a preliminary ruling to the Court of Justice of the European Union (“CJEU”) relating to the application of EU copyright rules to outputs generated by large language model (LLM)-based chatbots, specifically Google’s Gemini (formerly Bard), in response to a user prompt. The case (C-250/25) involves a dispute between Like Company, a Hungarian news publisher, and Google Ireland Ltd.

Continue Reading CJEU Receives Questions on Copyright Rules Applying to AI Chatbot

AI chatbots are transforming how businesses handle consumer inquiries and complaints, offering speed and availability that traditional channels often cannot match.  However, the European Commission’s recent Digital Fairness Act Fitness Check has spotlighted a gap: EU consumers currently lack a cross-sectoral right to demand human contact when interacting with AI chatbots in business-to-consumer settings.  It is still unclear whether and how the European Commission proposes to address this.  The Digital Fairness Act could do so, but the Commission’s proposal is not expected to be published until the third quarter of 2026.  This post highlights key consumer protection considerations for companies deploying AI chatbots in the EU market.

Continue Reading Digital Fairness Act Series: Topic 2 – Transparency and Disclosure Obligations for AI Chatbots in Consumer Interactions