Artificial Intelligence (AI)

On 29 March 2023, the UK Information Commissioner’s Office (“ICO”) published updated Guidance on AI and data protection (the “Guidance”) following “requests from UK industry to clarify requirements for fairness in AI”. AI has been a strategic priority for the ICO for several years. In 2020, the ICO published its first set of guidance on AI (as discussed in our blog post here) which it complemented with supplementary recommendations on Explaining Decisions Made with AI and an AI and Data Protection risk toolkit in 2022. The updated Guidance forms part of the UK’s wider efforts to adopt a “pro-innovation” approach to AI regulation which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s approach to AI regulation, see our blog post here).

The updated Guidance covers the ICO’s view of best practice for data protection-compliant AI, as well as how the ICO interprets data protection law in the context of AI systems that process personal data. The Guidance has been restructured in line with the UK GDPR’s data protection principles, and features new content, including guidance on fairness, transparency, lawfulness and accountability when using AI systems.

On April 25, 2023, four federal agencies — the Department of Justice (“DOJ”), Federal Trade Commission (“FTC”), Consumer Financial Protection Bureau (“CFPB”), and Equal Employment Opportunity Commission (“EEOC”) — released a joint statement on the agencies’ efforts to address discrimination and bias in automated systems.

The New York City Department of Consumer and Worker Protection (“DCWP”) recently issued a Notice of Adoption of Final Rule (“Final Rule”) relating to the implementation of New York City’s law regulating the use of automated employment decision tools (“AEDT”) by NYC employers and employment agencies.

NYC’s Local Law 144 now takes effect on July 5, 2023.  As discussed in our prior post, Local Law 144 prohibits employers and employment agencies from using certain Artificial Intelligence (“AI”) tools in the hiring or promotion process unless the tool has been subject to a bias audit within one year prior to its use, the results of the audit are publicly available, and notice requirements to employees or job candidates are satisfied.

The issuance of DCWP’s Final Rule follows the prior release of two sets of proposed rules in September 2022 and December 2022.  The Final Rule’s most significant updates from the December 2022 proposal include an expansion of the definition of AEDTs and modifications to the requirements for bias audits.  Key provisions of the Final Rule are summarized below.

On 29 March 2023, the UK Government published a White Paper entitled “A pro-innovation approach to AI regulation” (“White Paper”). The White Paper elaborates on the approach to AI set out by the Government in its 2022 AI Governance and Regulation Policy Statement (“Policy Statement” – covered in our blog post here). This announcement comes following the Government’s commitments, in the Spring Budget 2023, to build an expert taskforce to develop the UK’s capabilities in AI foundation models and produce guidance on the relationship between intellectual property law and generative AI (for more details of these initiatives, see here).

In its White Paper, the UK Government confirms that, unlike the EU, it does not plan to adopt new legislation to regulate AI, nor will it create a new regulator for AI (for further details on the EU’s proposed AI regulation see our blog posts here and here). Instead, the UK would require existing regulators, including the UK Information Commissioner’s Office (“ICO”), to take responsibility for the establishment, promotion, and oversight of responsible AI in their respective sectors. Regulators’ activities would be reinforced by the establishment of new support and oversight functions within central Government. This approach is already beginning to play out in certain regulated areas in the UK. For example, in October 2022, the Bank of England and Financial Conduct Authority (“FCA”) jointly released a Discussion Paper on Artificial Intelligence and Machine Learning considering how AI in financial services should be regulated and, in March 2023, the ICO updated its Guidance on AI and Data Protection.

On April 11, 2023, the Cyberspace Administration of China (“CAC”) released draft Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能服务管理办法(征求意见稿)》) (“draft Measures”) (official Chinese version available here) for public consultation.  The deadline for submitting comments is May 10, 2023.

The draft Measures would regulate generative Artificial Intelligence (“AI”) services that are “provided to the public in mainland China.”  These requirements cover a wide range of issues that are frequently debated in relation to the governance of generative AI globally, such as data protection, non-discrimination, bias and the quality of training data.  The draft Measures also highlight issues arising from the use of generative AI that are of particular concern to the Chinese government, such as content moderation, the completion of a security assessment for new technologies, and algorithmic transparency.  The draft Measures thus reflect the Chinese government’s objective to craft its own governance model for new technologies such as generative AI.

Further, and notwithstanding the requirements introduced by the draft Measures (as described in greater detail below), the text states that the government encourages the indigenous development of, and international cooperation in relation to, generative AI technology, and encourages companies to adopt “secure and trustworthy software, tools, computing and data resources” to that end.

Notably, the draft Measures do not distinguish between generative AI services offered to individual consumers and those offered to enterprise customers, although certain requirements appear to be directed more to consumer-facing services than to enterprise services.

This blog post identifies a few highlights of the draft Measures.

This quarterly update summarizes key legislative and regulatory developments in the first quarter of 2023 related to Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and autonomous vehicles (“CAVs”), and data privacy and cybersecurity.

The EU’s AI Act Proposal is continuing to make its way through the ordinary legislative procedure.  In December 2022, the Council published its sixth and final compromise text (see our previous blog post), and over the last few months, the European Parliament has been negotiating its own amendments to the AI Act Proposal.  The European Parliament is expected to finalize its position in the upcoming weeks, before entering into trilogue negotiations with the Commission and the Council, which could begin as early as April 2023.  The AI Act is expected to be adopted before the end of 2023, during the Spanish presidency of the Council, and ahead of the European elections in 2024. 

During negotiations between the Council and the European Parliament, we can expect further changes to the Commission’s AI Act proposal, in an attempt to iron out any differences and agree on a final version of the Act.  Below, we outline the key amendments proposed by the European Parliament in the course of its negotiations with the Council.

On 24 January 2023, the Italian Supervisory Authority (“Garante”) announced that it fined three hospitals 55,000 EUR each for their unlawful use of an artificial intelligence (“AI”) system for risk stratification purposes, i.e., to systematically categorize patients based on their health status. The Garante also ordered the hospitals to erase all the data they obtained as a consequence of that unlawful processing.

At the California Privacy Protection Agency (“CPPA”) board meeting last week, the agency adopted the regulations and directed the staff to file the rulemaking package with the Office of Administrative Law (“OAL”). Before these regulations can become effective (and therefore enforceable), the OAL must complete its review of the regulations.  It has 30 working days to do so.

2023 is set to be an important year for developments in AI regulation and policy in the EU. At the end of last year, on December 6, 2022, the Council of the EU (the “Council”) adopted its general approach and compromise text on the proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “AI Act”), bringing the AI Act one step closer to being adopted. The European Parliament is currently developing its own position on the AI Act, which is expected to be finalized by March 2023. Following this, the Council, Parliament and European Commission (“Commission”) will enter into trilogue discussions to finalize the Act. Once adopted, it will be directly applicable across all EU Member States, and its obligations are likely to apply three years after the AI Act’s entry into force (according to the Council’s compromise text).

In 2022, the Commission also put forward new liability rules for AI systems via the proposed AI Liability Directive (“AILD”) and updates to the Product Liability Directive (“PLD”). The AILD establishes rules for non-contractual, fault-based civil claims involving AI systems. Specifically, the proposal establishes rules that would govern the preservation and disclosure of evidence in cases involving high-risk AI, as well as rules on the burden of proof and corresponding rebuttable presumptions. Meanwhile, the revised PLD harmonizes rules that apply to no-fault liability claims brought by persons who suffer physical injury or damage to property caused by defective products. Software, including AI systems, is explicitly named as a “product” under the proposal, meaning that an injured person can claim compensation for damage caused by AI (see our previous blog post for further details on the proposed AILD and PLD). Both pieces of legislation will be reviewed, and potentially amended, by the Council and the European Parliament in 2023.