Artificial Intelligence (AI)

On 24 January 2023, the Italian Supervisory Authority (“Garante”) announced that it had fined three hospitals 55,000 EUR each for their unlawful use of an artificial intelligence (“AI”) system for risk stratification purposes, i.e., to systematically categorize patients based on their health status. The Garante also ordered the hospitals to erase all the data they obtained as a consequence of that unlawful processing.

Continue Reading Italian Garante Fines Three Hospitals Over Their Use of AI for Risk Stratification Purposes, Establishes That Predictive Medicine Processing Requires the Patient’s Explicit Consent

At the CPPA board meeting last week, the agency adopted the regulations and directed the staff to file the rulemaking package with the Office of Administrative Law (“OAL”). Before these regulations can become effective (and therefore enforceable), the OAL must complete its review of the regulations. It has 30 working days to complete its review.

2023 is set to be an important year for developments in AI regulation and policy in the EU. At the end of last year, on December 6, 2022, the Council of the EU (the “Council”) adopted its general approach and compromise text on the proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “AI Act”), bringing the AI Act one step closer to being adopted. The European Parliament is currently developing its own position on the AI Act, which is expected to be finalized by March 2023. Following this, the Council, Parliament and European Commission (“Commission”) will enter into trilogue discussions to finalize the Act. Once adopted, it will be directly applicable across all EU Member States, and its obligations are likely to apply three years after the AI Act’s entry into force (according to the Council’s compromise text).

In 2022, the Commission also put forward new liability rules for AI systems via the proposed AI Liability Directive (“AILD”) and updates to the Product Liability Directive (“PLD”). The AILD establishes rules for non-contractual, fault-based civil claims involving AI systems. Specifically, the proposal establishes rules that would govern the preservation and disclosure of evidence in cases involving high-risk AI, as well as rules on the burden of proof and corresponding rebuttable presumptions. Meanwhile, the revised PLD harmonizes rules that apply to no-fault liability claims brought by persons who suffer physical injury or damage to property caused by defective products. Software, including AI systems, is explicitly named as a “product” under the proposal, meaning that an injured person can claim compensation for damage caused by AI (see our previous blog post for further details on the proposed AILD and PLD). Both pieces of legislation will be reviewed, and potentially amended, by the Council and the European Parliament in 2023.

Continue Reading EU AI Policy and Regulation: What to look out for in 2023

On January 26, 2023, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) released its Artificial Intelligence Risk Management Framework (the “Framework”) guidance document, alongside a companion AI RMF Playbook that suggests ways to navigate and use the Framework. 

Continue Reading NIST Releases New Artificial Intelligence Risk Management Framework

On December 1, 2022, a committee of the Brazilian Senate presented a report (currently available only in Portuguese) with research on the regulation of artificial intelligence (“AI”) and a draft AI law (see pages 15-58) (“Draft AI Law”) that will serve as the starting point for deliberations by the Senate on new AI legislation.  When preparing the 900+ page report and Draft AI Law, the Senate committee drew inspiration from earlier proposals for regulating AI in Brazil and its research into how OECD countries are regulating (or planning to regulate) in this area, as well as inputs received during a public hearing and in the form of written comments from stakeholders.  This blog post highlights 13 key aspects of the Draft AI Law.

Continue Reading Brazil’s Senate Committee Publishes AI Report and Draft AI Law

This quarterly update summarizes key legislative and regulatory developments in the fourth quarter of 2022 related to Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and autonomous vehicles (“CAVs”), and data privacy and cybersecurity.

Continue Reading U.S. AI, IoT, CAV, and Privacy Legislative Update – Fourth Quarter 2022

On October 13, 2022, the European Data Protection Supervisor (“EDPS”) released its Opinion 20/2022 on a Recommendation issued by the European Commission in August 2022 calling for a Council Decision authorising the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law.

This quarterly update summarizes key legislative and regulatory developments in the third quarter of 2022 related to Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and autonomous vehicles (“CAVs”), and data privacy and cybersecurity. 

Continue Reading U.S. AI, IoT, CAV, and Privacy Legislative Update – Third Quarter 2022

Many employers and employment agencies have turned to artificial intelligence (“AI”) tools to assist them in making better and faster employment decisions, including in the hiring and promotion processes.  The use of AI for these purposes has been scrutinized and will now be regulated in New York City.  The New York City Department of Consumer and Worker Protection (“DCWP”) recently issued a Notice of Public Hearing and Opportunity to Comment on Proposed Rules relating to the implementation of New York City’s law regulating the use of automated employment decision tools (“AEDT”) by NYC employers and employment agencies.  As detailed further below, the comment period is open until October 24, 2022.

Continue Reading Artificial Intelligence & NYC Employers:  New York City Seeks Public Comment on Proposed Rules That Would Regulate the Use of AI Tools in the Employment Context

On September 28, 2022, the European Commission published its long-promised proposal for an AI Liability Directive.  The draft Directive is intended to complement the EU AI Act, which the EU’s institutions are still negotiating.  In parallel, the European Commission also published its proposal to update the EU’s 1985 Product Liability Directive.  If adopted, the proposals will change the liability rules for software and AI systems in the EU.

The draft AI Liability Directive establishes rules applicable to non-contractual, fault-based civil claims involving AI systems.  Specifically, the proposal establishes rules that would govern the preservation and disclosure of evidence in cases involving high-risk AI, as well as rules on the burden of proof and corresponding rebuttable presumptions.  If adopted as proposed, the draft AI Liability Directive will apply to damages that occur two years or more after the Directive enters into force; five years after its entry into force, the Commission will consider the need for rules on no-fault liability for AI claims.

As for the draft Directive on Liability of Defective Products, if adopted, it would give EU Member States one year from its entry into force to implement it in their national laws.  The draft Directive would apply to products placed on the market one year after it enters into force.

Continue Reading European Commission Publishes Directive on the Liability of Artificial Intelligence Systems