Artificial Intelligence (AI)

2023 is set to be an important year for developments in AI regulation and policy in the EU. At the end of last year, on December 6, 2022, the Council of the EU (the “Council”) adopted its general approach and compromise text on the proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “AI Act”), bringing the AI Act one step closer to being adopted. The European Parliament is currently developing its own position on the AI Act, which is expected to be finalized by March 2023. Following this, the Council, Parliament and European Commission (“Commission”) will enter into trilogue discussions to finalize the Act. Once adopted, the AI Act will be directly applicable across all EU Member States, and its obligations are likely to apply three years after its entry into force (according to the Council’s compromise text).

In 2022, the Commission also put forward new liability rules for AI systems via the proposed AI Liability Directive (“AILD”) and updates to the Product Liability Directive (“PLD”). The AILD establishes rules for non-contractual, fault-based civil claims involving AI systems. Specifically, the proposal establishes rules that would govern the preservation and disclosure of evidence in cases involving high-risk AI, as well as rules on the burden of proof and corresponding rebuttable presumptions. Meanwhile, the revised PLD harmonizes rules that apply to no-fault liability claims brought by persons who suffer physical injury or damage to property caused by defective products. Software, including AI systems, is explicitly named as a “product” under the proposal, meaning that an injured person can claim compensation for damage caused by AI (see our previous blog post for further details on the proposed AILD and PLD). Both pieces of legislation will be reviewed, and potentially amended, by the Council and the European Parliament in 2023.

Continue Reading EU AI Policy and Regulation: What to look out for in 2023

On December 1, 2022, a committee of the Brazilian Senate presented a report (currently available only in Portuguese) with research on the regulation of artificial intelligence (“AI”) and a draft AI law (see pages 15-58) (“Draft AI Law”) that will serve as the starting point for deliberations by the Senate on new AI legislation.  When preparing the 900+ page report and Draft AI Law, the Senate committee drew inspiration from earlier proposals for regulating AI in Brazil and its research into how OECD countries are regulating (or planning to regulate) in this area, as well as inputs received during a public hearing and in the form of written comments from stakeholders.  This blog post highlights 13 key aspects of the Draft AI Law.

Continue Reading Brazil’s Senate Committee Publishes AI Report and Draft AI Law

On October 13, 2022, the European Data Protection Supervisor (“EDPS”) released its Opinion 20/2022 on a Recommendation issued by the European Commission in August 2022 calling for a Council Decision authorising the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law.

Many employers and employment agencies have turned to artificial intelligence (“AI”) tools to assist them in making better and faster employment decisions, including in the hiring and promotion processes.  The use of AI for these purposes has been scrutinized and will now be regulated in New York City.  The New York City Department of Consumer and Worker Protection (“DCWP”) recently issued a Notice of Public Hearing and Opportunity to Comment on Proposed Rules relating to the implementation of New York City’s law regulating the use of automated employment decision tools (“AEDT”) by NYC employers and employment agencies.  As detailed further below, the comment period is open until October 24, 2022.

Continue Reading Artificial Intelligence & NYC Employers:  New York City Seeks Public Comment on Proposed Rules That Would Regulate the Use of AI Tools in the Employment Context

With the growing use of AI systems and the increasing complexity of the legal framework relating to such use, the need for appropriate methods and tools to audit AI systems is becoming more pressing both for professionals and for regulators. The French Supervisory Authority (“CNIL”) has recently tested tools that could potentially help its auditors

The UK Government recently published its AI Governance and Regulation: Policy Statement (the “AI Statement”) setting out its proposed approach to regulating Artificial Intelligence (“AI”) in the UK. The AI Statement was published alongside the draft Data Protection and Digital Information Bill (see our blog post here for further details on the Bill) and is

On July 5, 2022, the Cybersecurity and Infrastructure Security Agency (“CISA”) and the National Institute of Standards and Technology (“NIST”) strongly recommended that organizations begin preparing to transition to a post-quantum cryptographic standard.  The term “post-quantum cryptography” is often referred to as “quantum-resistant cryptography” and includes “cryptographic algorithms or methods that are assessed not to be specifically vulnerable to attack by” a CRQC (cryptanalytically relevant quantum computer) or a classical computer.  NIST “has announced that a new post-quantum cryptographic standard will replace current public-key cryptography, which is vulnerable to quantum-based attacks.”  NIST does not intend to publish the new post-quantum cryptographic standard for commercial products until 2024 but urges companies to begin preparing now by following the Post-Quantum Cryptography Roadmap.

Continue Reading CISA and NIST Urge Companies to Prepare to Transition to a Post-Quantum Cryptographic Standard

The National Institute of Standards and Technology (“NIST”) issued its initial draft of the “AI Risk Management Framework” (“AI RMF”), which aims to provide voluntary, risk-based guidance on the design, development, and deployment of AI systems.  NIST is seeking public comments on this draft via email, at AIframework@nist.gov, through April 29, 2022.  Feedback received on this draft will be incorporated into the second draft of the framework, which will be issued this summer or fall.
Continue Reading NIST Releases Draft AI Risk Management Framework for Public Comment

This quarterly update summarizes key federal legislative and regulatory developments in the first quarter of 2022 related to artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), and data privacy, and highlights a few particularly notable developments in the States.  In the first quarter of 2022, Congress and the Administration focused on required assessments and funding for AI, restrictions on targeted advertising using personal data collected from individuals and connected devices, creating rules to enhance CAV safety, and children’s privacy topics.
Continue Reading U.S. AI, IoT, CAV, and Privacy Legislative Update – First Quarter 2022

As 2021 comes to a close, we will be sharing the key legislative and regulatory updates for artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), and privacy this month.  Lawmakers introduced a range of proposals to regulate AI, IoT, CAVs, and privacy as well as appropriate funds to study developments in these emerging spaces.  In addition, from developing a consumer labeling program for IoT devices to requiring the manufacturers and operators of CAVs to report crashes, federal agencies have promulgated new rules and issued guidance to promote consumer awareness and safety.  We are providing this year-end round up in four parts.  In this post, we detail IoT updates in Congress, the states, and federal agencies.

Part IV: Internet of Things

This quarter’s IoT-related Congressional and regulatory updates ranged from promoting consumer awareness to bolstering the security of connected devices.  In particular, the Federal Communications Commission (“FCC”) has taken a number of actions to promote the growth of IoT, while the National Institute of Standards and Technology (“NIST”) continues to work to fulfill its obligations under President Biden’s May Executive Order on Improving the Nation’s Cybersecurity (“EO”).  The IoT Cybersecurity Improvement Act of 2020 (H.R.1668) additionally tasked NIST with developing security standards and guidelines for the federal government’s IoT devices.  This year, NIST put out a number of reports to carry out this mandate, including guidance documents to assist federal agencies with evaluating the security capabilities required in their IoT devices (NIST SP 800-213).
Continue Reading U.S. AI and IoT Legislative Update – Year-End 2021