Emerging Technologies

The New York City Department of Consumer and Worker Protection (“DCWP”) recently issued a Notice of Adoption of Final Rule (“Final Rule”) relating to the implementation of New York City’s law regulating the use of automated employment decision tools (“AEDT”) by NYC employers and employment agencies.

NYC’s Local Law 144 now takes effect on July 5, 2023.  As discussed in our prior post, Local Law 144 prohibits employers and employment agencies from using certain Artificial Intelligence (“AI”) tools in the hiring or promotion process unless the tool has been subject to a bias audit within one year prior to its use, the results of the audit are publicly available, and notice requirements to employees or job candidates are satisfied.

The issuance of DCWP’s Final Rule follows the prior release of two sets of proposed rules in September 2022 and December 2022.  The Final Rule’s most significant updates from the December 2022 proposal include an expansion of the definition of AEDTs and modifications to the requirements for bias audits.  Key provisions of the Final Rule are summarized below.
Continue Reading NYC Artificial Intelligence Rule to Take Effect July 5, 2023: New York City Issues Final Rule Regulating the Use of AI Tools by Employers

Many employers and employment agencies have turned to artificial intelligence (“AI”) tools to assist them in making better and faster employment decisions, including in the hiring and promotion processes.  The use of AI for these purposes has been scrutinized and will now be regulated in New York City.  The New York City Department of Consumer and Worker Protection (“DCWP”) recently issued a Notice of Public Hearing and Opportunity to Comment on Proposed Rules relating to the implementation of New York City’s law regulating the use of automated employment decision tools (“AEDT”) by NYC employers and employment agencies.  As detailed further below, the comment period is open until October 24, 2022.
Continue Reading Artificial Intelligence & NYC Employers: New York City Seeks Public Comment on Proposed Rules That Would Regulate the Use of AI Tools in the Employment Context

On December 10th, the Federal Trade Commission (FTC) published a Statement of Regulatory Priorities that announced the agency’s intent to initiate rulemakings on issues such as privacy, security, algorithmic decision-making, and unfair methods of competition.
Continue Reading FTC Announces Regulatory Priorities for Both Privacy and Competition

Last week, the Ninth Circuit ruled in Lemmon v. Snap, Inc., No. 20-55295 (May 4, 2021), that 47 U.S.C. § 230 (“Section 230”) did not bar a claim of negligent product design against Snap, Inc., reversing and remanding a lower court ruling.
Continue Reading Ninth Circuit Denies Section 230 Defense in Products Liability Case

On March 31st, Washington Governor Jay Inslee signed into law SB 6280, a bill aimed at regulating state and local government agencies’ use of facial recognition services.  An overview of the law’s provisions can be found here.

Notably, Governor Inslee vetoed Section 10 of the bill, which
Continue Reading Washington Enacts New Facial Recognition Law

On 19 February 2020, the European Commission presented its long-awaited strategies for data and AI.  These follow Commission President Ursula von der Leyen’s commitment upon taking office to put forward legislative proposals for a “coordinated European approach to the human and ethical implications of AI” within the new Commission’s first 100 days.  Although the papers published this week do not set out a comprehensive EU legal framework for AI, they do give a clear indication of the Commission’s key priorities and anticipated next steps.

The Commission strategies are set out in four separate papers—two on AI, and one each on Europe’s digital future and the data economy.  Read together, it is clear that the Commission seeks to position the EU as a digital leader, both in terms of trustworthy AI and the wider data economy.
Continue Reading European Commission Presents Strategies for Data and AI (Part 1 of 4)

On February 4, 2020, the United Kingdom’s Centre for Data Ethics and Innovation (“DEI”) published its final report on “online targeting” (the “Report”), examining practices used to monitor a person’s online behaviour and subsequently customize their experience.  In October 2018, the UK government appointed the DEI, an expert committee that advises the UK government on how to maximize the benefits of new technologies, to explore how data is used in shaping people’s online experiences.  The Report sets out the DEI’s findings and recommendations.
Continue Reading Centre for Data Ethics and Innovation Publishes Final Report on “Online Targeting”

On November 21, 2019, the European Commission’s Expert Group on Liability and New Technologies – New Technologies Formation (“NTF”) published its Report on Liability for Artificial Intelligence and other emerging technologies.  The Commission tasked the NTF with establishing the extent to which liability frameworks in the EU will continue to operate effectively in relation to emerging digital technologies (including artificial intelligence, the internet of things, and distributed ledger technologies).  This report presents the NTF’s findings and recommendations.
Continue Reading Commission Expert Group Report on Liability for Emerging Digital Technologies