On December 1, 2022, a committee of the Brazilian Senate presented a report (currently available only in Portuguese) with research on the regulation of artificial intelligence (“AI”) and a draft AI law (see pages 15-58) (“Draft AI Law”) that will serve as the starting point for deliberations by the Senate on new AI legislation.  When preparing the 900+ page report and Draft AI Law, the Senate committee drew on earlier proposals for regulating AI in Brazil, its research into how OECD countries are regulating (or planning to regulate) in this area, and input received during a public hearing and through written comments from stakeholders.  This blog post highlights 13 key aspects of the Draft AI Law.

Continue Reading Brazil’s Senate Committee Publishes AI Report and Draft AI Law

Many employers and employment agencies have turned to artificial intelligence (“AI”) tools to assist them in making better and faster employment decisions, including in the hiring and promotion processes.  The use of AI for these purposes has been scrutinized and will now be regulated in New York City.  The New York City Department of Consumer and Worker Protection (“DCWP”) recently issued a Notice of Public Hearing and Opportunity to Comment on Proposed Rules relating to the implementation of New York City’s law regulating the use of automated employment decision tools (“AEDT”) by NYC employers and employment agencies.  As detailed further below, the comment period is open until October 24, 2022.

Continue Reading Artificial Intelligence & NYC Employers:  New York City Seeks Public Comment on Proposed Rules That Would Regulate the Use of AI Tools in the Employment Context

Today, the Federal Trade Commission (FTC) announced that it anticipates proposing a privacy rulemaking this month, with comments closing in August.  This announcement follows the agency’s statement in December that it planned to begin a rulemaking to “curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.” 

In January 2022, China released two regulations (one in draft form) that touch on hot topics in technological development – algorithmic recommendations and deep synthesis – making it one of the first countries in the world to directly tackle these cutting-edge areas.  In this post, we provide an overview of the draft Provisions on…

On December 10th, the Federal Trade Commission (FTC) published a Statement of Regulatory Priorities that announced the agency’s intent to initiate rulemakings on issues such as privacy, security, algorithmic decision-making, and unfair methods of competition.
Continue Reading FTC Announces Regulatory Priorities for Both Privacy and Competition

In April 2021, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “Regulation”), which would establish rules on the development, placing on the market, and use of artificial intelligence systems (“AI systems”) across the EU. The proposal, comprising 85 articles and nine annexes, is part of a wider package of Commission initiatives aimed at positioning the EU as a world leader in trustworthy and ethical AI and technological innovation.

The Commission’s objectives with the Regulation are twofold: to promote the development of AI technologies and harness their potential benefits, while also protecting individuals against potential threats to their health, safety, and fundamental rights posed by AI systems. To that end, the Commission proposal focuses primarily on AI systems identified as “high-risk,” but also prohibits three AI practices and imposes transparency obligations on providers of certain non-high-risk AI systems as well. Notably, the proposal would impose significant administrative costs on providers of high-risk AI systems, estimated at around 10 percent of the systems’ underlying value, reflecting compliance, oversight, and verification costs. This blog post highlights several key aspects of the proposal.

Continue Reading European Commission Proposes New Artificial Intelligence Regulation

Last week, the Ninth Circuit ruled in Lemmon v. Snap, Inc., No. 20-55295 (May 4, 2021), that 47 U.S.C. § 230 (“Section 230”) did not bar a claim of negligent product design against Snap, Inc., reversing and remanding a lower court ruling.
Continue Reading Ninth Circuit Denies Section 230 Defense in Products Liability Case

A number of legislative proposals to amend Section 230 of the 1996 Communications Decency Act (“Section 230”) have already been introduced in the new Congress.  Section 230 provides immunity to an owner or user of an “interactive computer service” — generally understood to encompass internet platforms and websites — from liability for content posted by a third party.

On February 8, 2021, Senator Mark Warner (D-VA) introduced the Safeguarding Against Fraud, Exploitation, Threats, Extremism, and Consumer Harms Act (“SAFE TECH Act”), cosponsored by Senators Amy Klobuchar (D-MN) and Mazie Hirono (D-HI).  The bill would narrow the scope of immunity that has been applied to online platforms.  Specifically, the SAFE TECH Act would amend Section 230 in the following ways:
Continue Reading SAFE TECH Act Would Limit Scope and Redesign Framework of Section 230 Immunity

On October 9, 2020, the French Supervisory Authority (“CNIL”) issued guidance on the use of facial recognition technology for identity checks at airports (available here, in French).  The CNIL indicates that it has issued this guidance in response to a request from several operators and service providers of airports in France who are planning to deploy this technology on an experimental basis.  In this blog post, we summarize the main principles that the CNIL says airports should observe when deploying biometric technology.

Continue Reading French Supervisory Authority Releases Strict Guidance on the Use of Facial Recognition Technology at Airports

On June 16, 2020, the First Circuit released its opinion in United States v. Moore-Bush.  The issue presented was whether the Government’s warrantless use of a pole camera to continuously record the front of Defendants’ home for eight months, as well as their and their visitors’ comings and goings, infringed on the Defendants’ reasonable expectation of privacy in and around their home and thereby violated the Fourth Amendment.  The appeal followed the district court’s June 2019 decision granting Defendants’ motions to exclude evidence obtained via the pole camera.  The Government, without obtaining a warrant, had installed a pole camera on a utility pole across the street from Defendants’ residence.  The pole camera (1) recorded continuous video for approximately eight months, (2) focused on the driveway and the front of the house, (3) could zoom in close enough to read license plate numbers, and (4) created a digitally searchable log.

In their motions to exclude, the Defendants, relying on Katz v. United States, argued that they had both a subjective and an objectively reasonable expectation of privacy in their movements into and around their home, and that the warrantless use of the pole camera therefore constituted an unreasonable search under the Fourth Amendment.  The Government relied on an earlier First Circuit case, United States v. Bucci, which held that there was no reasonable expectation of privacy in a person’s movements outside of and around their home—“An individual does not have an expectation of privacy in items or places he exposes to the public.”  Thus, Bucci held that the use of a pole camera for eight months did not constitute a search.
Continue Reading United States v. Moore-Bush: No Reasonable Expectation of Privacy Around the Home