
Jadzia Pierce is an associate in the firm’s Washington, DC office. She is a member of the Data Privacy and Cybersecurity and Communications and Media Practice Groups.

On March 31, 2020, Washington Governor Jay Inslee signed into law SB 6280, a bill aimed at regulating state and local government agencies’ use of facial recognition services.  An overview of the law’s provisions can be found here.

Notably, Governor Inslee vetoed Section 10 of the bill, which aimed to establish a legislative task force to study facial recognition services.

On March 12, 2020, Washington’s state legislature passed SB 6280, a bill that will regulate state and local government agencies’ use of facial recognition services (“FRSs”).  The bill aims to create a legal framework by which agencies may use FRSs to the benefit of society (for example, by assisting agencies in locating missing or deceased persons), but prohibits uses that “threaten our democratic freedoms and put our civil liberties at risk.”
Continue Reading Washington State Passes Bill Limiting Government Use of Facial Recognition

Cardi B might like it, but the Federal Trade Commission (“FTC”) did not.  On March 5, 2020, the agency sent Cardi B and other high-profile influencers warning letters alleging that the influencers made inadequate disclosures in their endorsements of Teami tea.  The letters followed on the heels of the FTC’s proposed order against Teami, LLC for allegedly making deceptive claims about weight loss and other health benefits in its advertisements and failing to adequately instruct influencers about how to comply with the law when endorsing Teami products.
Continue Reading FTC Sends Warning Letters to Teami Tea Influencers

On February 14, 2020, California State Assembly Member Ed Chau introduced the Automated Decision Systems Accountability Act of 2020, which would require any business in California that provides a person with a program or device that uses an “automated decision system” (“ADS”) to establish processes to “continually test for biases during the development and usage of the ADS” and to conduct an impact assessment on that program or device.

ADS is defined broadly as “a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts persons.”  The required ADS impact assessments would study the various aspects of the ADS and its development process, “including, but not limited to, the design and training data of the ADS, for impacts on accuracy, fairness, bias, discrimination, privacy, and security.”  At minimum, the assessments must include “[a] detailed description of the ADS, its design, training provided on its use, its data, and its purpose” and “[a]n assessment of the relative benefits and costs of the ADS in light of its purpose,” with certain factors such as data minimization and risk mitigation required in the cost-benefit analysis.

The provider of the ADS also must determine whether the ADS “has a disproportionate adverse impact on a protected class,” examine whether it serves “reasonable objectives and furthers a legitimate interest,” and consider alternatives or reasonable modifications that could be incorporated “to limit adverse consequences on protected classes.”
Continue Reading California Introduces Bill to Regulate Automated Decision Systems

On February 12, 2020, Senator Kirsten Gillibrand (D-NY) announced a plan to create a new Data Protection Agency through her proposed legislation, the Data Protection Act of 2020 (S.3300).

Under the proposal, the new agency would replace the Federal Trade Commission (FTC) as the “privacy cop on the beat.”  As such, the FTC’s current authority in the privacy space—including its ability to draft guidelines, conduct studies, and issue implementing regulations for certain federal privacy laws—would be transferred to the new agency.

Unlike the Online Privacy Act, a bill introduced by Representatives Anna Eshoo (D-CA-18) and Zoe Lofgren (D-CA-19) that also would create a new privacy agency, Sen. Gillibrand’s bill would not create a new omnibus federal privacy law.  Instead, it is focused on the creation of the Data Protection Agency and its rulemaking authority.  However, various aspects of the new agency’s authority provide valuable insights into what privacy regulation at the federal level might look like under the bill.
Continue Reading Sen. Kirsten Gillibrand Proposes New Digital Privacy Agency

Today, President Trump signed an Executive Order (“EO”), “Maintaining American Leadership in Artificial Intelligence,” that launches a coordinated federal government strategy for Artificial Intelligence (the “AI Initiative”).  Among other things, the AI Initiative aims to solidify American leadership in AI by empowering federal agencies to drive breakthroughs in AI research and development (“R&D”) (including by making data computing resources available to the AI research community), to establish technological standards to support reliable and trustworthy systems that use AI, to provide guidance with respect to regulatory approaches, and to address issues related to the AI workforce.  The EO follows the national AI strategies already announced by at least 18 other countries, and signals that investment in artificial intelligence will continue to escalate in the near future—as will deliberations with respect to how AI-based technologies should be governed.

Continue Reading President Trump Signs Executive Order on Artificial Intelligence

On December 6, 2018, the Australian Parliament passed a bill that aims to address concerns raised by national security and law enforcement agencies regarding encrypted communications.

Introduced in September, the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (the Act) may affect technology companies around the globe.  As discussed in our previous post, the Act requires “designated communications providers” (a definition that includes foreign and domestic communications providers) to provide support to Australian government agencies under new legal bases provided by the Act’s framework.  A Technical Assistance Notice (TAN), for example, will permit certain government entities to require assistance that a designated communications provider is already capable of giving.  If the provider lacks the capability to assist, a Technical Capability Notice (TCN) may require the provider to build such capability.

As described in greater detail in the Act’s accompanying Explanatory Memorandum, the ability to issue TANs and TCNs is not without limitation.  Importantly, neither form of Notice may require providers to implement or build a “systemic weakness or systemic vulnerability” into their electronic protections, or prevent providers from patching such weaknesses or vulnerabilities.  Recent additions to the Act took this prohibition even further—requiring that in any case where a weakness is selectively introduced to a “target” technology connected with a particular person, the prohibition against systemic weaknesses or vulnerabilities extends to anything that would “jeopardize the security of information held by any other person” aside from the intended target.  The phrase “jeopardize the security of information” is defined by the Act as any “act or thing that creates a material risk that otherwise secure information can be accessed by an unauthorized party.”

Continue Reading Australia’s Encryption Bill Becomes Law

On September 26, 2018, New Jersey federal district judge Madeline Cox Arleo dismissed an eight-count class action complaint in its entirety against three smart TV makers: Samsung, LG, and Sony.  The plaintiffs alleged that defendants’ smart TVs continuously monitored and tracked their viewing habits, recorded their voices, and then transmitted that information to defendants’ servers,