As 2021 comes to a close, we are sharing this month the key legislative and regulatory updates for artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), and privacy.  Lawmakers introduced a range of proposals to regulate AI, IoT, CAVs, and privacy, as well as to appropriate funds to study developments in these emerging spaces.  In addition, federal agencies have promulgated new rules and issued guidance to promote consumer awareness and safety, from developing a consumer labeling program for IoT devices to requiring the manufacturers and operators of CAVs to report crashes.  We are providing this year-end round-up in four parts.  In this post, we detail AI updates in Congress, state legislatures, and federal agencies.

Part I:  Artificial Intelligence

While various AI legislative proposals have been introduced in Congress, the United States has not embraced the broad, horizontal approach to AI regulation proposed by the European Commission, focusing instead on investing in infrastructure to promote the growth of AI.

In particular, the National Defense Authorization Act for Fiscal Year 2021 (“NDAA”) (H.R. 6395), which became law on January 1, 2021, represents the most substantial federal U.S. legislation on AI to date.  The NDAA established the National AI Initiative to coordinate ongoing AI research, development, and demonstration activities among stakeholders.  To implement the AI Initiative, the NDAA mandated the creation of a National Artificial Intelligence Initiative Office under the White House Office of Science and Technology Policy (“OSTP”) to undertake the AI Initiative activities, as well as an interagency committee and a National Artificial Intelligence Advisory Committee to coordinate and advise on related federal activity.  The White House also launched AI.gov and the National AI Research Resource Task Force to coordinate and accelerate AI research across all scientific disciplines.  In addition, the NDAA:

  • Directs the National Institute of Standards and Technology (“NIST”) to support the development of relevant standards and best practices pertaining to AI and appropriates $400 million to NIST through FY 2025;
  • Requires an assessment and report on whether AI technology acquired by the Department of Defense (“DOD”) is ethically and responsibly sourced, including steps taken or resources required to mitigate any deficiencies;
  • Includes a number of other provisions expanding research, development, and deployment of AI, among them authorizing $1.2 billion through FY 2025 for a Department of Energy (“DOE”) AI research program.

A growing body of state and federal proposals address algorithmic accountability and mitigation of unwanted bias and discrimination.  For example, the Mind Your Own Business Act of 2021 (S. 1444), introduced by Senator Ron Wyden (D-OR), would authorize the FTC to promulgate regulations that would require covered entities to conduct impact assessments of “high-risk automated decision systems,” such as AI and machine learning techniques, as well as “high-risk information systems” that “pose a significant risk to the privacy or security” of consumers’ personal information.  Other federal bills, like the Algorithmic Justice and Online Platform Transparency Act of 2021 (S. 1896), introduced by Senator Ed Markey (D-MA), would subject online platforms to transparency requirements such as describing to users the types of algorithmic processes they employ and the information they collect to power them.

States are considering their own slates of related proposals.  For example, the California State Assembly is considering the Automated Decision Systems Accountability Act of 2021 (AB-13), which would require monitoring and impact assessments for California businesses that provide “automated decision systems,” defined as products or services using AI or other computational techniques to make decisions.  A Washington state bill (SB 5116) would direct the state’s chief privacy officer to adopt rules regarding the development, procurement, and use of automated decision systems by public agencies.  More broadly, facial recognition technology has attracted renewed attention from state lawmakers, with wholesale bans on state and local government agencies’ use of facial recognition gaining steam.

Agencies are also focusing on AI, particularly in the enforcement context.  For example, the Federal Trade Commission (“FTC”) investigated and settled with Everalbum, Inc. in January 2021 in relation to its “Ever App,” a photo and video storage app that used facial recognition technology to automatically sort and “tag” users’ photographs.  Pursuant to the settlement agreement, Everalbum was required to delete models and algorithms that it developed using users’ uploaded photos and videos and obtain express consent from its users prior to applying facial recognition technology.  Enforcement activity by the FTC to regulate AI may become even more common, as legislative efforts seek to create a new privacy-focused bureau within the FTC and expand the agency’s civil penalty authority.

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as co-chair of Covington’s global and multi-disciplinary Internet of Things (IoT) group. She represents and advises content distributors, broadcast companies, trade associations, and other media and technology entities on a wide range of issues. Jennifer has more than two decades of experience advising clients in the communications, media and technology sectors, and has served as a co-chair for these practices for more than 15 years. On IoT issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including legal issues with respect to connected and autonomous vehicles, internet connected devices, smart ecosystems, and other IoT products and services.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements with cable, satellite, and telco companies, network affiliation and other program rights agreements for television companies, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Jayne Ponder

Jayne Ponder is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity Practice Group. Jayne’s practice focuses on a broad range of privacy, data security, and technology issues. She provides ongoing privacy and data protection counsel to companies, including on topics related to privacy policies and data practices, the California Consumer Privacy Act, and cyber and data security incident response and preparedness.

Andrew Longhi

Andrew Longhi is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity and Communications and Media Practice Groups. Andrew advises clients on a broad range of privacy and cybersecurity issues, including compliance obligations, commercial transactions involving personal information and cybersecurity risk, and responses to regulatory inquiries. Andrew is admitted to the Bar under DC App. R. 46-A (Emergency Examination Waiver); practice supervised by DC Bar members.

Lindsay Brewer

Lindsay Brewer is an associate in the firm’s Washington office. She advises clients on environmental, product safety, occupational safety, and public policy issues. She has experience with a wide range of environmental and safety programs, with a focus on the Clean Air Act (CAA), the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA/Superfund), the Federal Trade Commission Act (FTC Act), the Consumer Product Safety Act (CPSA), the Federal Motor Vehicle Safety Standards (FMVSS), and the Occupational Safety and Health Act (OSH Act).

Nira Pandya

Nira Pandya advises private and public companies on venture capital financings, mergers and acquisitions, joint ventures, strategic investments, and other corporate transactions. She also represents emerging companies in general corporate matters, including entity formation, corporate governance, and securities law compliance.