As 2021 comes to a close, we are sharing the key legislative and regulatory updates for artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), and privacy this month.  Lawmakers introduced a range of proposals to regulate AI, IoT, CAVs, and privacy, as well as to appropriate funds to study developments in these emerging spaces.  In addition, federal agencies have promulgated new rules and issued guidance to promote consumer awareness and safety, from developing a consumer labeling program for IoT devices to requiring the manufacturers and operators of CAVs to report crashes.  We are providing this year-end round-up in four parts.  In this post, we detail AI updates in Congress, state legislatures, and federal agencies.

Part I:  Artificial Intelligence

While various AI legislative proposals have been introduced in Congress, the United States has not embraced a horizontal, broad-based approach to AI regulation like that proposed by the European Commission, focusing instead on investing in infrastructure to promote the growth of AI.

In particular, the National Defense Authorization Act for Fiscal Year 2021 (“NDAA”) (H.R. 6395), enacted in January 2021, represents the most substantial federal U.S. legislation on AI to date.  The NDAA established the National AI Initiative to coordinate ongoing AI research, development, and demonstration activities among stakeholders.  To implement the AI Initiative, the NDAA mandates the creation of a National Artificial Intelligence Initiative Office under the White House Office of Science and Technology Policy (“OSTP”) to undertake the AI Initiative activities, as well as an interagency National Artificial Intelligence Advisory Committee to coordinate related federal activity.  The White House also launched AI.gov and the National AI Research Resource Task Force to coordinate and accelerate AI research across all scientific disciplines.  In addition, the NDAA:

  • Directs the National Institute of Standards and Technology (“NIST”) to support the development of relevant standards and best practices pertaining to AI and authorizes $400 million for NIST through FY 2025;
  • Requires an assessment and report on whether AI technology acquired by the DOD is developed in an ethically and responsibly sourced manner, including steps taken or resources required to mitigate any deficiencies;
  • Includes a number of other provisions expanding research, development, and deployment of AI, including authorizing $1.2 billion through FY 2025 for a Department of Energy (“DOE”) AI research program.

A growing body of state and federal proposals address algorithmic accountability and mitigation of unwanted bias and discrimination.  For example, the Mind Your Own Business Act of 2021 (S. 1444), introduced by Senator Ron Wyden (D-OR), would authorize the FTC to promulgate regulations that would require covered entities to conduct impact assessments of “high-risk automated decision systems,” such as AI and machine learning techniques, as well as “high-risk information systems” that “pose a significant risk to the privacy or security” of consumers’ personal information.  Other federal bills, like the Algorithmic Justice and Online Platform Transparency Act of 2021 (S. 1896), introduced by Senator Ed Markey (D-MA), would subject online platforms to transparency requirements such as describing to users the types of algorithmic processes they employ and the information they collect to power them.

States are considering their own slates of related proposals.  For example, the California State Assembly is considering the Automated Decision Systems Accountability Act of 2021 (AB-13), which would require monitoring and impact assessments for California businesses that provide “automated decision systems,” defined as products or services using AI or other computational techniques to make decisions.  A Washington state bill (SB 5116) would direct the state’s chief privacy officer to adopt rules regarding the development, procurement, and use of automated decision systems by public agencies.  More broadly, facial recognition technology has attracted renewed attention from state lawmakers, with wholesale bans on state and local government agencies’ use of facial recognition gaining steam.

Agencies are also focusing on AI, particularly in the enforcement context.  For example, the Federal Trade Commission (“FTC”) investigated and settled with Everalbum, Inc. in January 2021 in relation to its “Ever App,” a photo and video storage app that used facial recognition technology to automatically sort and “tag” users’ photographs.  Pursuant to the settlement agreement, Everalbum was required to delete models and algorithms that it developed using users’ uploaded photos and videos and obtain express consent from its users prior to applying facial recognition technology.  Enforcement activity by the FTC to regulate AI may become even more common, as legislative efforts seek to create a new privacy-focused bureau within the FTC and expand the agency’s civil penalty authority.

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has almost three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for almost twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Jayne Ponder

Jayne Ponder counsels national and multinational companies across industries on data privacy, cybersecurity, and emerging technologies, including Artificial Intelligence and Internet of Things.

In particular, Jayne advises clients on compliance with federal, state, and global privacy frameworks, and counsels clients on navigating the rapidly evolving legal landscape. Her practice includes partnering with clients on the design of new products and services, drafting and negotiating privacy terms with vendors and third parties, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of Artificial Intelligence and Internet of Things technologies.

Jayne routinely represents clients in privacy and consumer protection enforcement actions brought by the Federal Trade Commission and state attorneys general, including related to data privacy and advertising topics. She also helps clients articulate their perspectives through the rulemaking processes led by state regulators and privacy agencies.

As part of her practice, Jayne advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Andrew Longhi

Andrew Longhi advises national and multinational companies across industries on a wide range of regulatory, compliance, and enforcement matters involving data privacy, telecommunications, and emerging technologies.

Andrew’s practice focuses on advising clients on how to navigate the rapidly evolving legal landscape of state, federal, and international data protection laws. He proactively counsels clients on the substantive requirements introduced by new laws and shifting enforcement priorities. In particular, Andrew routinely supports clients in their efforts to launch new products and services that implicate the laws governing the use of data, connected devices, biometrics, and telephone and email marketing.

Andrew assesses privacy and cybersecurity risk as a part of diligence in complex corporate transactions where personal data is a key asset or data processing issues are otherwise material. He also provides guidance on generative AI issues, including privacy, Section 230, age-gating, product liability, and litigation risk, and has drafted standards and guidelines for large-language machine-learning models to follow. Andrew focuses on providing risk-based guidance that can keep pace with evolving legal frameworks.

Lindsay Brewer

Lindsay advises clients on environmental, human rights, product safety, and public policy matters.

She counsels clients seeking to set sustainability goals; track their progress on environmental, social, and governance topics; and communicate their achievements to external stakeholders in a manner that mitigates legal risk. She also advises clients seeking to engage with regulators and policymakers on environmental policy. Lindsay has extensive experience advising clients on making environmental disclosures and public marketing claims related to their products and services, including under the FTC’s Green Guides and state consumer protection laws.

Lindsay’s legal and regulatory advice spans a range of topics, including climate, air, water, human rights, environmental justice, and product safety and stewardship. She has experience with a wide range of environmental and safety regimes, including the Federal Trade Commission Act, the Clean Air Act, the Consumer Product Safety Act, the Federal Motor Vehicle Safety Standards, and the Occupational Safety and Health Act. Lindsay works with companies of various sizes and across multiple sectors, including technology, energy, financial services, and consumer products.

Nira Pandya

Nira Pandya is a member of the firm’s Technology and IP Transactions Practice Group in Boston.

With a broad practice that spans a variety of industries, Nira routinely advises clients with their most complex commercial transactions and strategic collaborations involving technology, intellectual property, and data, with a focus on issues around IP ownership and licensing, artificial intelligence, software development, and information technology services.

As a member of the firm’s Digital Health Initiative, Nira counsels pharmaceutical, medical device, healthcare, and technology clients on commercial and intellectual property considerations that arise in partnerships and collaborations at the intersection of life sciences and technology.

Nira leverages in-house experience gained during her secondment to a leading technology company, where she partnered with business clients and translated legal advice into practical solutions. Prior to joining the firm’s Technology and IP Transactions practice group, Nira advised private and public companies on mergers and acquisitions, joint ventures, strategic investments, and other corporate transactions.