U.S. policymakers have continued to express interest in legislation to regulate artificial intelligence (“AI”), particularly at the state level.  Although comprehensive AI bills and frameworks in Congress have received substantial attention, state legislatures also have been moving forward with their own efforts to regulate AI.  This blog post summarizes key themes in state AI bills introduced in the past year.  Now that new state legislative sessions have commenced, we expect to see even more activity in the months ahead.

  • Notice Requirements:  A number of state AI bills focus on notice to individuals.  Some bills would require covered entities to notify individuals when using automated decision-making tools for decisions that affect their rights and opportunities, such as the use of AI in employment.  For example, the District of Columbia’s “Stop Discrimination by Algorithms Act” (B 114) would require a notice about how the covered entity uses personal information in algorithmic eligibility determinations, including information about the sources of that information, and it would require a separate notice to an individual affected by an algorithmic eligibility determination that results in an “adverse action.”  Similarly, the Massachusetts “Act Preventing a Dystopian Work Environment” (HB 1873) would require employers or vendors using an automated decision system to provide notice to workers prior to adopting the system, with an additional notice if “significant updates or changes” are made to the system.  Other AI bills have focused on disclosure requirements between entities in the AI ecosystem.  For example, Washington’s legislature is considering a bill (HB 1951) that would require developers of automated decision tools to provide deployers with documentation of the “known limitations” of the tool, the types of data used to program or train the tool, and how the tool was evaluated for validity.
  • Impact Assessments:  Another key theme in state AI bills is a requirement to conduct impact assessments in the development and use of AI tools; these assessments aim to mitigate potential discrimination, privacy, and accuracy harms.  For example, a Vermont bill (HB 114) would require employers using automated decision-making tools to conduct algorithmic impact assessments prior to using those tools for employment-related decisions.  Additionally, the Washington bill mentioned above (HB 1951) would require deployers to complete impact assessments for automated decision tools that include, for example, assessments of reasonably foreseeable risks of algorithmic decision-making and the safeguards implemented to address them.
  • Individual Rights:  State legislatures also have sought to establish rights that individuals could exercise with respect to AI systems.  For example, several state AI bills would establish an individual right to opt out of decisions based on automated decision-making or to request human review of such decisions.  California (AB 331) and New York (AB 7859) are considering bills that would require AI deployers to allow individuals to request “alternative selection processes” where an automated decision tool is used to make, or is a controlling factor in, a consequential decision.  Similarly, New York’s AI Bill of Rights (S 8209) would provide individuals with the right to opt out of the use of automated systems in favor of a human alternative.
  • Licensing & Registration Regimes:  A handful of state legislatures have proposed AI licensing and registration requirements.  For example, New York’s Advanced AI Licensing Act (A 8195) would require all developers and operators of certain “high-risk advanced AI systems” to apply for a license from the state before use.  Other bills would require registration for certain uses of an AI system.  For instance, an amendment introduced in the Illinois legislature (HB 1002) would require state certification of diagnostic algorithms used by hospitals.
  • Generative AI & Content Labeling:  Another prominent theme in state AI legislation has been a focus on labeling content produced by generative AI systems.  For example, Rhode Island is considering a bill (H 6286) that would require a “distinctive watermark” to authenticate generative AI content.

We will continue to monitor these and related developments across our blogs.

Yaron Dori

Yaron Dori has over 25 years of experience advising technology, telecommunications, media, life sciences, and other types of companies on their most pressing business challenges. He is a former chair of the firm’s technology, communications and media practices and currently serves on the firm’s eight-person Management Committee.

Yaron’s practice advises clients on strategic planning, policy development, transactions, investigations and enforcement, and regulatory compliance.

Early in his career, Yaron advised telecommunications companies and investors on regulatory policy and frameworks that led to the development of broadband networks. When those networks became bidirectional and enabled companies to collect consumer data, he advised those companies on their data privacy and consumer protection obligations. Today, as new technologies such as Artificial Intelligence (AI) are being used to enhance the applications and services offered by such companies, he advises them on associated legal and regulatory obligations and risks. It is this varied background – which tracks the evolution of the technology industry – that enables Yaron to provide clients with a holistic, 360-degree view of technology policy, regulation, compliance, and enforcement.

Yaron represents clients before federal regulatory agencies—including the Federal Communications Commission (FCC), the Federal Trade Commission (FTC), and the Department of Commerce (DOC)—and the U.S. Congress in connection with a range of issues under the Communications Act, the Federal Trade Commission Act, and similar statutes. He also represents clients on state regulatory and enforcement matters, including those that pertain to telecommunications, data privacy, and consumer protection regulation. His deep experience in each of these areas enables him to advise clients on a wide range of technology regulations and key business issues in which these areas intersect.

With respect to technology and telecommunications matters, Yaron advises clients on a broad range of business, policy and consumer-facing issues, including:

  • Artificial Intelligence and the Internet of Things;
  • Broadband deployment and regulation;
  • IP-enabled applications, services and content;
  • Section 230 and digital safety considerations;
  • Equipment and device authorization procedures;
  • The Communications Assistance for Law Enforcement Act (CALEA);
  • Customer Proprietary Network Information (CPNI) requirements;
  • The Cable Privacy Act;
  • Net Neutrality; and
  • Local competition, universal service, and intercarrier compensation.

Yaron also has extensive experience in structuring transactions and securing regulatory approvals at both the federal and state levels for mergers, asset acquisitions and similar transactions involving large and small FCC and state communication licensees.

With respect to privacy and consumer protection matters, Yaron advises clients on a range of business, strategic, policy and compliance issues, including those that pertain to:

  • The FTC Act and related agency guidance and regulations;
  • State privacy laws, such as the California Consumer Privacy Act (CCPA) and California Privacy Rights Act, the Colorado Privacy Act, the Connecticut Data Privacy Act, the Virginia Consumer Data Protection Act, and the Utah Consumer Privacy Act;
  • The Electronic Communications Privacy Act (ECPA);
  • Location-based services that use WiFi, beacons or similar technologies;
  • Digital advertising practices, including native advertising and endorsements and testimonials; and
  • The application of federal and state telemarketing, commercial fax, and other consumer protection laws, such as the Telephone Consumer Protection Act (TCPA), to voice, text, and video transmissions.

Yaron also has experience advising companies on congressional, FCC, FTC and state attorney general investigations into various consumer protection and communications matters, including those pertaining to social media influencers, digital disclosures, product discontinuance, and advertising claims.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients for complying with federal, state, and global privacy and competition frameworks and AI regulations. He also assists clients in investigating compliance issues, preparing for federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

Jayne Ponder

Jayne Ponder is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity Practice Group. Jayne’s practice focuses on a broad range of privacy, data security, and technology issues. She provides ongoing privacy and data protection counsel to companies, including on topics related to privacy policies and data practices, the California Consumer Privacy Act, and cyber and data security incident response and preparedness.