U.S. state lawmakers have introduced more than 40 bills across at least 24 states to regulate personalized algorithmic pricing so far in 2026, already outpacing the total number of such bills introduced in all of 2025.  While their definitions and scope vary, the 2026 bills broadly refer to “personalized algorithmic” or “dynamic” pricing as the practice of setting or adjusting prices by analyzing consumer data through AI or other automated tools, which may result in different prices being offered to different consumers for the same good or service.  

If enacted, these bills could impose a broad range of restrictions on such pricing, including disclosure requirements, general prohibitions, sector-specific restrictions, and restrictions on the use of protected class data in pricing decisions.  Although the proposals vary significantly, a few key themes emerge across these bills:

  • Disclosure Requirements. Several states have introduced legislation that would require businesses to affirmatively disclose when prices are determined using algorithmic methods, similar to New York’s 2025 Algorithmic Pricing Disclosure Act.  For example, Connecticut SB 4, Maryland HB 1475, and several other state bills would require any person that establishes a price using “personalized algorithmic pricing” to include a disclosure stating that “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.”  

By contrast, Illinois’s Algorithmic Pricing Transparency Act (HB 4248) would take a more granular transparency approach, requiring entities that sell goods or services through online platforms to, among other obligations, provide disclosures of the “categories of personal data used to generate the price” and a linked explanation of the entity’s algorithmic pricing practices.  HB 4248 also would establish a consumer right to opt out of “surveillance pricing” and require entities to provide a “non-personalized baseline price” upon request.

  • General Prohibitions. Other proposals would categorically restrict or ban the use of personalized algorithmic pricing to set or adjust consumer prices.  Note that these bills generally would not treat non-discriminatory discounts, coupons, or loyalty programs as covered pricing activity.  Vermont S.207 and California AB 2564, for example, would prohibit “surveillance pricing,” defined as setting a customized price using personally identifiable information gathered through “electronic surveillance technology,” unless the price difference is based solely on cost differences or reflects a discount offered to all consumers on equal terms.  

Other bills, such as Washington’s Fair Pricing and Transparency Act (HB 2481 / SB 6312), would also prohibit pricing based on an “algorithmic determination of willingness to pay.”  Rhode Island H 7849, meanwhile, would prohibit “algorithmic price increases” based on a consumer’s personal data while exempting “price decreases.”

  • Protected Class Data. Many of these bills would impose restrictions on the use of protected class data in personalized algorithmic pricing decisions.  New Jersey A4085 / S3612, for example, would prohibit businesses from using “personalized algorithmic pricing, surveillance pricing, or any pricing strategy” based on “protected class data.”  Nebraska’s Protecting Consumers and Jobs from Predatory Pricing Act (LB 1006) and other state bills would generally prohibit any use of “protected class data” to set prices in a manner that results in discriminatory pricing outcomes, including the withholding or denial of accommodations or a price that differs from prices offered to other individuals or groups.  
  • Minor Data Limitations. Other bills would restrict the use of minors’ data for personalized algorithmic pricing.  Iowa SF 2278 and Tennessee HB 2052 / SB 1998, for example, would prohibit the collection or use of “data belonging to minors” under 17 years of age for “personalized algorithmic pricing,” regardless of parental consent.
  • Retail and Grocery Restrictions. Several bills would impose sector-specific limits on grocery stores and food retailers.  For example, Georgia’s Surveillance Pricing Act (HB 1439) and New Jersey S3732 would prohibit “retail food establishments” or “food retailers” from using “surveillance pricing” to set food or grocery prices.  Other bills like Oklahoma HB 3959 and Tennessee HB 2052 / SB 1998 would combine pricing restrictions with technology bans, prohibiting food retail establishments from using electronic shelf labels or digital shelf display technology. 

Taken together, these state personalized algorithmic pricing proposals reflect only one dimension of broader state AI legislative activity underway in 2026. 

Lindsey Tonsager

Lindsey Tonsager is a recognized leader in representing companies before federal and state regulators, and is renowned for advising on minor protection, AI, and state comprehensive privacy laws.

Lindsey chairs the firm’s global Data Privacy and Cybersecurity practice. She advises clients in their strategic and proactive engagement with the Federal Trade Commission, the U.S. Congress, the California Privacy Protection Agency, and State Attorneys General on proposed changes to data protection laws, and regularly represents clients in responding to investigations and enforcement actions involving their privacy and information security practices.

Lindsey’s practice focuses on helping clients launch new products and services that implicate the laws governing the use of artificial intelligence; data processing for robotics, autonomous vehicles, and other connected devices; biometrics; online advertising; the collection of personal information from children, teens, and students online; e-mail marketing; disclosures of video viewing information; and new technologies.

Lindsey also assesses privacy and data security risks in complex corporate transactions where personal data is a critical asset or data processing risks are otherwise material. In light of a dynamic regulatory environment where new state, federal, and international data protection laws are always on the horizon and enforcement priorities are shifting, she focuses on designing risk-based global privacy programs for clients that can keep pace with evolving legal requirements and efficiently leverage the clients’ existing privacy policies and practices. She conducts data protection assessments to benchmark against legal requirements and industry trends and proposes practical risk mitigation measures.

Jayne Ponder

Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that involve laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. Additionally, she helps clients to advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.

Natalie Maas

Natalie is an associate in the firm’s San Francisco office, where she is a member of the Food, Drug, and Device, and Data Privacy and Cybersecurity Practice Groups. She advises pharmaceutical, biotechnology, medical device, and food companies on a broad range of regulatory and compliance issues.

Natalie also maintains an active pro bono practice, with a particular focus on health care and reproductive rights.

August Gweon

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.