Attorneys General in Oregon and Connecticut issued guidance over the holidays interpreting their authority under their states' comprehensive privacy statutes and related authorities.  Specifically, the Oregon Attorney General's guidance focuses on laws relevant to artificial intelligence ("AI"), and the Connecticut Attorney General's guidance focuses on the opt-out preference signal requirements that take effect in the state on January 1, 2025.

Oregon Guidance on AI Systems

On December 24, Oregon Attorney General Ellen Rosenblum issued guidance, "What you should know about how Oregon's laws may affect your company's use of Artificial Intelligence," which underscores that the state's Unlawful Trade Practices Act ("Oregon UTPA"), Consumer Privacy Act ("OCPA"), Equality Act, and other legal authorities apply to AI.  After noting the opportunities AI presents for Oregon's economy, from streamlining tasks to delivering personalized services, the guidance states that AI can also raise concerns around privacy, discrimination, and accountability.

In particular, the guidance discusses how the Oregon UTPA and the OCPA apply to the development and use of AI.  First, with respect to the Oregon UTPA, Attorney General Rosenblum states that the "marketing, sale, or use" of AI systems is not exempt from the statute.  The guidance then provides several examples of AI-related activities that could implicate the Oregon UTPA.  For example, a business could violate the Oregon UTPA if it fails to disclose a material defect or material nonconformity in an AI product, or if it misrepresents the characteristics, uses, benefits, or qualities of an AI system.

The guidance also addresses the intersection between the development and use of AI systems and the OCPA.  Attorney General Rosenblum highlights three notable topics:

  • Disclosures of Personal Data for Model Training:  Attorney General Rosenblum states that developers that use personal data to train AI systems “must clearly disclose this in an accessible and clear privacy notice.”  Additionally, the guidance states that suppliers and developers cannot retroactively or passively alter privacy notices and must obtain affirmative consent for any new or secondary uses of that data.
  • Sensitive Data for Training:  The guidance also states that the use of sensitive data to develop or train AI models requires consent.
  • DPIAs:  Finally, the guidance states that "feeding consumer data into AI models and processing it in connection with these models" presents "heightened risks to consumers" that require a data protection assessment.

Connecticut Guidance on OOPS Signals

On December 30, Connecticut Attorney General William Tong issued a press release announcing that the requirement to honor global opt-out preference signals ("OOPS") sent by consumers takes effect on January 1, 2025.  Under the Connecticut privacy statute, covered entities must honor signals that communicate a consumer's request to opt out of the sale of their personal data or targeted advertising.
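
In practice, the most widely deployed opt-out preference signal is the Global Privacy Control ("GPC"), which participating browsers transmit as the HTTP request header "Sec-GPC: 1."  As a minimal illustrative sketch, assuming a Python/Flask web application and GPC as the signal (the record_opt_out helper below is hypothetical, and the Connecticut statute does not prescribe any particular implementation), a covered entity might detect the signal along these lines:

```python
# Illustrative sketch only: assumes a Flask app and the Global Privacy
# Control ("Sec-GPC: 1" request header) as the opt-out preference signal.
# Connecticut's statute does not mandate any particular signal format.
from flask import Flask, request

app = Flask(__name__)

def opt_out_signal_present() -> bool:
    """Return True when the request carries a GPC opt-out signal.

    Per the GPC specification, participating user agents send the
    header "Sec-GPC: 1" when the consumer has enabled the control.
    """
    return request.headers.get("Sec-GPC") == "1"

@app.route("/")
def index():
    if opt_out_signal_present():
        # Hypothetical helper: flag this consumer so the site suppresses
        # the sale of personal data and targeted advertising.
        record_opt_out(request)
        return "Opt-out preference signal honored."
    return "No opt-out preference signal detected."

def record_opt_out(req) -> None:
    # Placeholder for integration with a consent-management platform.
    pass
```

On the client side, browsers that implement GPC also expose the signal to JavaScript as navigator.globalPrivacyControl, which some sites consult to adjust consent interfaces without a server round trip.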

Libbie Canter

Libbie Canter represents a wide variety of multinational companies on managing privacy, cyber security, and artificial intelligence risks, including helping clients with their most complex privacy challenges and the development of governance frameworks and processes to comply with U.S. and global privacy laws. She routinely supports clients on their efforts to launch new products and services involving emerging technologies, and she has assisted dozens of clients with their efforts to prepare for and comply with federal and state laws, including the California Consumer Privacy Act, the Colorado AI Act, and other state laws. As part of her practice, she also regularly represents clients in strategic transactions involving personal data, cybersecurity, and artificial intelligence risk and represents clients in enforcement and litigation postures.

Libbie represents clients across industries, but she also has deep expertise in advising clients in highly regulated sectors, including financial services and digital health companies. She counsels these companies, as well as their technology and advertising partners, on how to address legacy regulatory issues and the cutting-edge issues that have emerged with industry innovations and data collaborations.

Chambers USA 2024 ranks Libbie in Band 3 Nationwide for both Privacy & Data Security: Privacy and Privacy & Data Security: Healthcare. Chambers USA notes that Libbie is "incredibly sharp and really thorough. She can do the nitty-gritty, in-the-weeds legal work incredibly well but she also can think of a bigger-picture business context and help to think through practical solutions."

Jayne Ponder

Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that involve laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. Additionally, she helps clients to advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.