On September 8, 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), Chair and Ranking Member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, announced a new bipartisan framework for artificial intelligence (“AI”) legislation.  Senator Blumenthal said, “This bipartisan framework is a milestone – the first tough, comprehensive legislative blueprint for real, enforceable AI protections. It should put us on a path to addressing the promise and peril AI portends.” He also told CTInsider that he hopes to have a “detailed legislative proposal” ready for Congress by the end of this year.

The framework focuses on several key goals, summarized below: 

  • Establish a Licensing Regime Administered by an Independent Oversight Body: The framework proposes a licensing regime administered by an independent oversight body. With respect to which entities would need to register, the framework states that “[c]ompanies developing sophisticated general-purpose A.I. models should be required to register.” The framework contemplates that license applications would include information about the AI model and that the oversight body would have authority to conduct audits of companies seeking licenses. 
  • Ensure Legal Accountability for Harms: The framework encourages Congress to ensure that AI companies can be held liable through both oversight body enforcement and private rights of action, including by “clarifying that Section 230 does not apply to A.I.”
  • Defend National Security and International Competition: The framework notes that Congress should use export controls, sanctions, and other restrictions to limit the transfer of advanced AI models and associated technologies to adversary nations and countries engaged in human rights violations.
  • Promote Transparency: The framework also alludes to certain responsibilities for developers and deployers of AI systems.  For example, the framework states that “[d]evelopers should be required to disclose essential information about the training data, limitations, accuracy, and safety” of the model to both users and companies deploying AI systems.  The framework also mentions that “A.I. system providers” should be required to watermark or otherwise provide technical disclosures of AI-generated deep fakes.
  • Protect Consumers and Kids: The framework states that companies “deploying A.I. in high-risk or consequential situations” should be required to “implement safety brakes,” such as providing notice when AI is being used to make decisions, “particularly adverse decisions.” The framework also states that “strict limits should be imposed on generative A.I. involving kids.”

We will continue to update you on meaningful developments in these quarterly updates and across our blogs.

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has almost three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for almost twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Jayne Ponder

Jayne Ponder counsels national and multinational companies across industries on data privacy, cybersecurity, and emerging technologies, including Artificial Intelligence and Internet of Things.

In particular, Jayne advises clients on compliance with federal, state, and global privacy frameworks, and counsels clients on navigating the rapidly evolving legal landscape. Her practice includes partnering with clients on the design of new products and services, drafting and negotiating privacy terms with vendors and third parties, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of Artificial Intelligence and Internet of Things technologies.

Jayne routinely represents clients in privacy and consumer protection enforcement actions brought by the Federal Trade Commission and state attorneys general, including related to data privacy and advertising topics. She also helps clients articulate their perspectives through the rulemaking processes led by state regulators and privacy agencies.

As part of her practice, Jayne advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Conor Kane

Conor Kane advises clients on a broad range of privacy, artificial intelligence, telecommunications, and emerging technology matters. He assists clients with complying with state privacy laws, developing AI governance structures, and engaging with the Federal Communications Commission.

Before joining Covington, Conor worked in digital advertising helping teams develop large consumer data collection and analytics platforms. He uses this experience to advise clients on matters related to digital advertising and advertising technology.