The National Institute of Standards and Technology (“NIST”) issued its initial draft of the “AI Risk Management Framework” (“AI RMF”), which aims to provide voluntary, risk-based guidance on the design, development, and deployment of AI systems.  NIST is seeking public comments on this draft via email, at AIframework@nist.gov, through April 29, 2022.  Feedback received on this draft will be incorporated into the second draft of the framework, which will be issued this summer or fall.

In particular, NIST has requested feedback on the following questions:

  • Whether the AI RMF appropriately covers and addresses AI risks, including with the right specificity for various use cases;
  • Whether the AI RMF is flexible enough to serve as a continuing resource considering the evolving technology and standards landscape;
  • Whether the AI RMF enables decisions about how an organization can increase understanding of, communication about, and efforts to manage AI risks;
  • Whether the functions, categories, and subcategories are complete, appropriate, and clearly stated;
  • Whether the AI RMF is in alignment with or leverages other frameworks and standards such as those developed or being developed by IEEE or ISO/IEC SC42;
  • Whether the AI RMF is in alignment with existing practices, and broader risk management policies;
  • What might be missing from the AI RMF; and
  • Whether the soon-to-be-published draft companion document citing AI risk management practices is useful as a complementary resource, and what practices or standards should be added.

The current draft of the AI RMF notes that “AI trustworthiness and risk are inversely related,” and as such, organizations should aspire to develop and deploy AI systems with characteristics of “trustworthiness.”  The AI RMF uses a “three-class taxonomy” to define the characteristics of a “trustworthy” AI system:

  • Technical characteristics refer to factors that are “under the direct control of AI system designers and developers” and are generally measurable through statistical methods.
  • Socio-technical characteristics refer to factors relating to how AI systems are perceived in society.  As such, these characteristics are not quantifiable through an automated process and require “human judgment” to measure.
  • Guiding principles refer to broader, qualitative social norms that should inform the way that AI systems are developed and deployed.
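To make the taxonomy concrete, the sketch below shows one hypothetical way an organization might catalog trustworthiness characteristics in an internal risk register, keyed to the draft’s three classes.  It is purely illustrative: the AI RMF does not prescribe any particular tooling, and the example characteristics and assessment methods here are our own assumptions rather than NIST’s lists.

```python
from dataclasses import dataclass
from enum import Enum

class TaxonomyClass(Enum):
    """The three classes of trustworthiness characteristics in the draft AI RMF."""
    TECHNICAL = "technical"                  # under designers' direct control; statistically measurable
    SOCIO_TECHNICAL = "socio-technical"      # how systems are perceived; requires human judgment
    GUIDING_PRINCIPLE = "guiding principle"  # broad, qualitative social norms

@dataclass
class Characteristic:
    name: str
    taxonomy_class: TaxonomyClass
    assessment: str  # how the organization plans to measure or review it

# Hypothetical risk-register entries.  The characteristic names and
# assessment methods below are illustrative assumptions, not NIST's lists.
register = [
    Characteristic("accuracy", TaxonomyClass.TECHNICAL,
                   "held-out test-set error rate, re-measured each release"),
    Characteristic("explainability", TaxonomyClass.SOCIO_TECHNICAL,
                   "structured review by domain experts and affected users"),
    Characteristic("fairness", TaxonomyClass.GUIDING_PRINCIPLE,
                   "qualitative policy review against documented norms"),
]

for item in register:
    print(f"{item.name} ({item.taxonomy_class.value}): {item.assessment}")
```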

Looking ahead, after incorporating comments, the final version of the AI RMF will include three sections that highlight specific actions that organizations can take to manage risk:

  • The “Core” section, which is included in this draft, describes a broad series of actions that all organizations can take to manage AI risks.
  • The “Profiles” section, which is not included in this draft, will highlight case studies of managing AI risk in specific contexts.  NIST is actively seeking “contributions of AI RMF profiles” during this comment period for possible inclusion in the next draft of the AI RMF.

  • The “Practice Guide,” which is not yet published and will be posted separately online, will include additional risk management examples and practices.  NIST is currently seeking comments on the types of practices and standards that should be included in this Practice Guide.

Libbie Canter

Libbie Canter represents a wide variety of multinational companies on managing privacy, cyber security, and artificial intelligence risks, including helping clients with their most complex privacy challenges and the development of governance frameworks and processes to comply with U.S. and global privacy laws. She routinely supports clients on their efforts to launch new products and services involving emerging technologies, and she has assisted dozens of clients with their efforts to prepare for and comply with federal and state laws, including the California Consumer Privacy Act, the Colorado AI Act, and other state laws. As part of her practice, she also regularly represents clients in strategic transactions involving personal data, cybersecurity, and artificial intelligence risk and represents clients in enforcement and litigation postures.

Libbie represents clients across industries, but she also has deep expertise in advising clients in highly regulated sectors, including financial services and digital health companies. She counsels these companies — and their technology and advertising partners — on how to address legacy regulatory issues and the cutting-edge issues that have emerged with industry innovations and data collaborations.

Chambers USA 2024 ranks Libbie in Band 3 Nationwide for both Privacy & Data Security: Privacy and Privacy & Data Security: Healthcare. Chambers USA notes that Libbie is “incredibly sharp and really thorough. She can do the nitty-gritty, in-the-weeds legal work incredibly well but she also can think of a bigger-picture business context and help to think through practical solutions.”

Jayne Ponder

Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that involve laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. Additionally, she helps clients to advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.