The National Institute of Standards and Technology (“NIST”) issued its initial draft of the “AI Risk Management Framework” (“AI RMF”), which aims to provide voluntary, risk-based guidance on the design, development, and deployment of AI systems.  NIST is seeking public comments on this draft via email, at AIframework@nist.gov, through April 29, 2022.  Feedback received on this draft will be incorporated into the second draft of the framework, which will be issued this summer or fall.

In particular, NIST has requested feedback on the following questions:

  • Whether the AI RMF appropriately covers and addresses AI risks, including with the right specificity for various use cases;
  • Whether the AI RMF is flexible enough to serve as a continuing resource considering the evolving technology and standards landscape;
  • Whether the AI RMF enables decisions about how an organization can increase understanding of, communication about, and efforts to manage AI risks;
  • Whether the functions, categories, and subcategories are complete, appropriate, and clearly stated;
  • Whether the AI RMF is in alignment with or leverages other frameworks and standards, such as those developed or being developed by IEEE or ISO/IEC JTC 1/SC 42;
  • Whether the AI RMF is in alignment with existing practices and broader risk management policies;
  • What might be missing from the AI RMF; and
  • Whether the soon-to-be-published draft companion document citing AI risk management practices is useful as a complementary resource, and what practices or standards should be added.

The current draft of the AI RMF notes that “AI trustworthiness and risk are inversely related,” and as such, organizations should aspire to develop and deploy AI systems with characteristics of “trustworthiness.”  The AI RMF uses a “three-class taxonomy” to define the characteristics of a “trustworthy” AI system:

  • Technical characteristics refer to factors that are “under the direct control of AI system designers and developers” and are generally measurable through statistical methods.
  • Socio-technical characteristics refer to factors relating to how AI systems are perceived in society.  As such, these characteristics are not quantifiable through an automated process and require “human judgment” to measure.
  • Guiding principles refer to broader, qualitative social norms that should inform the way that AI systems are developed and deployed.

Looking ahead, after incorporating comments, the final version of the AI RMF will include three sections that highlight specific actions that organizations can take to manage risk:

  • The “Core” section, which is included in this draft, describes a broad series of actions that all organizations can take to manage AI risks.
  • The “Profiles” section, which is not included in this draft, will highlight case studies of managing AI risk in specific contexts.  NIST is actively seeking “contributions of AI RMF profiles” during this comment period that it could include in the next draft of the AI RMF.

  • The “Practice Guide,” which is not yet published and will be posted separately online, will include additional risk management examples and practices.  NIST is currently seeking comments on the types of practices and standards that should be included in this Practice Guide.

Libbie Canter

Libbie Canter represents a wide variety of multinational companies on privacy, cybersecurity, and technology transaction issues, including helping clients with their most complex privacy challenges and the development of governance frameworks and processes to comply with global privacy laws. She routinely supports clients on their efforts to launch new products and services involving emerging technologies, and she has assisted dozens of clients with their efforts to prepare for and comply with federal and state privacy laws, including the California Consumer Privacy Act and California Privacy Rights Act.

Libbie represents clients across industries, but she also has deep expertise in advising clients in highly regulated sectors, including financial services and digital health companies. She counsels these companies — and their technology and advertising partners — on how to address legacy regulatory issues and the cutting-edge issues that have emerged with industry innovations and data collaborations.

Jayne Ponder

Jayne Ponder is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity Practice Group. Jayne’s practice focuses on a broad range of privacy, data security, and technology issues. She provides ongoing privacy and data protection counsel to companies, including on topics related to privacy policies and data practices, the California Consumer Privacy Act, and cyber and data security incident response and preparedness.