The National Institute of Standards and Technology (“NIST”) has issued the initial draft of its “AI Risk Management Framework” (“AI RMF”), which aims to provide voluntary, risk-based guidance on the design, development, and deployment of AI systems.  NIST is seeking public comments on the draft by email at AIframework@nist.gov through April 29, 2022.  NIST will incorporate this feedback into a second draft of the framework, to be issued this summer or fall.

In particular, NIST has requested feedback on the following questions:

  • Whether the AI RMF appropriately covers and addresses AI risks, including with the right specificity for various use cases;
  • Whether the AI RMF is flexible enough to serve as a continuing resource considering the evolving technology and standards landscape;
  • Whether the AI RMF enables decisions about how an organization can increase understanding of, communication about, and efforts to manage AI risks;
  • Whether the functions, categories, and subcategories are complete, appropriate, and clearly stated;
  • Whether the AI RMF is in alignment with or leverages other frameworks and standards such as those developed or being developed by IEEE or ISO/IEC SC42;
  • Whether the AI RMF is in alignment with existing practices and broader risk management policies;
  • What might be missing from the AI RMF; and
  • Whether the soon-to-be-published draft companion document citing AI risk management practices is useful as a complementary resource, and what practices or standards should be added.

The current draft of the AI RMF notes that “AI trustworthiness and risk are inversely related,” and as such, organizations should aspire to develop and deploy AI systems with characteristics of “trustworthiness.”  The AI RMF uses a “three-class taxonomy” to define the characteristics of a “trustworthy” AI system:

  • Technical characteristics refer to factors that are “under the direct control of AI system designers and developers” and are generally measurable through statistical methods.
  • Socio-technical characteristics refer to factors relating to how AI systems are perceived in society.  As such, these characteristics are not quantifiable through an automated process and require “human judgment” to measure.
  • Guiding principles refer to broader, qualitative social norms that should inform the way that AI systems are developed and deployed.

Looking ahead, after incorporating comments, the final draft of the AI RMF will include three sections that highlight specific actions that organizations can take to manage risk:

  • The “Core” section, which is included in this draft, describes a broad series of actions that all organizations can take to manage AI risks.
  • The “Profiles” section, which is not included in this draft, will highlight case studies of managing AI risk in specific contexts.  NIST is actively seeking “contributions of AI RMF profiles” during this comment period that it could include in the next draft of the AI RMF.
  • The “Practice Guide,” which is not yet published and will be posted separately online, will include additional risk management examples and practices.  NIST is currently seeking comments on the types of practices and standards that should be included in this Practice Guide.

Libbie Canter

Libbie Canter represents a wide variety of multinational companies on privacy, cyber security, and technology transaction issues, including helping clients with their most complex privacy challenges and the development of governance frameworks and processes to comply with global privacy laws. She routinely supports clients on their efforts to launch new products and services involving emerging technologies, and she has assisted dozens of clients with their efforts to prepare for and comply with federal and state privacy laws, including the California Consumer Privacy Act and California Privacy Rights Act.

Libbie represents clients across industries, but she also has deep expertise in advising clients in highly regulated sectors, including financial services and digital health companies. She counsels these companies and their technology and advertising partners on how to address legacy regulatory issues and the cutting-edge issues that have emerged with industry innovations and data collaborations.

As part of her practice, she also regularly represents clients in strategic transactions involving personal data and cybersecurity risk. She advises companies from all sectors on compliance with laws governing the handling of health-related data. Libbie is recognized as an Up and Coming lawyer in Chambers USA, Privacy & Data Security: Healthcare. Chambers USA notes that Libbie is “incredibly sharp and really thorough. She can do the nitty-gritty, in-the-weeds legal work incredibly well but she also can think of a bigger-picture business context and help to think through practical solutions.”

Jayne Ponder

Jayne Ponder counsels national and multinational companies across industries on data privacy, cybersecurity, and emerging technologies, including Artificial Intelligence and the Internet of Things.

In particular, Jayne advises clients on compliance with federal, state, and global privacy frameworks, and counsels clients on navigating the rapidly evolving legal landscape. Her practice includes partnering with clients on the design of new products and services, drafting and negotiating privacy terms with vendors and third parties, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of Artificial Intelligence and Internet of Things technologies.

Jayne routinely represents clients in privacy and consumer protection enforcement actions brought by the Federal Trade Commission and state attorneys general, including matters related to data privacy and advertising. She also helps clients articulate their perspectives through rulemaking processes led by state regulators and privacy agencies.

As part of her practice, Jayne advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.