On August 9, 2019, the U.S. Department of Commerce’s National Institute of Standards and Technology (“NIST”) released its plan for federal engagement in the development of artificial intelligence standards. The plan responds to the Executive Order signed by President Trump earlier this year, which directed NIST to “issue a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies.” The final plan incorporates feedback from more than 40 organizations that commented on a draft released in July.
AI Standards Priorities
In its plan, NIST highlights the importance of developing consistent AI standards “to enable market competition, preclude barriers to trade, and allow innovation to flourish.” The plan provides government agencies with guidance regarding “important characteristics” of AI standards to enable agencies to make their own considered decisions on the adoption and development of AI standards.
NIST’s plan encourages agencies to prioritize efforts that are “inclusive and accessible, open and transparent, consensus-based, globally relevant, and non-discriminatory.” However, it refrains from making specific recommendations to agencies. Instead, the plan focuses on the necessity of “nimble, multi-path standards development” by which private and public actors effectively work together to create the best standards to address issues associated with rapidly evolving AI technology.
NIST also identifies nine “areas of focus” to guide ongoing efforts in the development of AI standards:
- Concepts and terminology,
- Data and knowledge,
- Human interactions,
- Metrics,
- Networking,
- Performance testing and reporting methodology,
- Safety,
- Risk management, and
- Trustworthiness.
The plan singles out trustworthiness as a new area of focus for AI standards, one that would require standards to include “guidance and requirements for accuracy, explainability, resiliency, safety, reliability, objectivity, and security.” Appendix II of the plan compiles examples of existing AI standards gathered from stakeholders during the plan’s development.
Recommendations on AI Standards-Related Tools
In conjunction with its guidance on AI standards, NIST also advises on the need for complementary tools to support the development of AI technologies. These tools include:
- Standardized datasets for training and testing of AI systems,
- Tools to promote consistent knowledge and reasoning in AI systems,
- Fully documented use cases to provide guidance for the deployment of AI technologies,
- Benchmarks to promote advancement,
- Validation and evaluation testing methodologies,
- Metrics to assess AI technologies,
- AI testbeds for proper modeling and experimentation, and
- Tools for accountability and auditing of AI systems.
NIST’s Government Recommendations
NIST’s plan advises that the engagement of American stakeholders is critical to the United States’ long-term competitiveness in the development of AI technologies. To that end, NIST outlines four potential levels of involvement that agencies may pursue in the development of AI standards: (1) monitoring, (2) participating, (3) influencing, or (4) leading.
Regardless of what level of involvement any individual agency pursues, NIST also makes the following broad recommendations to the Federal government to ensure long-term capabilities in AI standards development:
- Bolster AI standards-related knowledge, leadership, and coordination among Federal agencies to maximize effectiveness and efficiency.
- Promote focused research to advance and accelerate broader exploration and understanding of how aspects of trustworthiness can be practically incorporated within standards and standards-related tools.
- Support and expand public-private partnerships to develop and use AI standards and related tools to advance reliable, robust, and trustworthy AI.
- Strategically engage with international parties to advance AI standards for U.S. economic and national security needs.