On March 28, 2024, the White House Office of Management and Budget (OMB) released guidance on governance and risk management for federal agency use of artificial intelligence (AI). The guidance was issued in furtherance of last fall’s White House AI Executive Order, which established goals to promote the safe, secure, and trustworthy use and development of AI systems.
The OMB guidance, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, defines AI broadly to include machine learning and, among other things, “[a]ny artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.” It directs federal agencies and departments to address risks from the use of AI, expand public transparency, advance responsible AI innovation, grow an AI-focused talent pool and workforce, and strengthen AI governance systems. Federal agencies must implement the prescribed safeguard practices no later than December 1, 2024.
More specifically, the guidance imposes a number of requirements on federal agencies, including:
- Expanded Governance: The guidance requires agencies, within 60 days, to designate Chief AI Officers responsible for coordinating agency use of AI, promoting AI adoption, and managing risk. It also requires each agency to convene an AI governance body within 60 days. Within 180 days, agencies must submit to OMB, and release publicly, a plan to achieve consistency with OMB’s guidance.
- Inventories: Each agency (except the Department of Defense and the agencies that comprise the intelligence community) must inventory its AI use cases at least annually and submit a report to OMB. Some use cases are exempt from individual reporting, but agencies must still report aggregate metrics about those use cases to OMB if they are otherwise in scope. The guidance states that OMB will later issue “detailed instructions” for these reports.
- Removing Barriers to Use of AI: The guidance focuses on removing barriers to the responsible use of AI, including by ensuring that adequate infrastructure exists for AI projects and that agencies have sufficient capacity to manage data used for training, testing, and operating AI. As part of this, the guidance states that agencies “must proactively share their custom-developed code — including models and model weights — for AI applications in active use and must release and maintain that code as open-source software on a public repository,” subject to some exceptions (e.g., if sharing is restricted by law or prevented by a contractual obligation).
- Special Requirements for FedRAMP: The guidance calls for updates to the Federal Risk and Authorization Management Program (FedRAMP), which generally applies to cloud services sold to the U.S. Government. Specifically, the guidance requires agencies to update authorization processes for FedRAMP services, including by advancing continuous authorizations (as distinct from annual authorizations) for services with AI. The guidance also encourages agencies to prioritize critical and emerging technologies and generative AI when issuing Authorizations to Operate (ATOs).
- Risk Management: For certain “safety-impacting” and “rights-impacting” AI use cases, some agencies will need to adopt minimum risk management practices. These include the completion of an AI impact assessment that examines, for example, the intended purpose of the AI, its expected benefits, and its potential risks. The minimum practices also require the agency to test the AI’s performance in a real-world context and to conduct ongoing monitoring of the system. Among other requirements, the agency will be responsible for identifying and assessing the AI’s impact on equity and fairness and for taking steps to mitigate algorithmic discrimination when present. The guidance presents these practices as an initial baseline and requires agencies to identify additional context-specific risks for relevant use cases, to be addressed by applying best practices for AI risk management, such as those in the National Institute of Standards and Technology (NIST) AI Risk Management Framework. The guidance also calls for human oversight of safety- and rights-impacting AI decision-making and for remedy processes for affected individuals. Agencies must implement these minimum practices no later than December 1, 2024.
Separately but relatedly, on March 29, 2024, OMB issued a Request for Information (RFI) to inform future action governing the responsible procurement of AI under federal contracts. The RFI seeks responses to several questions designed to give OMB the information it and federal agencies need to craft contract language and requirements that will further agency AI use and innovation while managing risks and performance. Responses to these questions, as well as any other comments on the subject, are due by April 29, 2024.