On March 28, 2024, the White House Office of Management and Budget (OMB) released guidance on governance and risk management for federal agency use of artificial intelligence (AI).  The guidance was issued in furtherance of last fall’s White House AI Executive Order, which established goals to promote the safe, secure, and trustworthy use and development of AI systems.

The OMB guidance—Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence—defines AI broadly to include machine learning and “[a]ny artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets,” among other things.  It directs federal agencies and departments to address risks from the use of AI, expand public transparency, advance responsible AI innovation, grow an AI-focused talent pool and workforce, and strengthen AI governance systems.  Federal agencies must implement the prescribed safeguard practices no later than December 1, 2024.

More specifically, the guidance includes a number of requirements for federal agencies, including:

  • Expanded Governance:  Within 60 days, the guidance requires agencies to designate Chief AI Officers responsible for coordinating agency use of AI, promoting AI adoption, and managing AI risk.  It also requires each agency to convene an AI governance body within the same 60-day period.  Within 180 days, agencies must submit to OMB, and release publicly, an agency plan to achieve consistency with OMB’s guidance.
  • Inventories:  Each agency (except the Department of Defense and those that comprise the intelligence community) is required to inventory its AI use cases at least annually and submit a report to OMB.  Some use cases are exempt from being reported individually, but agencies still must report aggregate metrics about those use cases to OMB if otherwise in scope.  The guidance states that OMB will later issue “detailed instructions” for these reports.
  • Removing Barriers to Use of AI:  The guidance focuses on removing barriers to the responsible use of AI, including by ensuring that adequate infrastructure exists for AI projects and that agencies have sufficient capacity to manage data used for training, testing, and operating AI.  As part of this, the guidance states that agencies “must proactively share their custom-developed code — including models and model weights — for AI application in active use and must release and maintain that code as open-source software on a public repository,” subject to some exceptions (e.g., if sharing is restricted by law or by a contractual obligation).
  • Special Requirements for FedRAMP:  The guidance calls for updates to the Federal Risk and Authorization Management Program (FedRAMP), which generally applies to cloud services sold to the U.S. Government.  Specifically, the guidance requires agencies to update authorization processes for FedRAMP services, including by advancing continuous authorizations (as opposed to annual authorizations) for services with AI.  The guidance also encourages agencies to prioritize critical and emerging technologies and generative AI in issuing Authorizations to Operate (ATOs).
  • Risk Management:  For certain “safety-impacting” and “rights-impacting” AI use cases, some agencies will need to adopt minimum risk management practices.  These include the completion of an AI impact assessment that examines, for example, the intended purpose of the AI, its expected benefits, and its potential risks.  The minimum practices also require the agency to test AI for performance in a real-world context and conduct ongoing monitoring of the system.  Among other requirements, the agency will be responsible for identifying and assessing AI’s impact on equity and fairness and taking steps to mitigate algorithmic discrimination when present.  The guidance presents these practices as an initial baseline and requires agencies to identify additional context-specific risks for relevant use cases, to be addressed by applying AI risk management best practices such as those in the National Institute of Standards and Technology (NIST) AI Risk Management Framework.  The guidance also calls for human oversight of safety- and rights-impacting AI decision making and for remedy processes for affected individuals.  Agencies must implement these minimum practices no later than December 1, 2024.

Separately but relatedly, on March 29, 2024, OMB issued a request for information (RFI) to inform future action governing the responsible procurement of AI under federal contracts.  The RFI seeks responses to several questions designed to provide OMB with information that will enable it and federal agencies to craft contract language and requirements that further agency AI use and innovation while managing risk and performance.  Responses to these questions, as well as any other comments on the subject, are due by April 29, 2024.

Yaron Dori

Yaron Dori is co-chair of the Communications & Media Practice Group. He practices primarily in the area of telecommunications, privacy and consumer protection law, with an emphasis on strategic planning, policy development, commercial transactions, investigations and enforcement, and overall regulatory compliance. Mr. Dori advises clients on, among other things, federal and state wiretap and electronic storage provisions, including the Electronic Communications Privacy Act (ECPA); regulations affecting new technologies such as online behavioral advertising; and the application of federal and state telemarketing, commercial fax, and other consumer protection laws to voice, text and video transmissions sent to wireless devices and alternative distribution platforms. Mr. Dori also has experience advising companies on state medical marketing privacy provisions, and, more broadly, on state attorney general investigations into a range of consumer protection issues.

Ryan Burnette

Ryan Burnette advises clients on a range of issues related to government contracting. Mr. Burnette has particular experience with helping companies navigate mergers and acquisitions, FAR and DFARS compliance issues, public policy matters, government investigations, and issues involving government cost accounting and the Cost Accounting Standards.  Prior to joining Covington, Mr. Burnette served in the Office of Federal Procurement Policy in the Executive Office of the President, where he worked on government-wide contracting regulations and administrative actions affecting more than $400 billion dollars’ worth of goods and services each year.

Jayne Ponder

Jayne Ponder is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity Practice Group. Jayne’s practice focuses on a broad range of privacy, data security, and technology issues. She provides ongoing privacy and data protection counsel to companies, including on topics related to privacy policies and data practices, the California Consumer Privacy Act, and cyber and data security incident response and preparedness.

Vanessa Lauber

Vanessa Lauber is an associate in the firm’s New York office and a member of the Data Privacy and Cybersecurity Practice Group, counseling clients on data privacy and emerging technologies, including artificial intelligence.

Vanessa’s practice includes partnering with clients on compliance with federal and state privacy laws and FTC and consumer protection laws and guidance. Additionally, Vanessa routinely counsels clients on drafting and developing privacy notices and policies. Vanessa also advises clients on trends in artificial intelligence regulations and helps design governance programs for the development and deployment of artificial intelligence technologies across a number of industries.