On September 24, 2024, the Office of Management and Budget (OMB) released Memorandum M-24-18, Advancing the Responsible Acquisition of Artificial Intelligence in Government (Memo). The 36-page Memo builds on OMB’s March 2024 guidance governing federal agencies’ use of AI, Memorandum M-24-10, which we reported on here. The Memo addresses requirements and guidance for agencies acquiring AI systems and services, focusing on three strategic goals: (i) ensuring collaboration across the federal government; (ii) managing AI risks and performance; and (iii) promoting a competitive AI market.
Scope and Applicability
The Memo’s requirements will apply to contracts awarded under solicitations issued on or after March 23, 2025, as well as to any renewal options or extensions exercised after March 23, 2025.
The Memo addresses government-wide considerations associated with agencies’ procurement of an AI system or service. For this purpose, the Memo defines “AI System” to include AI applications and AI integrated into other systems or agency business processes, but does not include “common commercial products” with embedded AI functionality (e.g., common commercial map navigation applications or word processing software that has substantial non-AI purposes or functionalities but for which AI is embedded for functions like suggesting text or correcting spelling and grammar).
The Memo also does not apply to AI acquired by elements of the Intelligence Community or acquired for use as a component of a “National Security System” as defined under 44 U.S.C. § 3552(b)(6), and does not apply to:
- contractors’ incidental use of AI during contract performance (e.g., AI used at the option of a contractor when not directed or required to fulfill requirements);
- AI acquired to carry out basic, applied, or experimental research (except where the purpose of such research is to develop particular AI applications within the agency);
- regulatory actions designed to prescribe law or policy regarding AI generally; or
- evaluations of particular AI applications because the AI provider is the target or potential target of a regulatory enforcement, law enforcement, or national security action.
Cross-Functional and Interagency Collaboration
The Memo directs agencies to formalize internal acquisition policies, procedures, and practices to reflect AI acquisition requirements and requires agencies to submit proof of implementation progress and agency-wide coordination to OMB by March 2025. The Memo further directs agencies to collaborate and share information about AI acquisition through interagency councils and other efforts “to strengthen the marketplace over time by increasing predictability and standardizing expectations for vendors.” According to the Memo, information collected by agencies on AI acquisition should be shared publicly where possible to provide clarity to contractors, including new entrants.
Managing AI Risks and Performance
The bulk of the Memo is dedicated to best practices and specific requirements for managing AI risk and performance, directing agencies to prioritize privacy, security, data ownership, and interoperability when planning for an AI acquisition.
Identifying AI Acquisitions and Communicating AI Requirements to Contractors
To help identify when AI covered by Memorandum M-24-18 is being acquired and to communicate AI requirements to contractors, the Memo directs agency officials responsible for acquisition planning, requirements development, and proposal evaluation to:
- Communicate to contractors, to the greatest extent practicable, whether the acquired AI system or service is intended to be used in a manner that could impact rights or safety and trigger additional risk management requirements.
- In cases where an agency’s solicitation does not explicitly ask for an AI system, consider including requirements language that asks contractors to report any proposed use of AI as part of their proposal submissions.
- Require contractors to notify, and receive acceptance from, relevant agency stakeholders before integrating new AI features or components into systems and services delivered under the contract.
- Communicate with contractors to determine when AI is a primary feature or component of an acquired system or service, including by asking contractors whether AI is being used in the evaluation or performance of a contract that does not explicitly involve AI.
Requirements Related to Privacy, Civil Liberties, and Civil Rights
The Memo includes various recommendations and requirements that could create affirmative obligations for contractors, including that:
- Agencies should consider including as part of their evaluation criteria how AI vendors demonstrate they are protecting personally identifiable information and mitigating privacy risks, including through privacy-enhancing technologies.
- Agencies should ensure contractual terms address requirements for vendors to submit systems that use facial recognition for evaluation by NIST as part of the Face Recognition Technology Evaluation (FRTE) and Face Analysis Technology Evaluation (FATE), where practicable.
- Agencies should require vendors to identify potential AI biases and mitigation strategies to address biases.
Managing Performance and Risk for Acquired AI
The Memo suggests that agencies leverage performance-based contracting approaches and techniques to strengthen their ability to effectively plan for, identify, and manage risk throughout the contract lifecycle.
Intellectual Property Rights and Ownership
The Memo requires agencies to scrutinize terms of service and licensing terms, including those that specify what information, models, and transformed agency data should be provided as deliverables to avoid vendor lock-in, and to conduct careful due diligence on the supply chain of a vendor’s data. The best practices outlined in the Memo include contractual restrictions on using agency information to train AI systems.
Data Management
The Memo requires that contract terms explicitly address how a vendor will ensure compliance with relevant data management directives and policies (e.g., through a quality management system), particularly with respect to (i) data that is generated before, during, or after the delivery of the AI; (ii) tiered levels of access and requisite responsibilities of handling data; and (iii) disclosures when copyrighted materials are used in the training data.
Cybersecurity
According to the Memo, agencies should include contractual requirements that facilitate the ability to obtain any documentation and access necessary to understand how a model was trained. For example, agencies may request training logs from a contractor, including evidence of any data sourcing, cleansing, inputs, parameters, or hyper-parameters used during training sessions for models delivered to the government. Contractors may also be asked to provide detailed documentation of the training procedure used for the model to demonstrate the model’s authenticity, provenance, and security, and to make trained model artifacts available for agency evaluation and review.
Rights-Impacting AI and Safety-Impacting AI
Where practicable, agencies must disclose in solicitations whether the planned use is rights-impacting or safety-impacting. Agencies must consider whether various categories of information must be provided by the vendor to satisfy the requirements of OMB Memorandum M-24-10 or to meet the agency’s objectives, including, e.g., performance metrics and information about data source, provenance, selection, quality, and appropriateness and fitness-for-purpose. Contracts should delineate responsibilities for ongoing testing and monitoring, set criteria for risk mitigation, and prioritize performance improvement. Contractors could also be required to have a process for identifying and disclosing serious AI incidents and malfunctions of an acquired AI system or service within 72 hours, or in a timely manner based on the severity of the incident.
For new or existing contracts involving agency use of rights-impacting AI systems or services, agencies must disclose OMB Memorandum M-24-10’s notice and appeal requirements to contractors and require cooperation with those requirements.
Generative AI
The Memo includes best practices for agencies acquiring general use enterprise-wide generative AI, including contractual requirements for vendors to provide transparency about generated content, protect against inappropriate use, prevent harmful and illegal output, provide evaluation and testing documentation, and mitigate environmental impacts.
Innovative Acquisition to Promote Competition
The Memo calls on agencies to foster a competitive AI marketplace, including by establishing contractual requirements designed to minimize vendor lock-in; prioritizing interoperability and transparency; and leveraging innovative acquisition practices to secure better contract outcomes. Appendix I of the Memo outlines actions agencies should take to promote such innovative practices.
Key Takeaways
Memorandum M-24-18 further signals the government’s shift from discussing general principles for AI to creating rules governing the procurement and use of AI in contracts. Contractors should expect continued movement toward regulation and should pay close attention to solicitations that may impose reporting requirements around, for example: (a) the proposed use of an AI system; (b) the use of new AI features; (c) the protection of personally identifiable information (PII); (d) the data used to train AI models; (e) data accountability; (f) how the AI system is tested and validated; and (g) how bias will be mitigated.
We would like to thank Lillie Drenth, Law Clerk, for her contribution to this alert.