Contractors interested in offering federal agencies artificial intelligence (AI) can now glean insight into how agencies are expected to conduct AI acquisitions. On September 24, 2024, the Office of Management and Budget (OMB) issued Memorandum M-24-18, Advancing the Responsible Acquisition of Artificial Intelligence in Government (the Memorandum), providing guidance and directing agencies “to improve their capacity for the responsible acquisition of AI” systems or services, including subcomponents. The Memorandum builds on the White House’s Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and OMB Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. Taking effect on March 23, 2025, M-24-18 will apply to all solicitations and contract option exercises for AI systems covered under the Memorandum.

Although contractors are not the Memorandum’s intended audience, they would do well to understand and consider its contents, particularly the acquisition strategies agencies must implement to procure AI responsibly. OMB identifies three areas in which agencies must act to improve their capacity to conduct AI acquisitions: information sharing, risk management, and the promotion of competition. Contractors can expect emphasis in the following areas when offering an AI system or application to the government:

  • Greater use of performance-based acquisition techniques under Federal Acquisition Regulation (FAR) Subpart 37.6 to evaluate AI system acquisition proposals and perform post-award monitoring.
  • New contract terms that attempt to delineate the government’s and contractors’ ownership and intellectual property (IP) rights. This includes requiring contractors to provide appropriate rights to code, data, and AI models first produced in the performance of the contract. Notably, contractors should be prepared for agencies to negotiate unlimited rights in certain deliverables, including assignment of copyright in deliverables, to avoid vendor lock-in.
  • Greater transparency on licensing terms and total life cycle cost and pricing to government customers. Agencies will be on the lookout for unduly restrictive terms that limit an agency’s ability to use acquired AI systems.
  • Responding to requests for oral presentations, pilots, trials, and other proofs of concept. AI is still new and not widely understood, so expect agencies to seek a more informed analysis of vendor offerings and to leverage collective knowledge-sharing platforms and portals.
  • Shorter contract durations as the agencies look to promote competition and avoid vendor lock-in. There will also be a push for refined or limited tasks as agencies try to diversify technical elements composing an AI system or service through multiple contracts or modular contracting. A hearty protest may be warranted if solicitation requirements are not described in vendor- and model-agnostic ways.
  • Agency use of on-ramp procedures to allow the onboarding of both small and large contractors onto long-term contracts, along with consideration of non-incumbent providers of AI systems even when a current provider offers similar services.

Although not revolutionary, Memorandum M-24-18 provides solid agency guidance for conducting AI acquisitions. As noted above, understanding its intricacies also provides significant insight for contractors preparing AI products for government customers; when entering the new AI products and services market, it does not hurt to be well-versed in what agencies are looking for. Thus, a breakdown of key points in the Memorandum follows, detailing the acquisition posture that agencies will assume when competing and negotiating their AI acquisitions.

The Scope: What Qualifies as AI?

At its heart, the Memorandum applies to AI systems as defined in Section 7223 of the Advancing American AI Act, as well as to subcomponents of AI systems. Codified at 40 U.S.C. § 11301 note, the definition provides that an AI system means:

  • any data system, software, application, tool, or utility that operates in whole or in part using dynamic or static machine learning algorithms or other forms of artificial intelligence, whether-
    • the data system, software, application, tool, or utility is established primarily for the purpose of researching, developing, or implementing artificial intelligence technology; or
    • artificial intelligence capability is integrated into another system or agency business process, operational activity, or technology system; and
  • does not include any common commercial product within which artificial intelligence is embedded, such as a word processor or map navigation system.

Contractors should take note that a “common commercial product” is excluded from the definition of an AI system and, therefore, outside the Memorandum’s requirements. Of course, what constitutes a common commercial product is a fact-specific inquiry to be conducted by agencies. Generally, to determine whether an AI application or product qualifies as a common commercial product, the Memorandum recommends agencies consider (1) whether the product is widely available to the public for commercial use, as opposed to products that are not readily available to the general public or are specialized or customized for agency use, and (2) whether the AI is embedded in a product that has substantial non-AI purposes or functionalities, as opposed to products for which AI is a primary purpose or functionality.

However, an unresolved question is the interplay between a common commercial product described in the Memorandum and commercial products as understood in the FAR. Under FAR Part 12, when procuring commercial products or services, the government generally acquires “only the technical data and the rights in that data customarily provided to the public with a commercial product or process.” FAR 12.211. Similarly, when the government procures computer software, “[c]ommercial computer software or commercial computer software documentation shall be acquired under licenses customarily provided to the public to the extent such licenses are consistent with Federal law and otherwise satisfy the Government’s needs.” FAR 12.212(a). However, as discussed later, the Memorandum directs agencies to negotiate and seek IP rights that are in tension with rights typically conferred on commercial products and services under FAR Part 12. This should be an area of close contractor and agency scrutiny moving forward.

Managing AI Risks and Performance: Measure Twice, Cut Once

Recognizing the “complex nature of how AI systems are built, trained, and deployed,” the Memorandum advises agencies to prioritize privacy, security, data ownership, and interoperability to manage AI risks. Its principles account for the distinct ways AI systems are developed, trained, and deployed and are intended to supplement existing risk management policies and practices.

To manage AI risks effectively, agencies must proactively identify, early in the process, whether an acquisition includes AI covered under the Memorandum. This may include inserting language in an agency’s solicitation stating that the agency intends to procure an AI system or application. When a solicitation does not explicitly request AI, agencies may consider requiring vendors to state in their proposals whether the use of AI is proposed and whether AI is a primary feature or a component of the system or service being acquired.

Internally, early engagement of relevant stakeholders is critical to managing AI risks. Agencies are to engage senior agency officials for privacy and the privacy program office early and throughout the acquisition process to manage privacy risks. Other relevant stakeholders are to be engaged to address risks arising from potential AI biases and to identify mitigation strategies.

Contractually, OMB directs agencies to ensure AI acquisitions include terms prohibiting a contractor’s delivery of AI systems or applications that are designed, trained, or developed contrary to federal law or policy. This may include requiring the delivery of supporting documentation or test results to allow an agency to independently validate and assess the capabilities of an AI system to determine whether it performs as stated. AI-based biometric acquisitions can include contract terms prohibiting a contractor from training an AI system or application on unreliable or unlawfully collected data.

For AI-based facial recognition technology, agencies may consider requiring vendors to “submit systems that use facial recognition for evaluation by NIST [the National Institute of Standards and Technology] as part of the Face Recognition Technology Evaluation and Face Analysis Technology Evaluation.” The Face Recognition Technology Evaluation and Face Analysis Technology Evaluation are part of NIST’s Face Projects, which provide “independent government evaluations of commercially available and prototype face recognition technologies.” These evaluations provide government and law enforcement agencies with information on where and how facial recognition technology can best be deployed and direct future research to improve the technology.

Cybersecurity and incident reporting are other areas in which to manage AI risks. Because understanding how an acquired AI system works is necessary to recognize what risks may arise, the Memorandum recommends that agencies include contractual requirements to facilitate an agency’s ability to obtain any documentation and access necessary to understand how an AI model is trained. Documentation may include training logs and other applicable data necessary to reproduce and validate the AI training methodology. The Memorandum also recommends that agencies consider contractual requirements that establish baseline expectations for model training using agency data to better understand how a model is expected to perform with such data.
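
For illustration only, the sketch below shows one hypothetical form such reproducibility documentation could take: a machine-readable training-run manifest recording the datasets, hyperparameters, and random seed an agency would need to independently re-run and validate training. The field names and structure are assumptions for this example, not terms prescribed by the Memorandum.

```python
# Hypothetical training-run manifest a vendor might deliver so an agency can
# reproduce and validate how a model was trained. All field names are illustrative.
import hashlib
import json
from dataclasses import dataclass, field, asdict
from typing import List


@dataclass
class DatasetRecord:
    name: str
    uri: str
    sha256: str          # hash of the dataset snapshot actually used
    license: str


@dataclass
class TrainingRunManifest:
    model_name: str
    model_version: str
    framework: str                      # e.g., "pytorch 2.3"
    random_seed: int
    hyperparameters: dict
    datasets: List[DatasetRecord] = field(default_factory=list)
    training_log_uri: str = ""          # pointer to the full training logs

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


def fingerprint_dataset(path: str) -> str:
    """Hash a dataset file so the agency can confirm the same data is used on re-runs."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    manifest = TrainingRunManifest(
        model_name="claims-triage-classifier",      # hypothetical system
        model_version="1.0.0",
        framework="pytorch 2.3",
        random_seed=42,
        hyperparameters={"epochs": 10, "learning_rate": 3e-4, "batch_size": 64},
        datasets=[DatasetRecord(
            name="agency-claims-2023",
            uri="s3://example-bucket/claims-2023.parquet",
            sha256="<computed with fingerprint_dataset()>",
            license="government data rights per contract",
        )],
        training_log_uri="s3://example-bucket/logs/run-0001/",
    )
    print(manifest.to_json())
```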

Additionally, the Memorandum recommends that agencies consider contract terms to address contractor compliance with relevant agency requirements regarding data protection, including software controls for privacy and security. In addition to applicable cybersecurity and incident-reporting requirements, agencies are to consider additional terms requiring a contractor to identify and report AI incidents and malfunctions to the agency, e.g., within 72 hours or another specified period, depending on the incident’s severity.

Generative AI: Creating Risks

The Memorandum provides specific guidance to manage risk for contractors offering generative AI (GenAI) to the government. GenAI systems create content in response to a user’s prompt; the created (or “generated”) content can be text, audio, visual, or a combination. Examples of GenAI systems are ChatGPT by OpenAI and Gemini by Google.

Given the risks GenAI presents if abused, e.g., deepfakes, the Memorandum identifies the following risk management practices agencies should consider when procuring GenAI:

  • Include contractual requirements for methods to distinguish generated content from reality using watermarks, cryptographically signed metadata, or other technical artifacts (a sketch of signed provenance metadata follows this list).
  • Require contractors to document how GenAI was or will be trained and evaluated, including relevant information about data, model architecture, and relevant evaluations.
  • Include contract terms requiring the contractor to implement controls to mitigate the risk of harmful or illegal output by having appropriate policies, procedures, and technical measures to filter training data, prompts, and outputs. This can be accomplished by providing the agency with documentation on the components or capabilities the vendor has put in place or that the agency can configure.
  • Include contract terms requiring the contractor to provide detailed documentation to allow the agency to understand the testing and evaluation conducted to develop the GenAI application, including red-teaming results and steps to mitigate issues.
  • Given the necessary resource consumption by supporting data centers, consider requiring the disclosure of environmental impacts and mitigation efforts related to carbon emissions and resource consumption. Agencies may consider incentivizing contractors that prioritize renewable energy and AI system efficiencies tied to performance-based acquisitions.
  • Consider leveraging GenAI used by other agencies for other government needs to avoid the duplicative and redundant development of AI systems or services.
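
As to the first practice above, the following is a minimal sketch, assuming an Ed25519 key pair and the third-party Python cryptography library, of how cryptographically signed provenance metadata might bind generated content to the model that produced it. The metadata fields and workflow are illustrative assumptions, not a format specified by the Memorandum.

```python
# Minimal sketch: signing and verifying provenance metadata for generated
# content with an Ed25519 key pair (uses the third-party "cryptography" package).
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_generated_content(private_key: Ed25519PrivateKey,
                           content: bytes, model_id: str) -> dict:
    """Produce a provenance record binding the content to the generating model."""
    metadata = {
        "model_id": model_id,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": "vendor-genai-service",  # hypothetical identifier
    }
    signature = private_key.sign(json.dumps(metadata, sort_keys=True).encode())
    return {"metadata": metadata, "signature": signature.hex()}


def verify_generated_content(public_key: Ed25519PublicKey,
                             content: bytes, record: dict) -> bool:
    """Return True only if the content and its metadata match the signature."""
    metadata = record["metadata"]
    if hashlib.sha256(content).hexdigest() != metadata["content_sha256"]:
        return False  # content was altered after signing
    try:
        public_key.verify(bytes.fromhex(record["signature"]),
                          json.dumps(metadata, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    output = b"Generated summary of the draft policy memo..."
    record = sign_generated_content(key, output, model_id="genai-model-v2")
    print(verify_generated_content(key.public_key(), output, record))       # True
    print(verify_generated_content(key.public_key(), b"tampered", record))  # False
```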

Accordingly, contractors in the GenAI products and services market should expect agencies to demand more as part of their requests than simply providing the product. Time and care should be spent not only negotiating the full scope of available terms and conditions but also preparing manuals and materials that properly describe how the GenAI product is trained and how it will be used by the federal government. The guidance also suggests that one government customer could lead to many more, so structuring deals that entice one client may put the contractor in a position to serve many, provided, of course, the product continues to deliver compliantly.

Avoiding Vendor Lock-In: Competition Is the Fuel That Ignites Innovation

A notable theme of the Memorandum is the importance of competition in this still-nascent marketplace. The Memorandum directs agencies to promote competition and avoid vendor lock-in so that the nascent AI marketplace grows and thrives. Although many recommendations are neither new nor groundbreaking, the Memorandum emphasizes that agencies should prioritize interoperability with respect to data sharing, data portability, and AI model portability. Portability minimizes the risk of vendor lock-in, high switching costs, and other negative competitive effects. Interoperability, for purposes of the Memorandum, refers “to the ability of two or more systems, products, or components to exchange information and use the information that has been exchanged, including to operate effectively together. This includes ensuring that open and standard data formats and application programming interfaces (APIs) are used so that foundational components can be used, including to build for new use cases, without obscure proprietary technologies or licensing.”

To avoid vendor lock-in and promote competition, the Memorandum recommends the following:

  • Require contractors to commit to knowledge transfer to appropriate agency staff, including the training and documentation necessary to switch vendors or AI systems.
  • Prohibit contractors from employing contract terms that limit an agency’s ability to share pricing information.
  • Require the delivery of well-defined APIs, particularly within acquired architectures that promote interoperability with other system components (see the interface sketch following this list).
  • Require robust documentation regarding AI model development, coding languages used, testing scripts, and protocols.
  • Require transparent and nondiscriminatory pricing practices that offer products without bulk pricing arrangements, tying arrangements, steering arrangements, minimum spend requirements, or agreements that encourage consolidation with one vendor. Require contractors to refrain from self-preferencing vertically integrated systems or services and to provide information about which subcontractors, including system integrators, were engaged, how they were selected, and how their involvement impacts price.
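
To illustrate the API and portability points above, the following sketch shows a hypothetical vendor-agnostic inference interface: agency code depends only on a small, documented contract with standard, JSON-serializable inputs and outputs, and each vendor’s system sits behind an adapter, so switching vendors means swapping adapters rather than rewriting agency systems. All interface and class names are illustrative assumptions, not drawn from the Memorandum.

```python
# Sketch of a vendor-agnostic inference interface: agency code depends only on
# this small contract, so swapping AI vendors means swapping adapters, not
# rewriting downstream systems. All names here are illustrative.
import json
from dataclasses import dataclass
from typing import Protocol


@dataclass
class InferenceRequest:
    document_text: str
    task: str                     # e.g., "summarize" or "classify"


@dataclass
class InferenceResponse:
    output_text: str
    model_id: str
    confidence: float


class InferenceProvider(Protocol):
    """The documented, open interface every vendor adapter must satisfy."""

    def infer(self, request: InferenceRequest) -> InferenceResponse: ...


class VendorAAdapter:
    """Wraps a hypothetical vendor's API behind the standard interface."""

    def infer(self, request: InferenceRequest) -> InferenceResponse:
        # In practice this would call the vendor's documented API and map its
        # proprietary response format into the standard one.
        return InferenceResponse(output_text=f"[vendor A] {request.task} result",
                                 model_id="vendor-a-model-1", confidence=0.9)


class VendorBAdapter:
    """A second vendor can be onboarded without touching agency code."""

    def infer(self, request: InferenceRequest) -> InferenceResponse:
        return InferenceResponse(output_text=f"[vendor B] {request.task} result",
                                 model_id="vendor-b-model-7", confidence=0.87)


def process_document(provider: InferenceProvider, text: str) -> str:
    """Agency workflow written against the interface, not a specific vendor."""
    response = provider.infer(InferenceRequest(document_text=text, task="summarize"))
    # Standard, JSON-serializable output supports portability and audit.
    return json.dumps(response.__dict__)


if __name__ == "__main__":
    for provider in (VendorAAdapter(), VendorBAdapter()):
        print(process_document(provider, "Draft acquisition plan..."))
```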

Here again, savvy contractors need to recognize the scope of the relationship they may build when offering AI to a federal customer. There is a high likelihood that the contractor/federal customer relationship will be fast-paced and will require the contractor to provide or make available many additional elements and to respond to requests aimed at a better understanding of the technology being delivered. That could become quite expensive if it is not properly contracted for. Additionally, as with the greater scope of IP rights on the table described above, this level of transparency may be uncomfortable for some long-established federal contractors. However, the government is clearly worried that the novelty of the technology—and perhaps its rapid adoption—may lead some agencies to build systems from which there is no easy or affordable escape.

In sum, the AI market is surely building up and out. That means contractors interested in or already offering AI to the government should expect a greater emphasis on performance-based acquisitions and modular contracting, governed under FAR Subpart 37.6 and FAR 39.103, respectively. Contractors should also prepare for additional deliverable requirements documenting their AI systems’ design, development, and deployment. Last, contractors should expect an even greater focus on negotiating data rights as agencies implement and comply with the Memorandum’s requirements. If this describes your company or its intent, you would do well to prepare now, before the Memorandum takes effect on March 23, 2025.

Alex Major

Mr. Major is a partner and co-leader of the firm’s Government Contracts & Export Controls Practice Group. Mr. Major focuses his practice on federal procurement, cybersecurity liability and risk management, and litigation. A prolific author and thought leader in the area of cybersecurity, his professional experience involves a wide variety of litigation and counseling matters dealing with procurement laws, federal regulations, and standards. His diverse experience includes complex litigation in federal court under the qui tam provisions of the False Claims Act and bid protest actions. He counsels companies of all sizes on issues relating to compliance with government regulations, including, among other things, cybersecurity (NIST, FIPS, FedRAMP, and DFARS) requirements, multiple award schedule compliance, Section 508 issues, country of origin requirements under the Buy American and Trade Agreements Acts, cost accounting, and small business requirements. He also regularly conducts internal investigations to help companies ensure that they are in full compliance with the law.