On March 6, 2026, the General Services Administration (“GSA”) published a draft contract clause, GSAR 552.239-7001, “Basic Safeguarding of Artificial Intelligence Systems,” that would establish binding requirements for contractors using artificial intelligence (“AI”) under GSA Multiple Award Schedule (“MAS”) contracts.
The clause is part of a broader federal push to govern AI procurement.[1] Because the clause would apply to any contract “for Artificial Intelligence capabilities” and could be incorporated into both new and existing GSA schedule contracts, its reach is potentially wide.
Clause Requirements and Restrictions
The draft clause would impose six core obligations on contractors:
- Disclose AI systems used in performance. Contractors must, within 30 days of contract award, identify any AI system used in performing the contract, including whether that system was modified to comply with a foreign government or commercial regulatory framework.
- Use “American AI systems.” The clause expressly prohibits contractors from using foreign AI systems, or AI systems whose components were “manufactured, developed, or controlled by non-U.S. entities,” in the performance of the contract. This requirement aligns with current federal efforts to strengthen America’s domestic AI industry.
- Enable human oversight. Contractors must give the Government visibility into how AI systems operate, including a summary of all intermediate steps between data input and output, the methods the system uses to retrieve data, and the rationale behind its decisions.
- Report security incidents promptly. Contractors must report any incident as defined under the Federal Information Security Modernization Act (44 U.S.C. § 3552(b)(2))[2] within 72 hours and provide daily status updates thereafter.
- Establish feedback mechanisms. Contractors must create a process through which the Government can submit feedback, request modifications or enhancements, and flag operational concerns.
- Maintain documentation. Contractors must be able to produce existing commercial documents and disclosures demonstrating compliance with the clause, including system documentation consistent with NIST AI Risk Management Framework guidelines.
Contractor Responsibility for Third-Party AI Providers
One of the more significant, and potentially burdensome, provisions is the clause’s treatment of “Service Providers,” defined as entities that directly or indirectly provide, operate, or license an AI system but are not parties to the contract. Even though Service Providers are not parties to the GSA Schedule contract, the clause expressly makes contractors responsible for their Service Providers’ compliance with its requirements. Typically, a contractor is responsible for its subcontractors, not for independent third parties. The clause, however, defines “Service Provider” to include entities that “may or may not be subcontractors.”
This means that contractors could be held responsible for ensuring that any company whose AI system they use in performance complies with the clause, even if that company’s only connection to the contract is a standard commercial platform agreement. This demands a level of supply chain visibility that many contractors, particularly smaller businesses, may simply not have. Navigating these obligations with third-party AI vendors will require careful attention.
Additionally, the requirement that AI systems be “American AI systems” means that contractors would need full visibility into the supply chain of the AI systems they use. They would be responsible for ensuring that their Service Providers’ AI components were not manufactured, developed, or controlled by non-U.S. entities. This could be difficult to verify in practice, given the global nature of the supply chain for AI infrastructure.
Government Rights, Data Restrictions, and AI System Use Limitations
While contractors and their Service Providers would retain ownership of their underlying AI systems, the clause grants the Government ownership of any “Custom Developments” (defined broadly to include modifications, customizations, configurations, or enhancements) made to the AI system for the contract. Under the clause, contractors and Service Providers may not use those Custom Developments for non-contract purposes without the Government’s explicit authorization. For example, if a contractor finds that a Custom Development could also improve the commercial version of its AI system, it would need explicit written authorization from the Contracting Officer before making that improvement. The Government also reserves the right to use the AI system “for any lawful Government purpose.”
One concern regarding AI systems has been safeguarding confidential government information. To that end, the clause prohibits contractors and Service Providers from using Government Data (defined as data inputs and outputs) to train or improve their AI systems. For contractors relying on third-party AI platforms, this means implementing compliance mechanisms to ensure that obligation is met in practice. The clause also states that all Government Data must be segregated from commercial data, deleted at the conclusion of the contract, and certified as deleted in writing. Additionally, the clause requires contractors to notify the Government if certain changes are made to the AI system during performance.
Finally, the clause requires contractors to make “commercial efforts” to ensure their AI systems follow “Unbiased AI Principles.” The clause specifically identifies Diversity, Equity, and Inclusion (“DEI”) as an “ideological dogma” that AI systems may not be programmed to favor, consistent with the current Administration’s broader stance on DEI.[3] This could require contractors to revisit, and potentially renegotiate, their existing agreements with providers of American AI systems. The Government also retains the right under the clause to assess AI systems for compliance and may suspend use of a system until any issues are resolved to its satisfaction. Contractors could additionally face liability for “reasonable decommissioning costs” if their contract is terminated for failure to comply with the “Unbiased AI Principles.”
Key Takeaways
Although the clause is not yet final, contractors should treat it as a preview of how other agencies may incorporate federal AI procurement policies into their contracts. In the near term, contractors should take the following steps to prepare for these new compliance requirements:
- Assess their direct and indirect AI use.
- Determine whether the systems they rely on would qualify as “American AI systems” under the clause’s definition.
- Review their agreements with AI Service Providers to ensure they can deliver on the clause’s requirements.
- Begin developing internal processes for incident reporting, data segregation and deletion, and system documentation.
Getting ahead of these requirements before they become binding will put contractors in a far stronger position if and when the clause goes into effect.
GSA has extended the public comment period for the draft clause from March 20, 2026, to April 3, 2026. Interested parties may submit comments to maspmo@gsa.gov.
[1] Other examples include the Office of Management and Budget’s Memorandum M-25-22, “Driving Efficient Acquisition of Artificial Intelligence in Government,” and Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence.”
[2] The statute defines “incident” as an occurrence that actually or imminently jeopardizes, without lawful authority, the integrity, confidentiality, or availability of information or an information system, or that constitutes a violation or imminent threat of violation of law, security policies, security procedures, or acceptable use policies.
[3] Examples include Executive Order 14319, “Preventing Woke AI in the Federal Government” and Office of Management and Budget Memorandum M-26-04, “Increasing Public Trust in Artificial Intelligence Through Unbiased AI Principles.”