GSA Proposes New Contract Clause Focused on the Government Use of AI

On March 6, 2026, the U.S. General Services Administration (GSA) issued a draft version of a new contract clause, GSAR 552.239-7001, “GSA Federal Acquisition Service Proposed Government AI System Terms and Conditions.” The draft clause is prescribed for insertion in GSA “solicitations and contracts for Artificial Intelligence capabilities.” While GSA is seeking industry comment on the draft clause through March 20, 2026, it has also indicated that it will incorporate the clause into all existing and new Multiple Award Schedule (MAS) contracts through the much-anticipated next “mass modification,” Refresh 31. The draft clause attempts to define a number of terms and concepts relevant to how contractors incorporate AI into their performance of contracts for GSA and other MAS customers, but much remains unclear.

How did we get here?

In the wake of the decision by the president and the Department of War (DoW) to designate Anthropic a supply chain risk, the GSA has taken swift action, including removing Anthropic from GSA contract programs. This newest action appears to follow suit: GSA would mandate that contractors ensure their AI providers agree to, among other requirements, the government’s use of an AI system or service for any “lawful Government purpose,” which was one of the conditions Anthropic took issue with in its much-publicized dispute with the DoW.

The draft clause also implements, as a contractual matter, an executive order and guidance from the Office of Management and Budget (OMB) related to “Unbiased AI Principles,” which generally involve requirements for “truth-seeking” and “ideological neutrality.”

What’s in the draft contract clause?

The proposed clause would mandate that agencies that buy through a GSA-administered contract (such as a MAS contract, but other GSA contracts as well), regardless of the terms of an otherwise applicable commercial license, receive an “irrevocable, royalty-free, non-exclusive license to use the AI System for the duration of th[e] contract for any lawful Government purpose” (emphasis added). The license would also include the government’s right to “[i]ntegrate the AI system with Government systems as necessary for any lawful Government purpose.” Moreover, this license would be imposed not only upon a prime contractor that is itself an AI technology company offering its solution to the government, but also upon a prime contractor using another vendor’s AI products in performance of a covered contract. The latter would be required to secure these rights from any service provider whose AI solutions are used in performance of the contract (whether that service provider is a subcontractor or vendor of the prime contractor, at any tier below the prime contract level).

Additional compliance requirements under the draft clause include:

  1. Disclosing all AI systems used in performance
  2. Using only “American AI Systems” in contract performance
  3. Providing AI systems that enable human oversight by government officials (e.g., summarize intermediate reasoning steps, provide transparency around paths, retrieval methods, sources, etc.)
  4. Reporting security incidents via CISA forms within 72 hours and providing daily status updates
  5. Notifying the government in advance of material changes to the AI system or service providers and providing access to successor models
  6. Making “commercial efforts to ensure the AI system” used in performance complies with the “Unbiased AI Principles” referenced above, which include, among other things, that “AI system must be a neutral, nonpartisan tool that does not manipulate responses in favor of ideological dogmas such as Diversity, Equity, Inclusion”
  7. Permitting government evaluation and remediation rights
  8. Providing documentation of compliance with the clause upon government request, including system documentation consistent with the NIST AI Risk Management Framework. (Note: The government can suspend its use of the AI system and pursue decommissioning costs if the contractor fails to meet the Unbiased AI Principles.)

What contracts will this apply to and when?

Once finalized, GSA contracting officers will be required to insert the clause:

  1. In all GSA-issued solicitations and contracts when AI capabilities are contemplated in the course of performance and
  2. In all new and existing MAS contracts, such that the clause will apply to any AI systems or services offered under MAS contracts

In terms of timing, as noted, GSA is seeking feedback from industry by March 20, 2026, but has also announced that it intends to incorporate this clause into all MAS contracts via Refresh 31, whereby contractors will be given 60 days to accept the clause. While no date certain for the issuance of Refresh 31 has been given, the refresh was originally expected in February, so it stands to reason that GSA will issue it as soon as it is comfortable with the clause as constituted.

In theory, GSA contracting officers have the authority to grant exceptions from “mass modifications” under the MAS program; in practice, however, GSA contracting officers often refuse to grant exceptions to these modifications. GSA’s MAS Modification Guide states that GSA may decline to approve contractors’ own modification requests, or exercise option periods, if a contractor has not acted on mass modifications implementing refreshes. In addition, GSA has previously threatened to revoke MAS contractors’ ability to receive new orders until they have accepted certain MAS modifications (e.g., the contractor vaccine mandate).

What concerns might industry have?

As in any other new area, much remains to be determined, and that is certainly the case with one of the first federal contract clauses dealing with AI. In this regard, at first blush a number of issues arise, some of which have been raised by the Anthropic dispute, and others simply as a matter of contract interpretation. These issues may include, but are certainly not limited to, the following:

  • Are all the defined terms clear? For example, “Artificial Intelligence (AI) System” relies upon the definition in the Advancing American AI Act (Section 7223(4) of Pub. L. 117-263) without detailed implementing guidance. The statute defines an artificial intelligence system in part as “any data system, software, application, tool, or utility that operates in whole or in part using dynamic or static machine learning algorithms or other forms of artificial intelligence,” but specifically excludes “any common commercial product within which artificial intelligence is embedded, such as a word processor or map navigation system.” OMB has provided some guidance regarding the “embedded” exception (e.g., “word processing software that is primarily used for its AI functionality likely would” not fall within the exception), but the exact line remains unclear.
  • What about undefined terms, such as the meaning of “lawful Government purpose”? What if contractors disagree with the government about what constitutes a lawful purpose? And with respect to the unbiased/ideological-neutrality requirement, how are “ideological dogmas” or “ideological judgments” defined?
  • What authority, particularly under the GSA MAS program (which supports purchasing on commercial terms and conditions), does the government have to require potential deviations from commercial practices? Under FAR 12.302(c), contracting officers must obtain a waiver before they can “tailor any clause or otherwise include any additional terms or conditions in a solicitation or contract for commercial products or commercial services in a manner that is inconsistent with customary commercial practice for the item being acquired[.]”

Unfortunately, these are just a few of the questions and issues presented in this draft clause, as many more are likely to arise across industry.