
Anthropic vs. The Pentagon, Claude Outpaces ChatGPT, and Consulting Gets Replaced | #234
March 3, 2026
By C. Rich
In late February 2026, the Pentagon sought contracts to integrate AI into classified military systems. Anthropic (maker of Claude) refused terms that would permit unrestricted use, insisting specifically on prohibitions against mass domestic surveillance of U.S. persons and against fully autonomous weapons (systems that select and engage targets without meaningful human oversight). The government rejected these restrictions, designated Anthropic a “supply chain risk,” and ordered federal agencies to cease using its tools.
Shortly thereafter, OpenAI announced an agreement allowing its AI systems (including ChatGPT-derived models) to be deployed on the Pentagon’s classified networks. OpenAI’s CEO, Sam Altman, stated that the contract incorporated similar safety principles: prohibitions on domestic mass surveillance and requirements for human responsibility over any use of force, including by autonomous weapons. The company described technical safeguards, deployment oversight, and the restriction of its systems to cloud environments.
However, the publicly shared contract language permits the Department of Defense to use the AI for all lawful purposes, consistent with existing law (e.g., the Fourth Amendment, the Foreign Intelligence Surveillance Act of 1978, and Executive Order 12333). Critics argue this effectively defers to government interpretations of legality rather than imposing absolute red lines. Following backlash, including user boycotts and internal concerns, OpenAI amended the agreement (as of early March 2026): the new language explicitly prohibits intentional domestic surveillance of U.S. persons and nationals, including via commercially acquired data, and clarifies that intelligence agencies such as the NSA would require separate agreements.
Regarding the USA PATRIOT Act of 2001: it is not explicitly mentioned in OpenAI’s announcements or contract excerpts. The Patriot Act expanded government surveillance powers after 9/11, including access to business records and broader data collection under certain conditions, often tied to national security or foreign intelligence. Some commentary suggests that OpenAI’s reliance on “applicable laws” could indirectly accommodate surveillance programs justified under such statutes (i.e., whatever authorities deem “lawful”), though OpenAI’s amendments emphasize compliance with constitutional protections and prohibit unconstrained monitoring.
OpenAI did not “agree to let the government use their company” in an unqualified sense; rather, it negotiated terms for military deployment with claimed safeguards. The arrangement has sparked debate over whether these protections are robust or merely restate existing legal limits, which have historically allowed extensive surveillance programs. No evidence indicates OpenAI surrendered operational control or agreed to unrestricted government access beyond the negotiated scope.
This situation remains fluid, with ongoing criticism from tech employees, users, and observers regarding the ethics of AI in military contexts.


