Pentagon Designates Anthropic Supply Chain Risk Over AI Military Dispute
On February 28, 2026, Anthropic responded to a directive from U.S. Secretary of Defense Pete Hegseth designating the company a “supply chain risk” amid a dispute over military use of its AI model, Claude. Anthropic stated, “This action follows months of negotiations that reached an impasse over two exceptions we requested to the lawful use of our AI model: the mass domestic surveillance of Americans and fully autonomous weapons.”
The designation follows weeks of discussions between the Pentagon and Anthropic concerning the military’s use of its AI models. Anthropic emphasized that its contracts should not enable mass domestic surveillance or the development of autonomous weapons, arguing that the technology is not sufficiently reliable for such purposes. The company stated, “Using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties.”
In a social media post on Truth Social, President Donald Trump ordered all federal agencies to phase out the use of Anthropic technology within six months. Following this, Hegseth mandated that all contractors, suppliers, and partners working with the U.S. military immediately cease any commercial activity with Anthropic. Hegseth wrote, “In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply Chain Risk to National Security.”
Anthropic criticized the designation as “legally unsound” and warned it could set a dangerous precedent for American companies negotiating with the government. Sean Parnell, the Pentagon’s chief spokesperson, called that narrative “fake,” clarifying that the department does not intend to conduct mass domestic surveillance or to deploy autonomous weapons without human involvement. He stated, “Allow the Pentagon to use Anthropic’s model for all lawful purposes. This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk.”
The ongoing stalemate has polarized the tech industry, with hundreds of employees at Google and OpenAI signing an open letter urging their companies to support Anthropic in its conflict with the Pentagon over military applications of AI tools like Claude. The standoff comes as OpenAI CEO Sam Altman announced an agreement with the U.S. Department of Defense to deploy OpenAI’s models on the department’s classified network, emphasizing that AI safety and the wide distribution of benefits are core to the company’s mission.