At Bridgers Agency, we help businesses integrate AI solutions into their operations. Choosing an AI provider is never a neutral decision, and the Anthropic vs Pentagon standoff has just proven that in dramatic fashion. For any company relying on AI in its workflows - including Emelia.io, which uses artificial intelligence for B2B prospecting - understanding this controversy matters. Beyond the political showdown, this is fundamentally about the reliability and longevity of the AI tools your business depends on.
The story begins with a contractual disagreement that spiraled into a full-blown political crisis. In July 2025, Anthropic secured a $200 million contract to provide its frontier AI models to the Department of Defense, alongside OpenAI, Google, and xAI. Claude became the first major AI model deployed on the Pentagon's classified networks, through a partnership with Palantir.
But negotiations broke down over two red lines that Anthropic refused to cross. CEO Dario Amodei said he could not, "in good conscience," authorize Claude's use for two specific purposes:
- Mass domestic surveillance of American citizens - using AI to automatically compile disparate, seemingly harmless data into a detailed portrait of any individual's life at massive scale
- Fully autonomous weapons - systems capable of making the decision to strike targets without direct human oversight
The Pentagon's position was clear: no private contractor should dictate how its tools are used. Emil Michael, the Department of Defense's Chief Technology Officer (and former Uber executive), argued that Anthropic's technology should be treated like any other tool, governed solely by U.S. law rather than a company's internal usage policies.
The New York Times revealed the details of the collapse. On February 24, 2026, Defense Secretary Pete Hegseth convened a meeting with Anthropic. The atmosphere was tense, and the meeting lasted under an hour.
Anthropic had made significant concessions. The company was willing to let its technology be used by the National Security Agency on material collected under the Foreign Intelligence Surveillance Act. In return, it asked only for a legally binding guarantee that Claude would not be used on unclassified commercial data.
At that point, Emil Michael requested to speak directly with Dario Amodei. Anthropic's CEO was not immediately available. Shortly after, Hegseth declared the negotiations over.
On Friday, February 28, the Pentagon imposed a 5:00 PM deadline demanding Anthropic lift all restrictions. That same day, President Donald Trump posted on Truth Social ordering all federal agencies to stop using Anthropic's technology, calling the company "woke" and "radical left." Trump wrote: "We don't need them, we don't want them, and we will not do business with them again!"
Hegseth then announced on X that he was designating Anthropic a "Supply-Chain Risk to National Security."
This is the central legal question of the entire dispute. The supply chain risk designation is a mechanism under Section 3252 of Title 10 of the U.S. Code. It allows the Secretary of Defense to exclude a supplier from government contracts for covered national security systems.
The critical point: this designation had never been applied to an American company. It was historically reserved for foreign firms tied to U.S. adversaries, such as Huawei (China) or Kaspersky (Russia).
Senator Kirsten Gillibrand, a member of both the Senate Armed Services and Intelligence Committees, called the move "a dangerous misuse of a tool designed to address technology controlled by adversaries."
In a detailed legal analysis on Lawfare, Alan Rozenshtein, professor at the University of Minnesota Law School, highlighted the fundamental incoherence of the Pentagon's position: "It is completely insane to simultaneously say this product is so important we're going to force you to give it to us, so safe we're going to use it during an active military engagement, and so dangerous we're going to burn you to the ground. You obviously can't have all three at the same time."
Anthropic has confirmed it will challenge the designation in court. Legal experts point to several strong arguments.
The ultra vires argument: The supply chain risk statute was not designed to apply to U.S. companies. A court could rule that the government is invoking a law that simply does not apply to this situation. As Rozenshtein explained, "It's a much stronger argument for Anthropic to go in and say this just doesn't apply to us. This is a classic example of ultra vires action."
The arbitrary and capricious argument: Under the Administrative Procedure Act, a government decision can be struck down if it is "arbitrary and capricious." The fact that the Pentagon is actively using Claude for military operations in Iran while declaring it a "security risk" is, as Lawfare editor-in-chief Benjamin Wittes noted, "almost the definition of arbitrary." Compounding this, the Pentagon delayed enforcement for six months, raising the question of how something can pose an immediate supply chain risk if you plan to keep using it for half a year.
The least restrictive means requirement: The statute, 10 U.S.C. § 3252, requires the use of the "least restrictive means necessary." Designating an American company a supply chain risk over a contractual disagreement, the argument goes, plainly exceeds that standard.
Anthony Kuhn, managing partner at law firm Tully Rinckey in New York, warned that Anthropic would "likely file suit against everybody who's involved and just get their money one way or another."
A group of 30 former military, intelligence, and tech policy leaders sent a joint letter to Congress demanding an investigation into the "dangerous precedent" set by the Pentagon's actions.
While Anthropic's legal team was drafting its lawsuit on Friday evening, OpenAI CEO Sam Altman was on the phone with Emil Michael finalizing a deal with the Department of Defense. Altman announced the agreement on social media shortly after, and Hegseth shared the announcement on his personal account.
OpenAI's timing triggered a wave of backlash. Altman himself admitted the company "shouldn't have rushed" and that the timing "just looked opportunistic and sloppy." Hundreds of OpenAI employees signed an open letter supporting Anthropic. Researcher Aidan McLaughlin posted on X: "This deal was not worth it" - a message viewed nearly 500,000 times. Chalk graffiti criticizing OpenAI even appeared outside its San Francisco offices.
Jonathan Iwry, a fellow at the Wharton School's Accountable AI Lab, criticized the broader industry response: "What is particularly disappointing is that the rest of the AI industry failed to come to Anthropic's support. Instead, they let the administration play them off against one another as market competitors."
Facing the pressure, OpenAI renegotiated its contract to add explicit prohibitions against using its AI for "domestic surveillance of U.S. persons and nationals." The revised agreement also bars Defense Intelligence Components (the NSA, NGA, and DIA) from using OpenAI's services without a separate contract modification.
Charlie Bullock, senior research fellow at the Institute for Law & AI, noted: "This seems like a significant improvement over the previous language with respect to surveillance. It does not address autonomous weapons concerns, nor does it claim to."
One of the most striking aspects of the affair is that Claude continued to be used by the U.S. military for operations in Iran even after the ban was announced.
According to the Washington Post, Palantir's Maven Smart System, powered by Claude, proposed hundreds of targets for the U.S. military, prioritized them by importance, and provided location coordinates. The military struck over 1,000 targets within the first 24 hours of its offensive, leveraging Claude's analytical capabilities.
Sources close to the situation indicate that Claude will not be removed from operations until the Pentagon finds a replacement with comparable capabilities. According to Defense One estimates, it would take at least three months, and potentially a year or longer, to fully replace Claude's capabilities on classified networks.
The good news for commercial users: the scope of the designation is narrower than the political rhetoric suggested. According to the letter Anthropic received from the Pentagon, the designation only applies to use of Claude "as a direct part of" contracts with the Department of Defense.
Amodei clarified: "Even for Department of Defense contractors, the supply chain risk classification does not (and cannot) restrict the use of Claude or business dealings with Anthropic if those are not directly related to their specific Department of Defense contracts."
Microsoft confirmed its lawyers studied the rule and the company "can continue to work with Anthropic on non-defense related projects." Amazon and Google also issued statements confirming that Anthropic products would remain available through their platforms for all non-Department of Defense uses.
However, uncertainty remains. Approximately 80% of Anthropic's revenue comes from enterprise sales, with an estimated annual revenue run rate of around $19 billion. Ten portfolio companies at J Ventures that work with the Department of Defense have already "reduced their reliance on Claude for defense-related applications and are actively seeking to replace it with another service."
| Criteria | Anthropic (Claude) | OpenAI (GPT) |
|---|---|---|
| Position on mass surveillance | Categorical refusal | Prohibition added after renegotiation |
| Position on autonomous weapons | Categorical refusal | Not addressed in contract |
| Pentagon contract | $200M, now suspended | New contract signed |
| Classified network presence | First model deployed | Deployment in progress |
| Employee reaction | Internal support for CEO | Open letter criticizing leadership |
| Legal status | Suing the Pentagon | Contract renegotiated |
| Use in Iran operations | Still active via Palantir | Not yet operational |
The Chatham House analysis is unequivocal: "The most consequential decisions about how AI systems can and cannot be used - whether they can target or kill without human oversight or be used for mass domestic surveillance - are not being made in legislatures and international forums, but in contract negotiations."
Unlike nuclear technology, aviation, and pathogen research, which are all governed by binding international treaty regimes, AI has no equivalent regulatory framework. And the U.S.-China rivalry makes multilateral cooperation on this subject highly unlikely.
The Anthropic affair exposes a fundamental paradox: governments demand that AI companies build ethical guardrails into their products, then exempt themselves from those very constraints in the name of national security. As the Daily Economy put it: "If ethics are indispensable to safe AI, they are most indispensable where power is greatest and secrecy deepest."
The outcome of this confrontation remains uncertain. Several scenarios are emerging:
Anthropic's lawsuit: The court challenge is coming, and legal experts consider the company's arguments strong, particularly the fact that the law was never designed to target American companies.
Military transition: The Pentagon has six months to replace Claude, but experts estimate it could take much longer. In the meantime, Claude continues to be used in Iran operations.
Impact on Anthropic's IPO: With a $19 billion revenue run rate and an IPO in preparation, this crisis poses a significant risk to the company's valuation.
Precedent for the industry: If the Pentagon can force an American supplier to abandon its terms of use, it changes the rules of the game for the entire tech industry.
For businesses that rely on AI in their daily operations - from Emelia.io's prospecting tools to the automation solutions built by Bridgers Agency - this affair is a crucial reminder: choosing your AI provider is not just about technical performance. The governance, values, and political stability of your technology partner can have direct consequences on your business.
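One practical way to act on that lesson is to avoid hard-coding a single vendor into your stack. Below is a minimal, illustrative sketch of a provider-fallback wrapper in Python. It assumes the current v1-style `anthropic` and `openai` SDKs, and the model names are examples only; treat it as a starting point shaped by your own contracts and compliance requirements, not a definitive implementation.

```python
# Illustrative provider-fallback sketch (not an official pattern from either
# vendor). Assumes the v1-style `anthropic` and `openai` Python SDKs; the
# model names below are examples and should be replaced with your own.
from anthropic import Anthropic
from openai import OpenAI


def complete(prompt: str, max_tokens: int = 512) -> str:
    """Return a completion, falling back to a second provider on failure."""
    try:
        # Primary provider: reads ANTHROPIC_API_KEY from the environment.
        claude = Anthropic()
        msg = claude.messages.create(
            model="claude-sonnet-4-5",  # example model name
            max_tokens=max_tokens,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text
    except Exception:
        # Primary unavailable (outage, policy change, contract dispute):
        # degrade to the secondary provider instead of failing outright.
        fallback = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = fallback.chat.completions.create(
            model="gpt-4o",  # example model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content or ""
```

The point is not the specific SDK calls but the shape: if your prompts, evaluations, and logging live behind one interface, a forced provider change becomes a configuration update rather than a rewrite.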
