Anthropic is facing the prospect of being frozen out of US government work after refusing to relax safeguards on how its AI can be used by the newly rebranded Department of War – a clash whose ramifications extend beyond Anthropic to OpenAI and other foundational model companies globally as AI becomes more pervasive on the modern battlefield.
The dispute, which escalated publicly last week, centres on Anthropic’s refusal to amend contract language to allow what the DoW describes as “any lawful purpose” use of its models.
The company says its Claude system is already deployed inside classified government networks, supporting intelligence analysis, operational planning, and cyber tasks. But what it will not do, Anthropic said, is strip out protections that prohibit using Claude for mass domestic surveillance or fully autonomous weapons.
Not content with arch-rival Anthropic basking in ethical sunlight, OpenAI wasted no time wading into the story.
Before we go any further, it’s important to note that the standoff is largely theoretical at this point. The contract would be a framework for future services, and we have seen how even some of the biggest IT deals, covering far more mature technology than AI, can get scrapped and never make it out of the negotiating room.
Still, the conversations have implications and highlight some of the tensions inherent in how AI and defence intersect – which they are doing more and more every day. And so, the story is already reverberating beyond Washington.
Old World, meet new challenges
In Europe, there has been no equivalent public rupture between European defence ministries and frontier model providers, but there are three immediate questions hanging in the air:
Given European regulations and Old World scepticism, is the region more aligned with AI safety as laid out by Anthropic? What would European governments ask of AI companies when it comes to surveillance and building lethal weapons? And how much has been done already on this front?
On the first of these, some have highlighted how Anthropic’s refusal to dilute its safeguards is more closely aligned with Europe’s stricter posture on surveillance and high-risk AI under the EU AI Act. This makes Europe start to look like the steadier regulatory partner when it comes to aligning on safety. That has spilled into hypothetical chatter (one example linked above, and another here) about whether there is a way to lure Anthropic over to Europe wholesale. (No way.)
We’d love to know what Mistral thinks of that idea – not least because right now it has some pretty plain sailing ahead of it as virtually the only credible, hitting-for-the-rafters foundational AI player homegrown in Europe.
That brings us to the second and third questions. The long and short is that governments in Europe have been working and will continue to work with AI firms. That will inevitably include defence departments and militaries.
Multiple countries in Europe have already pushed the boat out on building AI for public services.
Taking just a few examples, the UK inked a wide-ranging (but not legally binding, the government noted) memorandum of understanding with OpenAI in 2025 to explore AI opportunities at every juncture. OpenAI, on the heels of closing a monster $110 billion funding round last week, announced plans a few days ago to make London its biggest R&D hub outside of the US. Anthropic is also pursuing UK government deals.
Over on the continent, France and Germany have launched a joint project with SAP and Mistral AI. This, too, is not billed as a defence programme, yet it shows Europe wants to anchor advanced AI infrastructure within domestic industrial and legal frameworks.
Yet there are already signs of defence deals, too. In January 2026, Mistral announced an agreement with France’s Ministry of Defence “to strengthen France’s Defence capabilities through advanced AI solutions.”
The deal is big on top-line concepts and very short on detail for now, so we don’t know if kill chains are on the agenda.
Strategic autonomy – building AI services that are not interlinked or dependent on third parties outside of the country – is a main goal. Mistral notes that the “collaboration will provide the French Armed Forces – alongside research institutions and public agencies operating within the Ministry – with access to our models, solutions, and experts.”
AMIAD, an agency within the defence ministry focusing on AI, will oversee how this works, it noted. But that’s about all the detail that has been provided so far, so we don’t know what the scope of the deal could include, nor whether Mistral would be okay with contributing to domestic surveillance or lethality – the two deal-breakers for Anthropic.
(We have reached out to both Anthropic and Mistral and will update this story when and if we hear more.)
In the UK, the position remains exploratory. The MoD is pursuing its own AI modernisation agenda, but it has not publicly announced integration of Claude- or GPT-class systems into classified defence networks on the scale described in the US.
How far the dispute will reverberate in the US is still up in the air, and for now everyone is posturing.
At its most extreme, senior officials have indicated that if Anthropic does not fall in line, it could be designated a national security “supply chain risk” and effectively cut off from federal contracts. That would be very notable domestic hardball if it stuck: that label is normally reserved for foreign adversaries, not US AI firms. So far, the White House has backed the DoW’s position, and Anthropic has indicated it would challenge any such move in court.
And President Trump has been characteristically tactful on the situation, noting on his own social media platform Truth Social, “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the [Pentagon], and force them to obey their Terms of Service instead of our Constitution.”
At the same time, OpenAI has found itself fielding questions about its own Department of War deal. The company had said its models would be deployed inside classified systems and stressed that everything would operate within existing law. That reassurance did not satisfy everyone, including the company’s own employees.
Earlier today, OpenAI CEO Sam Altman finally conceded the announcement had been rushed and said the contract language was being revised. The new version closes off domestic surveillance and forces any intelligence agency work into a separate agreement. It remains to be seen whether this leads to a de-escalation between the government and Anthropic.
The whole episode showed how quickly these defence relationships can turn political.
For now, neither company is proposing to hand lethal authority to a language model. Anthropic argues its limits have not stopped Claude from being used for intelligence and planning work, and OpenAI says much the same about its own systems. What is being fought over is the fine print: how far the government’s discretion extends once a model is deployed.
None of this is happening in a vacuum. The US military has been using machine learning for years to sift imagery and prioritise data. What changes now is not the existence of AI, but how central these newer models could become.
What makes this episode different is the type of systems now in play. Claude and GPT-class models are general-purpose tools capable of drafting assessments, summarising classified material, and generating planning options at scale. As they move closer to core analytical functions, the contractual boundaries around their use become more than legal fine print.
Kill chain questions remain
More generally, the debate about AI inside what the military calls the “kill chain” predates the rise of large language models. It has surfaced in arguments over autonomous drones, targeting software, and missile defence systems. The terminology has shifted from “human in the loop” to “human on the loop”, but the central tension remains speed versus control. For every AI startup that may take issue with its AI being used for violence, there are others specifically developing AI for that very purpose.
Pentagon officials argue that the US cannot afford to fall behind adversaries integrating AI into command-and-control systems. Critics counter that expanding AI’s role without hard limits risks normalising opaque systems that are difficult to audit once operational.
Gabrielle Hempel, security operations strategist at Exabeam, sees a familiar pattern.
“In cybersecurity, we have all become very familiar with voluntary guardrails collapsing under competitive pressure. In the AI ‘space race’, safety promises slow companies down,” she told Resilience Media.
“Anthropic is essentially shifting from ‘we won’t deploy vulnerable systems’ to ‘we’ll publish good post-incident reports.’ From a defensive standpoint, this indicates that capability development is outpacing risk mitigation. When companies move from prevention language to transparency language, it’s usually a pretty good indication that they no longer think the risk can be contained.”
Her assessment speaks to the commercial pressure behind the rhetoric. Frontier AI firms are competing not only on model performance but on access to classified environments and long-term government contracts. As those relationships deepen, the tension between voluntary company-imposed restrictions and state demands is likely to intensify.
Even so, these questions rarely remain confined to a single state.
Precedent set by the Pentagon can influence procurement, interoperability within NATO systems, and export controls elsewhere, meaning the lines drawn in Washington may ultimately shape decisions across Europe as well.
Rows over contract wording are nothing new in defence, and they usually end with revised clauses and everyone moving on. What is different here is who is involved – these are not niche subcontractors, but rather the companies building the core AI systems Washington now wants inside sensitive networks.
As these systems become more capable and more deeply embedded in intelligence and planning workflows, the fight over who controls their boundaries is unlikely to disappear.