Resilience Media

Anthropic, OpenAI, and the new rules of Defence AI

Anthropic’s refusal to loosen AI safeguards for the US Department of War has sparked a public clash, while OpenAI has revised its defence deal after backlash, exposing growing tension over who controls the limits of defence AI

by Carly Page and Ingrid Lunden
March 3, 2026
in News

Anthropic is facing the prospect of being frozen out of US government work after refusing to relax safeguards on how its AI can be used by the newly rebranded Department of War – a clash that has bigger ramifications for it, OpenAI and other foundational model companies globally as AI becomes more pervasive on the modern battlefield.

The dispute, which escalated publicly last week, centres on Anthropic’s refusal to amend contract language to allow what the DoW describes as “any lawful purpose” use of its models.

The company says its Claude system is already deployed inside classified government networks, supporting intelligence analysis, operational planning, and cyber tasks. But what it will not do, Anthropic said, is strip out protections that prohibit using Claude for mass domestic surveillance or fully autonomous weapons.

Not content with arch-rival Anthropic basking in ethical sunlight, OpenAI wasted no time wading into the story.

Before we go any further, it’s important to note that the standoff is largely theoretical at this point. The contract would be a framework for future services, and we have seen how even some of the biggest IT deals, covering far more mature technology than AI, can get scrapped and never make it out of the negotiating room.

Still, the conversations have implications and highlight some of the tensions inherent in how AI and defence intersect – which they are doing more and more every day. And so, the story is already reverberating beyond Washington.

Old World, meet new challenges

In Europe, there has been no equivalent public rupture between European defence ministries and frontier model providers, but there are three immediate questions hanging in the air:

Given European regulations and old world skepticism, is the region more aligned with AI safety as laid out by Anthropic? What would European governments ask of AI companies when it comes to surveillance and building lethal weapons? And how much has been done already on this front?

On the first of these, some have highlighted how Anthropic’s refusal to dilute its safeguards is more closely aligned with Europe’s stricter posture on surveillance and high-risk AI under the EU AI Act. That makes Europe start to look like the steadier regulatory partner on safety, and it has spilled into hypothetical chatter about whether there is a way to lure Anthropic over to Europe wholesale. (No way.)

We’d love to know what Mistral thinks of that idea – not least because it currently has some pretty plain sailing ahead of it as virtually the only credible, swing-for-the-fences foundational AI player homegrown in Europe.

That brings us to the second and third questions. The long and short of it is that governments in Europe have been working, and will continue to work, with AI firms. That will inevitably include defence departments and militaries.

Multiple countries in Europe have already pushed the boat out on building AI for public services.

Taking just a few examples, the UK inked a wide-ranging — but not legally binding, the government noted — memorandum of understanding with OpenAI in 2025 to explore AI opportunities at every juncture. OpenAI, on the heels of closing a monster $110 billion funding round last week, announced plans a few days ago to make London its biggest R&D hub outside the US. Anthropic is also pursuing UK government deals.

Over on the continent, France and Germany have launched a joint project with SAP and Mistral AI. This, too, is not billed as a defence programme, yet it shows Europe wants to anchor advanced AI infrastructure within domestic industrial and legal frameworks.

Yet there are already signs of defence deals, too. In January 2026, Mistral announced an agreement with France’s Ministry of Defence “to strengthen France’s Defence capabilities through advanced AI solutions.”

The deal is big on top-line concepts and very short on detail for now, so we don’t know if kill chains are on the agenda.

Strategic autonomy – building AI services that are not interlinked or dependent on third parties outside of the country – is a main goal. Mistral notes that the “collaboration will provide the French Armed Forces – alongside research institutions and public agencies operating within the Ministry – with access to our models, solutions, and experts.”

AMIAD, an agency within the defence ministry focusing on AI, will oversee how this works, it noted. But that’s about all the detail that has been provided so far, so we don’t know what the scope of the deal could include, nor whether Mistral would be okay with contributing to domestic surveillance or lethality – the two deal-breakers for Anthropic.

(We have reached out to both Anthropic and Mistral and will update this story when and if we hear more.)

In the UK, the position remains exploratory. The MoD is pursuing its own AI modernisation agenda, but it has not publicly announced integration of Claude- or GPT-class systems into classified defence networks on the scale described in the US.

How far the dispute will reverberate in the US is still up in the air, and for now everyone is posturing.

At its most extreme, senior officials have indicated that if Anthropic does not fall in line, it could be designated a national security “supply chain risk” and effectively cut off from federal contracts. That would be very notable domestic hardball if it stuck: that label is normally reserved for foreign adversaries, not US AI firms. So far, the White House has backed the DoW’s position, and Anthropic has indicated it would challenge any such move in court.

And President Trump has been characteristically tactful on the situation, noting on his own social media platform Truth Social, “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the [Pentagon], and force them to obey their Terms of Service instead of our Constitution.”

At the same time, OpenAI has found itself fielding questions about its own Department of War deal. The company had said its models would be deployed inside classified systems and stressed that everything would operate within existing law. That reassurance did not satisfy everyone, including the company’s own employees.

OpenAI CEO Sam Altman earlier today conceded the announcement had been rushed and said the contract language was being revised. The new version closes off domestic surveillance and forces any intelligence agency work into a separate agreement. It remains to be seen whether this leads to a de-escalation between the government and Anthropic.

The whole episode showed how quickly these defence relationships can turn political.

For now, neither company is proposing to hand lethal authority to a language model. Anthropic argues its limits have not stopped Claude from being used for intelligence and planning work, and OpenAI says much the same about its own systems. What is being fought over is the fine print: how far the government’s discretion extends once a model is deployed.

None of this is happening in a vacuum. The US military has been using machine learning for years to sift imagery and prioritise data. What changes now is not the existence of AI, but how central these newer models could become.

What makes this episode different is the type of systems now in play. Claude and GPT-class models are general-purpose tools capable of drafting assessments, summarising classified material, and generating planning options at scale. As they move closer to core analytical functions, the contractual boundaries around their use become more than legal fine print.

Kill chain questions remain

More generally, the debate about AI inside what the military calls the “kill chain” predates the rise of large language models. It has surfaced in arguments over autonomous drones, targeting software, and missile defence systems. The terminology has shifted from “human in the loop” to “human on the loop”, but the central tension remains speed versus control. For every AI startup that may take issue with its AI being used for violence, there are others specifically developing AI for that very purpose.

Pentagon officials argue that the US cannot afford to fall behind adversaries integrating AI into command-and-control systems. Critics counter that expanding AI’s role without hard limits risks normalising opaque systems that are difficult to audit once operational.

Gabrielle Hempel, security operations strategist at Exabeam, sees a familiar pattern.

“In cybersecurity, we have all become very familiar with voluntary guardrails collapsing under competitive pressure. In the AI ‘space race’, safety promises slow companies down,” she told Resilience Media.

“Anthropic is essentially shifting from ‘we won’t deploy vulnerable systems’ to ‘we’ll publish good post-incident reports.’ From a defensive standpoint, this indicates that capability development is outpacing risk mitigation. When companies move from prevention language to transparency language, it’s usually a pretty good indication that they no longer think the risk can be contained.”

Her assessment speaks to the commercial pressure behind the rhetoric. Frontier AI firms are competing not only on model performance but on access to classified environments and long-term government contracts. As those relationships deepen, the tension between voluntary company-imposed restrictions and state demands is likely to intensify.

Even so, these questions rarely remain confined to a single state.

Precedent set by the Pentagon can influence procurement, interoperability within NATO systems, and export controls elsewhere, meaning the lines drawn in Washington may ultimately shape decisions across Europe as well.

Rows over contract wording are nothing new in defence, and they usually end with revised clauses and everyone moving on. What is different here is who is involved – these are not niche subcontractors, but rather the companies building the core AI systems Washington now wants inside sensitive networks.

As these systems become more capable and more deeply embedded in intelligence and planning workflows, the fight over who controls their boundaries is unlikely to disappear.

 

Tags: AI, Anthropic, Army, defence tech, Department of War, kill chain, Ministry of Defence, OpenAI
Carly Page

Carly Page is a freelance journalist and copywriter with 10+ years of experience covering the technology industry, and was formerly a senior cybersecurity reporter at TechCrunch. Bylines include Forbes, IT Pro, LeadDev, The Register, TechCrunch, TechFinitive, TechRadar, TES, The Telegraph, TIME, Uswitch, WIRED, & more.

Ingrid Lunden

Ingrid is an editor and writer. Born in Moscow, brought up in the U.S., and now based in London, she worked at leading technology publication TechCrunch from February 2012 to May 2025, initially as a writer and eventually as one of its managing editors, leading the company’s international editorial operation as part of its senior leadership team. She speaks Russian, French and Spanish and takes a keen interest in the intersection of technology with geopolitics.

Resilience Media is an independent publication covering the future of defence, security, and resilience. Our reporting focuses on emerging technologies, strategic threats, and the growing role of startups and investors in the defence of democracy.

© 2026 Resilience Media
