Anthropic, ride or die?
Pete Hegseth is on the warpath against the maker of the “frontier” AI model, Claude. An ultimatum has been issued. Here’s what’s going on.
Anthropic is under fire.
Pete Hegseth, the United States Secretary of War, has given the company until the end of the day to let his generals have unfettered access to its technology. If Anthropic refuses, Hegseth is threatening to implode its business.
This is Washington power politics on steroids, and what happens next could reshape the artificial intelligence market.
The US military and Anthropic are sparring over a $200 million contract to develop AI capabilities for national defense, awarded to Anthropic last year. A handful of other leading AI companies got similar contracts.
This was delicate territory for Anthropic, which has made safety its calling card. As a condition of the deal, Anthropic got the Pentagon to agree not to use its technology for domestic surveillance or fully autonomous weapons systems.
Those restrictions chafed with Pentagon officials like Emil Michael, a former Uber executive who serves as the department’s chief technology officer. He thinks the US military should not have to call Silicon Valley to ask permission to maximize national defense capabilities.
Using the military for domestic surveillance would be illegal regardless.
In January, the situation escalated. Hegseth, who has been on a campaign to make the military more “lethal” and less “woke,” signed a memo saying the department (formerly the Department of Defense) would no longer give AI companies a veto over how it uses their technology.
Instead, it would insist on being able to use powerful AI systems however it wants, as long as its own lawyers approve. Hegseth gave Pentagon officials 180 days to implement the new directive.
In early February, tense negotiations to resolve the impasse spilled into public view. On Tuesday, Hegseth summoned Anthropic’s CEO, Dario Amodei, to the Pentagon and delivered an ultimatum: if Anthropic didn’t cave by Friday, the Pentagon would invoke a Korean War-era law to force the company to hand over the keys to Claude — its foundational model technology.
The threat hinges on the Defense Production Act (DPA), a 1950s law that gives the President of the United States broad authority to force private companies to prioritize government orders over all others during a national crisis.
During the Covid-19 pandemic, the US government used the DPA to speed up facemask and ventilator production. The Biden administration also invoked the law in an executive order requiring AI companies to hand over details of their safety testing to the government. That order was later rescinded by the Trump administration.
Using the DPA to force Anthropic to let it access its AI models however it wants would address the Pentagon’s immediate concern about outsourcing military decisions to Silicon Valley. But it would amount to a partial nationalization of a leading US tech company’s core product.
Even more ominously, Hegseth threatened to designate Anthropic as a supply chain risk. This kind of blacklisting could pose huge risks for Anthropic’s business — and possibly threaten its very survival.
A law first passed in 2018 gives the Secretary of War broad powers to exclude risky vendors from doing business with the US government.
The law was aimed at Chinese companies like Huawei. Applying it to Anthropic would immediately doom the company’s $200 million Pentagon contract at a time when foundational model labs are under pressure to increase revenues. More importantly, it would probably force other companies that want to do business with the US military to stop working with Anthropic. That could be an existential issue for Amodei’s company.
The Pentagon is a sprawling, $1 trillion enterprise. Companies from internet hyperscalers to niche defense contractors compete fiercely for contracts to supply it with everything from explosives and fighter jets to supply chain management software and cloud services. Many companies in these industries may also use AI models or other products produced by Anthropic.
They would almost certainly rather drop Anthropic and switch to another AI company’s models than put big government contracts at risk.
So far, Amodei is sticking to his principles. Late on Thursday, Anthropic published a letter from Amodei, highlighting the company’s existing work on US defense and intelligence applications, while arguing that the company “cannot in good conscience accede” to the Pentagon’s demands.
He warned that AI was making it easy to take data that the US government is collecting on American citizens and turn it into a “comprehensive picture of any person’s life — automatically and at massive scale.” He also warned that even powerful AI models were not yet reliable enough to substitute for human judgment on the battlefield.
Amodei also pointed out the contradiction in the Pentagon threatening to use emergency powers to commandeer Claude while simultaneously threatening to label the company a supply chain risk.
One former general who was involved in a previous bust-up between the Pentagon and Silicon Valley called Hegseth’s DPA threat “bizarre” and warned that blacklisting Anthropic would be akin to “shooting yourself in the foot.”
If Hegseth follows through on his threats, it could have long-term consequences for the AI sector and the US innovation landscape.
At the extreme, kneecapping a leading US AI company for stepping out of line with the war department could have a massive chilling effect on the sector. Anthropic’s immediate competitors would have a strong incentive to stay in line in the short term, but in the long run, innovative companies might think twice about doing business with the Pentagon.
Forcing Anthropic to hand over control of its most advanced software would raise new concerns about the Trump administration’s willingness to intervene in the private sector.
Artificial intelligence and supporting technologies like cloud computing aren’t like Cold War-era defense technologies, which were mainly developed by captive defense contractors.
They have instead been developed and scaled inside civilian tech companies, many of which had long been cautious about cooperating with the US military. That freewheeling, private-sector spirit is part of what enabled the US to establish the world’s most vibrant AI ecosystem.
That vibrancy could be at risk if companies have to worry about their source code being seized based on thin national security pretexts.
Moreover, several of the US’s close military allies also use Anthropic’s models. They might decide to abandon the company if it comes under further pressure from the Trump administration, damaging separate US efforts to get partners to embrace the American AI stack.
This is a prove-it or lose-it moment for a foundational model company that was founded on a commitment to ethics and safety.
We may know by the end of the day whether Anthropic has capitulated, struck a last-minute deal, or stood firm, daring the US government to seize control of one of the defining technologies of our time.





Good overview of the standoff. One angle that deserves more attention: the specific official driving this. Emil Michael, the Undersecretary who led the Anthropic negotiations, has a documented history at Uber that connects directly to the surveillance question. He proposed spending $1M on opposition researchers to dig into journalists' families ("Nobody would know it was us"), was involved in obtaining confidential medical records of a woman raped by an Uber driver in India, and left the day before Eric Holder's investigation report went public. Uber also maintained a real-time tracking tool called "God View." Now Michael is the one pushing to remove protections against AI-enabled mass surveillance of Americans. Sourced breakdown: https://theaiblindspot.substack.com/p/nobody-would-know-it-was-us