Can AI firms set limits on how and where the military uses their models? Anthropic is in heated negotiations with the Pentagon after refusing to comply with new military contract terms that would require it to loosen the guardrails on its AI models, allowing for “any lawful use,” even mass surveillance of Americans and fully autonomous lethal weapons.
Pentagon CTO Emil Michael is pushing for Anthropic to be designated a “supply chain risk” if it doesn’t comply, a label usually only given to national security threats. Anthropic’s rivals OpenAI and xAI have reportedly agreed to the new terms, but even after a White House meeting with Defense Secretary Pete Hegseth, Anthropic CEO Dario Amodei is still refusing to cross his company’s red line, stating that “threats do not change our position: we cannot in good conscience accede to their request.”
Follow along here for the latest updates on the clash between AI companies and the Pentagon…
- President Trump said “it’s possible” Anthropic and the Pentagon could reach a deal eventually.
During a television interview with CNBC, he said Anthropic, which has been enmeshed in a dramatic lawsuit with the Department of Defense, had a positive meeting at the White House. Anthropic had come to discuss Mythos, its buzzy private model. “We had some very good talks with them, and I think they’re shaping up,” he said. “They’re very smart, and I think they can be of great use.”
- The NSA reportedly has access to Anthropic’s Mythos despite the company’s supply-chain risk label.
Sources told Axios that the agency was among the roughly 40 organizations granted access. This, despite the Pentagon arguing that Anthropic is a threat to national security. The NSA has reportedly been using it primarily to identify vulnerabilities in its own network, but considering its track record, it’s understandable if you’re wary.
Anthropic’s new cybersecurity model could get it back in the government’s good graces

Image: Cath Virginia / The Verge, Getty Images
The Trump administration has spent nearly two months fighting with AI company Anthropic. It’s dubbed the company a “RADICAL LEFT, WOKE COMPANY” full of “Leftwing nut jobs” and a menace to national security. But some of the ice may reportedly be melting between the two, thanks to Anthropic’s buzzy new cybersecurity-focused model: Claude Mythos Preview.
Anthropic’s relationship with the Pentagon soured quickly in late February after the company refused to budge on two red lines: using its technology for domestic mass surveillance or for fully autonomous lethal weapons with no human in the loop. Anthropic’s tech has been used heavily by the DoD in the past, and it was the first company to have its models cleared to operate on classified military networks. The stalemate led to public insults on social media, Anthropic being categorized as a “supply chain risk,” the company filing a lawsuit fighting that designation, and a temporary injunction halting its ban.
- Anthropic loses an appeal attempting to pause its supply chain risk designation.
As a result, “the company will continue to be excluded from new contracts and Pentagon systems,” according to The Wall Street Journal.
Court Denies Anthropic Request to End Defense Department Punishment [The Wall Street Journal]
Judge sides with Anthropic to temporarily block the Pentagon’s ban

Image: Cath Virginia / The Verge, Getty Images
After Anthropic’s weekslong standoff with the Pentagon, the company notched a victory: a judge granted Anthropic a preliminary injunction in its lawsuit, which sought to reverse its government blacklisting while the judicial process plays out.
“The Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press,’” Judge Rita F. Lin, a district judge in the Northern District of California, wrote in the order, which will go into effect in seven days. “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.”
Senate Democrats are trying to ‘codify’ Anthropic’s red lines on autonomous weapons and mass surveillance

Image: The Verge, Getty Images
Anthropic’s fight with the Pentagon is expanding to Congress. Sen. Adam Schiff (D-CA) is working on a new bill to “codify” Anthropic’s red lines and ensure humans make the ultimate decisions in questions of life and death, and Sen. Elissa Slotkin (D-MI) recently introduced a bill to limit the Defense Department’s ability to use AI for mass surveillance of Americans.
The Trump administration blacklisted Anthropic earlier this month, designating it a supply-chain risk after the company set limits on how the military could use its AI models. Anthropic has filed suit, accusing the government of violating its constitutional rights. The company has insisted that the Pentagon avoid using its products for fully autonomous weapons and mass domestic surveillance, resisting a deal signed by major competitor OpenAI. Anthropic is waiting to hear if a court will block the administration’s decision to label it a supply chain risk.
David Sacks’ big Iran warning gets big time ignored


Sec. Scott Bessent, President Donald Trump, and David Sacks during The White House Digital Assets Summit in Washington, DC, on Friday, March 7, 2025. Chris Kleponis/CNP/Bloomberg via Getty Images
Hello and welcome to Regulator, a newsletter for Verge subscribers about the politics of technology and the technology of politics — now landing in your inbox on Wednesdays! If someone has forwarded this email to you, and you’re not a Verge subscriber yet, you should sign up right here, and not just because it would be really, really cool if you do that. We can apparently see how many non-subscribers have opened this email, and why should Palantir get all the “spying on people” fun?
Do you have cool events to highlight, tips to toss over, and secrets to spill? Send everything to [email protected]. Or, if you’re truly tech-pilled, send me a message on LinkedIn.
Read Article >The mysterious case of the DHS white supremacist memelord


Department of Homeland Security. Image: The Verge
Hello and welcome to Regulator, a newsletter that takes Verge subscribers into the smoke-filled back rooms of Washington as they become the Zyn-filled back rooms of Washington. (Although Tucker Carlson’s ALP is more popular among a certain set.) Not a Verge subscriber yet? Sign up here today! I’m not exposing myself to all these carcinogens for nothing.
Do you know something that’s not in any of those rooms? Send all tips to [email protected], or to my Signal account @tina.nguyen19. At the very least, you’ll save me a dry cleaning bill.
There’s a story that I’ve been chasing for months, along with, apparently, every other political reporter in Washington covering the Trump administration: Who is responsible for the government’s racist tweets? Or more specifically: Who is the person within the Department of Homeland Security creating the memes with all the deep-cut white supremacist references?
Anthropic is launching a new think tank amid Pentagon blacklist fight

Image: Cath Virginia / The Verge, Getty Images
Amid a weekslong conflict with the Pentagon, resulting in a blacklist and a lawsuit, Anthropic is shaking up its C-suite and research initiatives. The company announced Wednesday that it’s launching a new internal think tank, called the Anthropic Institute, that combines three of Anthropic’s current research teams. It will focus on researching AI’s large-scale implications, such as “what happens to jobs and economies, whether AI makes us safer or introduces new dangers, how its values might shape ours, and whether we can retain control,” per the company.
The news comes with C-suite changes, too. Anthropic cofounder Jack Clark is moving into a new role leading the think tank. His new title will be head of public benefit, after more than five years as head of public policy. The public policy team — which tripled in size in 2025, per Anthropic — will now be led by Sarah Heck, who was formerly head of external affairs. Anthropic will also open its planned office in Washington, DC, and the public policy team will continue to focus on issues like national security, AI infrastructure, energy, and “democratic leadership in AI.”
Anthropic is suing the Department of Defense


Anthropic has sued the US government over its designation as a supply-chain risk, the latest move in a weekslong battle between it and the Pentagon over the acceptable use cases for its military AI tech. The suit, filed in a California district court, accuses the Trump administration of illegally punishing the company for setting “red lines” on mass domestic surveillance and fully autonomous weapons.
“The federal government retaliated against a leading frontier AI developer for adhering to its protected viewpoint on a subject of great public significance — AI safety and the limitations of its own AI models — in violation of the Constitution and laws of the United States,” the suit reads. “Defendants are seeking to destroy the economic value created by one of the world’s fastest-growing private companies, which is a leader in responsibly developing an emergent technology of vital significance to our Nation.”
- Anthropic usage is booming despite “supply-chain risk.”
The designation from the US Department of War (which is busy disrupting actual supply chains and human lives in several countries) is having the opposite of its intended effect, driving up demand for Claude, which has been breaking daily signup records since early last week in every country where it’s available.
AppFigures data also shows it topping App Store charts for free and AI apps in dozens of countries, including the US, Canada, and much of Europe.
- Anthropic responds to the Pentagon.
In a blog post, CEO Dario Amodei confirmed reports that the Defense Department had sent the company a letter formally designating it a supply-chain risk, and said Anthropic planned to challenge the designation in court. He also clarified how it would currently impact Claude users:
The language used by the Department of War in the letter (even supposing it was legally sound) matches our statement on Friday that the vast majority of our customers are unaffected by a supply chain risk designation. With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.
The Pentagon formally labels Anthropic a supply-chain risk


US Defense Secretary Pete Hegseth speaks during a press conference on US military action in Iran, at the Pentagon in Washington, DC, on March 2, 2026. Brendan Smialowski/AFP via Getty Images
After weeks of failed negotiations, public ultimatums, and lawsuit threats, the Defense Department has formally labeled Anthropic a “supply-chain risk,” escalating its dispute with the AI company over acceptable use policies and potentially pushing the fight to court.
The decision, first reported by The Wall Street Journal on Thursday, citing one source familiar with the matter, will bar defense contractors from working with the government if they use Claude, Anthropic’s AI program, in their products. Though the designation is typically applied to foreign companies with ties to adversarial governments, this is the first time an American company has publicly received the label.
Anthropic chief reportedly trying to salvage Pentagon AI deal

Image: The Verge
Anthropic CEO Dario Amodei is reportedly back at the negotiating table with the Department of Defense in an attempt to salvage the company’s relationship with the US military and prevent it from being iced out of defense work for being a “supply chain risk.” Talks between the two parties imploded on Friday after weeks of bitter public feuding over the startup’s refusal to grant the Pentagon unrestricted access to its AI, with rivals like OpenAI rushing to fill the void.
Amodei is in talks with undersecretary of defense for research and engineering Emil Michael about a new contract that would allow the US military to continue using Anthropic’s Claude AI models, according to the Financial Times, citing unnamed sources with knowledge of the matter. Michael attacked Amodei on social media last week amid a tense standoff over acceptable military uses of AI, calling the executive a “liar” with a “God-complex” and accusing him of “putting our nation’s safety at risk.”
- Defense contractors are already backing off on Claude.
Companies that do business with the US military are pivoting away from Anthropic’s AI after Defense Secretary Pete Hegseth announced he was designating it a “supply chain risk” last week, CNBC reports. While Anthropic can still challenge the designation in court, defense companies say they’re abandoning Claude preemptively “out of an abundance of caution.”
AI is now part of the culture wars — and real wars


US Defense Secretary Pete Hegseth takes questions during a press conference on US military action in Iran, at the Pentagon in Washington, DC, on March 2, 2026. Brendan Smialowski/AFP via Getty Images
Hello and welcome to Regulator, the newsletter for Verge subscribers that goes inside Washington’s increasingly existential clashes between tech and politics. If this was forwarded to you, can I interest you in a full-fledged subscription to The Verge for only $40 a year? You’ll get so much more than doomer scenarios. We cover non-existential fun stuff like Legos, too.
Do you work somewhere involving government, technology, and existential threats? Send all tips to [email protected], or to my Signal account @tina.nguyen19.
- Sam Altman said he planned to add two sentences to OpenAI’s agreement with the Pentagon.
The OpenAI CEO laid out some updated wording he hoped would address people’s concerns about mass domestic surveillance, though the new language still included the phrase “consistent with applicable laws.” Altman also said he reiterated over the weekend that Anthropic should not be designated a supply chain risk.
Sam Altman’s post [X (formerly Twitter)]
How OpenAI caved to the Pentagon on AI surveillance

Image: Cath Virginia / The Verge, Getty Images
On Friday evening, amid fallout from a standoff between the Department of Defense and Anthropic, OpenAI CEO Sam Altman announced that his own company had successfully negotiated new terms with the Pentagon. The US government had just moved to blacklist Anthropic for standing firm on two red lines for military use: no mass surveillance of Americans and no lethal autonomous weapons (or AI systems with the power to kill targets without human oversight). Altman, however, implied that he’d found a unique way to keep those same limits in OpenAI’s contract.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he added, using the Trump Administration’s preferred name for the Defense Department, the Department of War.
- The US used Anthropic AI for strikes in Iran despite ban.
On Friday, Donald Trump announced a ban on the federal government’s use of Claude. He later walked back his demand that agencies “IMMEDIATELY CEASE” using it, instead saying there would be a six-month phaseout. Part of that reversal might be because planning for Saturday’s strikes against Iran was already underway and relied on Claude for intelligence assessments and target identification. According to The Wall Street Journal:
Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools.
- A former Trump advisor calls the fight with Anthropic “attempted corporate murder.”
Dean Ball, who worked as a senior AI policy advisor, said on X that designating Anthropic as a “supply chain risk” or threatening to invoke the Defense Production Act could have a chilling effect on the entire industry. Alan Rozenshtein, a former DOJ official specializing in technology law, told Politico this could be the first step toward partial nationalization of the AI industry.
- OpenAI reached a new agreement with the Pentagon.
CEO Sam Altman wrote on X that the agreement allowed the US military to “deploy our models in their classified network.” He said the agreement reflects OpenAI’s desire for prohibitions on domestic mass surveillance and “human responsibility for the use of force, including for autonomous weapon systems.” Altman also wrote that OpenAI is “asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept.” This follows a rollercoaster week of negotiations between Anthropic and the Pentagon.
Sam Altman’s post [X (formerly Twitter)]
Defense secretary Pete Hegseth designates Anthropic a supply chain risk


US President Donald Trump (R) looks on as US Secretary of Defense Pete Hegseth speaks to the press following US military actions in Venezuela. AFP via Getty Images
Nearly two hours after President Donald Trump announced on Truth Social that he was banning Anthropic products from the federal government, Secretary of Defense Pete Hegseth took it one step further, announcing that he was designating the AI company a “supply-chain risk,” a move Anthropic says it is willing to challenge in court.
The decision could immediately impact numerous major tech companies that use Claude in their work for the Pentagon, including Palantir and AWS. It is not immediately clear to what extent the Pentagon may blacklist companies that contract with Claude for services outside of national security. Anthropic has responded, claiming the designation applies only to the use of its Claude AI on Department of Defense contract work specifically.
Trump orders federal agencies to drop Anthropic’s AI

Image: Cath Virginia / The Verge, Getty Images
On Friday afternoon, Donald Trump posted on Truth Social, accusing Anthropic, the AI company behind Claude, of attempting to “STRONG-ARM” the Pentagon and directing federal agencies to “IMMEDIATELY CEASE” use of its products. At issue is Anthropic CEO Dario Amodei’s refusal to sign an updated agreement permitting “any lawful use” of the company’s technology by the US military, as Defense Secretary Pete Hegseth mandated in a January memo, to the frustration of many tech workers across the industry.
As we explained earlier this week, that agreement would give the US military access to use the company’s services for mass domestic surveillance and lethal autonomous weapons, or AI that has full power to track and kill targets with no humans involved in the decision-making process. OpenAI and xAI have reportedly already agreed to the new terms, though OpenAI is reportedly looking to negotiate with the Pentagon to adopt the same red lines as Anthropic.
- Even Ilya Sutskever weighed in on the Anthropic-Pentagon situation.
The OpenAI co-founder, who left after CEO Sam Altman’s ouster and reinstatement, then founded his own AI startup, Safe Superintelligence, posted on X:
It’s extremely good that Anthropic has not backed down, and it’s significant that OpenAI has taken a similar stance.
In the future, there will be much more challenging situations of this nature, and it will be critical for the relevant leaders to rise up to the occasion, for fierce competitors to put their differences aside. Good to see that happen today.
We don’t have to have unsupervised killer robots

Image: Cath Virginia / The Verge
It’s the day of the Pentagon’s looming ultimatum for Anthropic: allow the US military unchecked access to its technology, including for mass surveillance and fully autonomous lethal weapons, or be designated a “supply chain risk” and potentially lose hundreds of billions of dollars in contracts. Amid the intensifying public statements and threats, tech workers across the industry are looking at their own companies’ government and military contracts, wondering what kind of future they’re helping to build.
While the Department of Defense has spent weeks negotiating with Anthropic over removing its guardrails, including allowing the US military to use Anthropic’s AI to kill targets with no human oversight, OpenAI and xAI had reportedly already agreed to such terms, though OpenAI is said to be attempting to adopt the same red lines as Anthropic in its agreements. The overall situation has left employees at some companies with defense contracts feeling betrayed. “When I joined the tech industry, I thought tech was about making people’s lives easier,” an Amazon Web Services employee told The Verge, “but now it seems like it’s all about making it easier to surveil and deport and kill people.”