Anthropic’s cybersecurity-focused AI model found 271 bugs in Firefox 150, Mozilla CTO Bobby Holley said, calling Claude Mythos Preview “every bit as capable” as top security researchers. Reassuringly, Mozilla hasn’t “seen any bugs that couldn’t have been found by an elite human researcher,” either.
During a television interview with CNBC, he said Anthropic, which has been enmeshed in a dramatic lawsuit with the Department of Defense, had a positive meeting at the White House. Anthropic had come to discuss Mythos, its buzzy private model. “We had some very good talks with them, and I think they’re shaping up,” he said. “They’re very smart, and I think they can be of great use.”
Sources told Axios that the agency was among the roughly 40 organizations granted access. This, despite the Pentagon arguing that Anthropic is a threat to national security. The NSA has reportedly been using it primarily to identify vulnerabilities in its own network, but considering its track record, it’s understandable if you’re wary.
Claude Design — powered by the company’s newest model, Opus 4.7 — allows users to create designs, prototypes, pitch decks, marketing materials, and more. It’s available in research preview for paying subscribers.
[Anthropic]
Despite Anthropic’s ongoing battle with the Pentagon, Bloomberg reports that the CIO of the White House Office of Management and Budget told government officials to prepare their agencies to use Anthropic’s cybersecurity-focused AI model.
Anthropic’s private cybersecurity-focused model is being used by a handful of large companies, including Nvidia, Apple, and JPMorgan Chase, to plug high-stakes vulnerabilities in their systems, creating a lot of buzz. On the podcast, I unpacked the model, the competition, and the stakes.
Anthropic says the changes to the desktop app make it easier to work on multiple tasks at once, with a new sidebar for managing sessions, a drag-and-drop layout for customizing the app’s workspace, and a built-in terminal and file editor.

OpenAI, Google, and Anthropic are eating the software world alive.

It’s a make-or-break year for Anthropic and OpenAI, which are facing more pressure than ever to make more cash than they burn.
As a result, “the company will continue to be excluded from new contracts and Pentagon systems,” according to The Wall Street Journal.
[The Wall Street Journal]


The “multiple gigawatts of next-generation TPU capacity” are expected to come online beginning in 2027 to “power our frontier Claude models.” The company also says that its run-rate revenue has surpassed $30 billion.
In addition to bringing Claude integration to Copilot for “long-running, multi-step tasks,” Microsoft is also launching an improved Researcher agent for information gathering and a new Critique feature, which essentially tasks GPT with drafting research and then has Claude give it an edit pass for accuracy.
[Microsoft 365 Blog]
The name of the new model will be “Mythos,” Fortune reported, and other internal information, like details of an invite-only CEO event, was available in an “unsecured data trove.”

Judge Lin wrote that “punishing Anthropic … is classic illegal First Amendment retaliation.”


Anthropic is seeking a preliminary injunction to block its designation as a military supply-chain risk, and it just faced off with the Trump administration before Judge Rita Lin, who’ll be making the call. A decision is anticipated in the next few days. For a sense of how the hearing went, you can check out Lawfare’s Molly Roberts, who live-posted it on Bluesky.


Anthropic filed a lawsuit earlier this month over its “supply chain risk” designation, but the Department of Defense held firm in a new court filing, alleging that the company could ostensibly “attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations” in the event it felt its red lines were “being crossed.” The filing added that the Pentagon “deemed that an unacceptable risk to national security.”
[CourtListener]

Techdirt’s Mike Masnick on the history of the NSA and mass surveillance in America, and why Anthropic’s fight with the Pentagon should worry us.
Claude can now work across Excel and PowerPoint, saving you from switching tabs or re-explaining datasets at every step. Anthropic said it’s Claude “carrying the conversation across apps without losing track of what’s happening in either.”

Co-founder Jack Clark, who will lead the new Anthropic Institute, said he had “no concerns” about research funding.


The multi-agent tool, called Code Review, should catch “bugs human reviewers often miss,” Anthropic said. Agents run in parallel and deliver a high-level overview, plus in-line comments for individual issues.
Code Review is available in research preview for Enterprise and Teams customers.


The Cowork integration was built in close collaboration with Anthropic and aims to help Copilot perform “long-running, multi-step tasks,” according to Microsoft’s announcement. The feature is in testing and will be available to preview later this month through Microsoft’s Frontier program.
The designation from the US Department of War — that’s busy disrupting actual supply chains and human life in several countries — is having the inverse effect of driving up demand for Claude, which has been breaking daily signup records since early last week in every country where Claude is available.
AppFigures data also shows it topping App Store charts for free and AI apps in dozens of countries, including the US, Canada, and much of Europe.
In a blog post, CEO Dario Amodei confirmed reports that the Defense Department had sent the company a letter formally designating it a supply-chain risk, and said Anthropic planned to challenge the move in court. He also clarified how it would currently impact Claude users:
The language used by the Department of War in the letter (even supposing it was legally sound) matches our statement on Friday that the vast majority of our customers are unaffected by a supply chain risk designation. With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.
In a scathing 1,600-word memo to employees sent on Friday, CEO Dario Amodei suggested Anthropic’s relationship with the government soured because, unlike OpenAI or its executives, “we haven’t donated to Trump” and “we haven’t given dictator-style praise to Trump.”
The leaked remarks could complicate Amodei’s last-ditch efforts to salvage the company’s relationship with the US military and prevent it from being iced out of defense work.
Companies that do business with the US military are pivoting away from Anthropic’s AI after Defense Secretary Pete Hegseth announced he was designating it a “supply chain risk” last week, CNBC reports. While Anthropic can still challenge the designation in court, defense companies say they’re abandoning Claude preemptively “out of an abundance of caution.”
The OpenAI CEO laid out some updated wording he hoped would address people’s concerns about mass domestic surveillance, though the new language still included the phrase “consistent with applicable laws.” Altman also reiterated his position from over the weekend that Anthropic should not be designated a supply chain risk.
[X (formerly Twitter)]

