<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Anthropic | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2026-04-22T17:12:21+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/anthropic" />
	<id>https://www.theverge.com/rss/anthropic/index.xml</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/rss/anthropic/index.xml" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Lauren Feiner</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic&#8217;s Mythos rollout has missed America’s cybersecurity agency]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/policy/916758/anthropic-mythos-preview-cisa-left-out" />
			<id>https://www.theverge.com/?p=916758</id>
			<updated>2026-04-22T13:12:21-04:00</updated>
			<published>2026-04-22T12:57:36-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Politics" /><category scheme="https://www.theverge.com" term="Security" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Several US federal agencies are taking up Anthropic's new cybersecurity model to find vulnerabilities, but one is reportedly not getting in on the action: the nation's central cybersecurity coordinator. On Tuesday, Axios reported that the Cybersecurity and Infrastructure Security Agency (CISA) didn't have access to Mythos Preview, which Anthropic has touted as a powerful tool [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/STK269_ANTHROPIC_2_D.webp?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Several US federal agencies are taking up Anthropic's new cybersecurity model to find vulnerabilities, but one is reportedly not getting in on the action: the nation's central cybersecurity coordinator. </p>
<p class="has-text-align-none">On Tuesday, <a href="https://www.axios.com/2026/04/21/cisa-anthropic-mythos-ai-security"><em>Axios </em>reported</a> that the Cybersecurity and Infrastructure Security Agency (CISA) didn't have access to Mythos Preview, which Anthropic has touted as a powerful tool for finding and patching security vulnerabilities. Meanwhile, other agencies like the <a href="https://www.politico.com/news/2026/04/14/anthropic-mythos-federal-agency-testing-00872439">Commerce Department</a> and the <a href="https://www.axios.com/2026/04/19/nsa-anthropic-mythos-pentagon">National Security Agency (NSA)</a> are reportedly using the model, and President Donald Trump's administration has been negotiating broader access, <a href="https://www.axios.com/2026/04/16/white-house-anthropic-ai-mythos-government-national-security"><em>Axios</em> wrote</a> last w …</p>
<p><a href="https://www.theverge.com/policy/916758/anthropic-mythos-preview-cisa-left-out">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Jess Weatherbed</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic’s most dangerous AI model just fell into the wrong hands]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/916501/anthropic-mythos-unauthorized-users-access-security" />
			<id>https://www.theverge.com/?p=916501</id>
			<updated>2026-04-22T05:30:13-04:00</updated>
			<published>2026-04-22T05:18:40-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Security" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Anthropic's Mythos AI model, a powerful cybersecurity tool that the company said could be dangerous in the wrong hands, has been accessed by a "small group of unauthorized users," Bloomberg reports. An unnamed member of the group, identified only as "a third-party contractor for Anthropic," told the publication that members of a private online forum [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Vector illustration of the Anthropic logo." data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25469782/STK269_ANTHROPIC_D.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Anthropic's Mythos AI model, a powerful cybersecurity tool that the company said could be dangerous in the wrong hands, has been accessed by a "small group of unauthorized users," <a href="https://www.bloomberg.com/news/articles/2026-04-21/anthropic-s-mythos-model-is-being-accessed-by-unauthorized-users"><em>Bloomberg</em></a> reports. An unnamed member of the group, identified only as "a third-party contractor for Anthropic," told the publication that members of a private online forum got into Mythos via a mix of tactics, utilizing the contractor's access and "commonly used internet sleuthing tools."</p>
<p class="has-text-align-none">The Claude Mythos Preview is a new general-purpose model that's capable of identifying and exploiting vulnerabilities "in every major operating system and every major web browser …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/916501/anthropic-mythos-unauthorized-users-access-security">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic’s new cybersecurity model could get it back in the government’s good graces]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/914229/tides-turning-anthropic-trump-administration-cybersecurity-mythos-preview" />
			<id>https://www.theverge.com/?p=914229</id>
			<updated>2026-04-21T09:36:22-04:00</updated>
			<published>2026-04-17T16:14:21-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Policy" />
							<summary type="html"><![CDATA[The Trump administration has spent nearly two months fighting with AI company Anthropic. It's dubbed the company a "RADICAL LEFT, WOKE COMPANY" full of "Leftwing nut jobs" and a menace to national security. But some of the ice may reportedly be melting between the two, thanks to Anthropic's buzzy new cybersecurity-focused model: Claude Mythos Preview. [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Photo illustration of Dario Amodei of Anthropic." data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25469941/STK202_DARIO_AMODEI_CVIRGINIA_D.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">The Trump administration has spent nearly two months fighting with AI company Anthropic. It's <a href="https://truthsocial.com/@realDonaldTrump/posts/116144552969293195">dubbed the company</a> a "RADICAL LEFT, WOKE COMPANY" full of "Leftwing nut jobs" and a menace to national security. But some of the ice may reportedly be melting between the two, thanks to Anthropic's <a href="https://www.theverge.com/ai-artificial-intelligence/908114/anthropic-project-glasswing-cybersecurity">buzzy new cybersecurity-focused model</a>: Claude Mythos Preview.</p>
<p class="has-text-align-none">Anthropic's relationship with the Pentagon <a href="https://www.theverge.com/ai-artificial-intelligence/883456/anthropic-pentagon-department-of-defense-negotiations">soured quickly</a> in late February after the company refused to budge on two red lines: using its technology for domestic mass surveillance or lethal fully autonomous weapons with no human in the loop. Anthropic's tech has in the past been used heavily b …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/914229/tides-turning-anthropic-trump-administration-cybersecurity-mythos-preview">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic releases a new Opus model amid Mythos Preview buzz]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/913184/anthropic-claude-opus-4-7-cybersecurity" />
			<id>https://www.theverge.com/?p=913184</id>
			<updated>2026-04-16T12:00:23-04:00</updated>
			<published>2026-04-16T11:59:24-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="News" />
							<summary type="html"><![CDATA[Anthropic has released its most powerful "generally available" model to date: Claude Opus 4.7. The company called it a step up from Opus 4.6 for advanced software engineering tasks, particularly in complex coding areas that in the past required more hand-holding. It's also supposed to be better at analyzing images and following instructions, and it [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/01/STKB364_CLAUDE_2_C.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Anthropic has released its most powerful "generally available" model to date: Claude Opus 4.7. </p>
<p class="has-text-align-none">The company called it a step up from Opus 4.6 for advanced software engineering tasks, particularly in complex coding areas that in the past required more hand-holding. It's also supposed to be better at analyzing images and following instructions, and it can exhibit more "creativity" when creating slides and documents, per Anthropic.</p>
<p class="has-text-align-none">Opus 4.7 comes on the heels of <a href="https://www.theverge.com/ai-artificial-intelligence/908114/anthropic-project-glasswing-cybersecurity">Mythos Preview</a>, the buzzy cybersecurity-focused model Anthropic announced earlier this month, which the company has said is its most powerful model overall. Comparatively, Opus 4.7 is …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/913184/anthropic-claude-opus-4-7-cybersecurity">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>David Pierce</name>
			</author>
			
			<title type="html"><![CDATA[The AI code wars are heating up]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/column/910019/ai-coding-wars-openai-google-anthropic" />
			<id>https://www.theverge.com/?p=910019</id>
			<updated>2026-04-21T12:08:39-04:00</updated>
			<published>2026-04-12T08:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="Column" /><category scheme="https://www.theverge.com" term="Google" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="The Stepback" />
							<summary type="html"><![CDATA[This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on the AI coding and vibe-coding booms, follow David Pierce. The Stepback arrives in our subscribers' inboxes at 8AM ET. Opt in for The Stepback here. How it started Writing code was a killer app for AI [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="An animation of laptops racing with live code being generated on their screens" data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge, Turbosquid" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/268441_AI_CODING_RACE_CVIRGINIA.gif?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none"><em>This is </em><a href="https://www.theverge.com/the-stepback-newsletter">The Stepback</a><em>,</em> <em>a weekly newsletter breaking down one essential story from the tech world. For more on the AI coding and vibe-coding booms, <a href="https://www.theverge.com/authors/david-pierce" data-type="link" data-id="https://www.theverge.com/authors/david-pierce">follow David Pierce</a>. </em>The Stepback<em> arrives in our subscribers' inboxes at 8AM ET. Opt in for </em>The Stepback <a href="https://www.theverge.com/newsletters"><em>here</em></a><em>.</em></p>
<h2 class="wp-block-heading has-text-align-none">How it started</h2>
<p class="has-text-align-none">Writing code was a killer app for AI even before anyone was really talking about AI. In the spring of 2021, 18 months before the world knew the word "ChatGPT," Microsoft debuted the very first product of a partnership with a nonprofit called OpenAI: <a href="https://www.theverge.com/2021/6/29/22555777/github-openai-ai-tool-autocomplete-code">a tool called GitHub Copilot</a> that watched developers as they wrote code and tried to autocomplete snippets and lines for them …</p>
<p><a href="https://www.theverge.com/column/910019/ai-coding-wars-openai-google-anthropic">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Nilay Patel</name>
			</author>
			
			<title type="html"><![CDATA[The AI industry’s race for profits is now existential]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/podcast/909042/ai-monetization-cliff-anthropic-openai-profitable-ai-existential-moment" />
			<id>https://www.theverge.com/?p=909042</id>
			<updated>2026-04-10T05:07:31-04:00</updated>
			<published>2026-04-09T10:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="Business" /><category scheme="https://www.theverge.com" term="Decoder" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Podcasts" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Today on Decoder, let’s talk about the looming AI monetization cliff, and whether some of the biggest companies in the space can become real, profitable businesses before they careen right off it. My guest today is Hayden Field, who’s our senior AI reporter here at The Verge. She’s been keeping close tabs on both Anthropic [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A photo illustration of OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei superimposed over a cliff." data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/DCD_0409.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-drop-cap has-text-align-none">Today on <em>Decoder</em>, let’s talk about the looming AI monetization cliff, and whether some of the biggest companies in the space can become real, profitable businesses before they careen right off it.</p>

<p class="has-text-align-none">My guest today is Hayden Field, who’s our senior AI reporter here at <em>The Verge</em>. She’s been keeping close tabs on both Anthropic and OpenAI, and how these two companies in particular tell us a whole lot about the AI industry in 2026.&nbsp;</p>

<p class="has-text-align-none">You’ve certainly heard a version of the monetization cliff story before. The biggest AI firms are built off the back of hundreds of billions in capital investment, and they’re linked to even greater amounts of forward-looking investment in data center build-out, chips, and other infrastructure spend. At some point, the profits have to materialize, or the bubble pops. Maybe AGI arrives, maybe the economy crashes, who knows.&nbsp;</p>

<p class="has-text-align-none">You’ve heard me ask some version of this question to scores of CEOs here on this show, and a majority of them have hinted toward the bubble popping — they think some companies will fail in spectacular fashion, some will succeed, and the opportunities, especially the money, are simply too big to ignore. We’re doing this, whether we want to or not — the market depends on it.&nbsp;</p>

<div class="wp-block-vox-media-highlight vox-media-highlight"><img src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/24792604/The_Verge_Decoder_Tileart.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="" title="" data-has-syndication-rights="1" data-caption="" data-portal-copyright="" />


<p><em>Verge</em> subscribers, don&#8217;t forget you get exclusive access to ad-free <em>Decoder</em> wherever you get your podcasts. Head <a href="https://www.theverge.com/account/podcasts">here</a>. Not a subscriber? You can <a href="https://www.theverge.com/subscribe">sign up here</a>. </p>
</div>

<p class="has-text-align-none">So these last few weeks have felt like a very important inflection point, as both Anthropic and OpenAI have started to react to the reality of needing to go public — needing to make money. </p>

<p class="has-text-align-none">The catalyst for this change is AI agents. Products like Claude Code and Cowork, as well as the open-source OpenClaw and OpenAI’s Codex, have radically changed how these companies are thinking about their resources. And this is starting to affect how they behave — the products they support or suddenly kill, the restrictions they impose on customers, and the money they’re willing to burn toward their next big milestone.&nbsp;</p>

<p class="has-text-align-none">That&#8217;s because agents are valuable to customers right now, but agents also use far more compute. So the way people are using agents is burning tokens at a rate way faster than these companies anticipated, and that’s causing them to make hard decisions.&nbsp;</p>

<p class="has-text-align-none">We saw this most evidently last month when OpenAI abruptly <a href="https://www.theverge.com/ai-artificial-intelligence/899850/openai-sora-ai-chatgpt">killed its video-generation app Sora</a>, ditching a $1 billion Disney licensing deal in the process. Why? It costs too much to run, and OpenAI needs the compute for Codex. We saw it again just last week, when Anthropic decided it would no longer let Claude users burn through compute resources using the OpenClaw agent framework through a standard subscription plan, instead forcing those users <a href="https://www.theverge.com/ai-artificial-intelligence/907074/anthropic-openclaw-claude-subscription-ban">onto pay-as-you-go plans</a>, which cost substantially more.&nbsp;</p>

<p class="has-text-align-none">As you’ll hear Hayden explain here, these are glimmers of a make-or-break moment for the AI industry, as both Anthropic and OpenAI barrel toward two of the biggest IPOs in history. And the pressure on these companies to make money has never been this intense.&nbsp;</p>

<p class="has-text-align-none">The projections these companies have made, which just this week were <a href="https://www.wsj.com/tech/ai/openai-anthropic-ipo-finances-04b3cfb9?">leaked to the <em>Wall Street Journal</em></a>, tell a story of mind-boggling growth, to the tune of hundreds of billions in revenue and profitability by the end of the decade. But the most important questions now are whether the AI companies can pull this off, and what compromises they will make to reach that goal and avoid crashing and burning.&nbsp;</p>

<p class="has-text-align-none">Okay: <em>Verge</em> senior AI reporter Hayden Field on the AI monetization cliff and the race to profitability. Here we go.</p>

<iframe frameborder="0" height="200" src="https://playlist.megaphone.fm?e=VMP1417581812" width="100%"></iframe>

<p class="has-text-align-none"><em>If you’d like to read more about what we discussed in this episode, check out these links:</em></p>

<ul class="wp-block-list">
<li>The vibes are off at OpenAI | <a href="https://www.theverge.com/ai-artificial-intelligence/908513/the-vibes-are-off-at-openai"><em>The Verge</em></a></li>
<li>Anthropic essentially bans OpenClaw from Claude | <a href="https://www.theverge.com/ai-artificial-intelligence/907074/anthropic-openclaw-claude-subscription-ban"><em>The Verge</em></a></li>
<li>Why OpenAI killed Sora | <a href="https://www.theverge.com/ai-artificial-intelligence/902368/openai-sora-dead-ai-video-generation-competition"><em>The Verge</em></a></li>
<li>OpenAI just bought TBPN | <a href="https://www.theverge.com/ai-artificial-intelligence/906022/openai-buys-tbpn"><em>The Verge</em></a></li>
<li>National poll shows voters like AI less than ICE | <a href="https://www.theverge.com/ai-artificial-intelligence/891724/nbc-news-march-2026-poll-ai-ice"><em>The Verge</em></a></li>
<li>The spiraling cost of making AI | <a href="https://www.wsj.com/tech/ai/the-spiraling-cost-of-making-ai-0679bcea?mod=WTRN_pos1"><em>WSJ</em></a></li>
<li>OpenAI’s Fidji Simo taking leave amid exec shake-up | <a href="https://www.wired.com/story/openais-fidji-simo-is-taking-a-leave-of-absence/"><em>Wired</em></a></li>
<li>OpenAI raises another $122B at $850B valuation | <a href="https://www.theverge.com/ai-artificial-intelligence/904727/openai-chatgpt-investment"><em>The Verge</em></a></li>
</ul>

<p class="has-text-align-none"><em><sub>Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!</sub></em></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[A new Anthropic model found security problems &#8216;in every major operating system and web browser&#8217;]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/908114/anthropic-project-glasswing-cybersecurity" />
			<id>https://www.theverge.com/?p=908114</id>
			<updated>2026-04-07T14:16:03-04:00</updated>
			<published>2026-04-07T14:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="News" />
							<summary type="html"><![CDATA[Anthropic is debuting a new AI model as part of a cybersecurity partnership with Nvidia, Google, Amazon Web Services, Apple, Microsoft, and other companies. Project Glasswing, as it's called, is billed as a way for large companies, and potentially even the government, to flag vulnerabilities in their systems with virtually no human intervention. Anthropic is [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Photo collage showing mouse cursors circling like sharks in a tablet screen." data-caption="" data-portal-copyright="The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/04/STK461_INTERNET_CHILD_SAFETY_Stock_B_CVirginia.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Anthropic is debuting a new AI model as part of a cybersecurity partnership with Nvidia, Google, Amazon Web Services, Apple, Microsoft, and other companies. Project Glasswing, as it's called, is billed as a way for large companies, and potentially even the government, to flag vulnerabilities in their systems with virtually no human intervention.</p>
<p class="has-text-align-none">Anthropic is offering its launch partners access to Claude Mythos Preview, a new general-purpose model that it's not currently planning to publicly release due to security concerns. Newton Cheng, the cyber lead for Anthropic's frontier red team, told <em>The Verge</em> that the model will ideally give cyber  …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/908114/anthropic-project-glasswing-cybersecurity">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Jay Peters</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic essentially bans OpenClaw from Claude by making subscribers pay extra]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/907074/anthropic-openclaw-claude-subscription-ban" />
			<id>https://www.theverge.com/?p=907074</id>
			<updated>2026-04-03T20:14:00-04:00</updated>
			<published>2026-04-03T19:52:49-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="News" />
							<summary type="html"><![CDATA[Using OpenClaw with Claude AI is about to get a lot more expensive, thanks to Anthropic's new policy changes. Beginning April 4th at 3PM ET, users will "no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw," according to an email sent to users on Friday evening. Instead, if users [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="The OpenClaw logo on a dark blue background." data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/02/STKB382_OPEN_CLAW_D.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Using OpenClaw with Claude AI is about to get a lot more expensive, thanks to Anthropic's new policy changes. Beginning April 4th at 3PM ET, users will "no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw," according to an email sent to users on Friday evening. Instead, if users want to use OpenClaw with Claude, they'll have to use a "pay-as-you-go option" that will be billed separately from their Claude subscription. </p>
<p class="has-text-align-none">With OpenClaw creator Peter Steinberger now <a href="https://www.theverge.com/ai-artificial-intelligence/879623/openclaw-founder-peter-steinberger-joins-openai">employed by OpenAI</a>, Anthropic may also be encouraging subscribers to use <a href="https://www.theverge.com/ai-artificial-intelligence/899430/anthropic-claude-code-cowork-ai-control-computer">more of its own tools, like Claude Cowork, instead</a>. Steinber …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/907074/anthropic-openclaw-claude-subscription-ban">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Emma Roth</name>
			</author>
			
			<title type="html"><![CDATA[Claude Code leak exposes a Tamagotchi-style ‘pet’ and an always-on agent]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/904776/anthropic-claude-source-code-leak" />
			<id>https://www.theverge.com/?p=904776</id>
			<updated>2026-03-31T18:24:19-04:00</updated>
			<published>2026-03-31T18:24:19-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Security" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[After Anthropic released Claude Code's 2.1.88 update, users quickly discovered that it contained a package with a source map file containing its TypeScript codebase, with one person on X calling attention to the leak and posting a file containing the code. The leaked data reportedly contains more than 512,000 lines of code and provides a [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="The Claude logo on a beige background." data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/08/STKB364_CLAUDE_D.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">After Anthropic released Claude Code's <a href="https://x.com/ClaudeCodeLog/status/2038773096379748786?s=20">2.1.88 update</a>, users quickly discovered that it contained a package with a source map file containing its TypeScript codebase, with one <a href="https://x.com/Fried_rice/status/2038894956459290963?s=20">person on X</a> calling attention to the leak and posting a file containing the code. The leaked data reportedly contains more than 512,000 lines of code and provides a look into the inner workings of the AI-powered coding tool, as <a href="https://arstechnica.com/ai/2026/03/entire-claude-code-cli-source-code-leaks-thanks-to-exposed-map-file/">reported earlier by <em>Ars Technica</em></a> and <a href="https://venturebeat.com/technology/claude-codes-source-code-appears-to-have-leaked-heres-what-we-know"><em>VentureBeat</em></a>.</p>
<p class="has-text-align-none">Users <a href="https://x.com/vineetwts/status/2038911973975601275?s=20">who have dug into</a> the code claim to have uncovered upcoming features, Anthropic's <a href="https://x.com/vedolos/status/2038977464840630611?s=20">instructions for the AI bot</a>, and insight into its <a href="https://x.com/himanshustwts/status/2038924027411222533">"memory" architecture</a>. Some things spotted by users inclu …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/904776/anthropic-claude-source-code-leak">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[Judge sides with Anthropic to temporarily block the Pentagon&#8217;s ban]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/902149/anthropic-dod-pentagon-lawsuit-supply-chain-risk-injunction" />
			<id>https://www.theverge.com/?p=902149</id>
			<updated>2026-03-27T05:28:55-04:00</updated>
			<published>2026-03-26T20:33:44-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Analysis" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="Report" />
							<summary type="html"><![CDATA[After Anthropic's weeks-long standoff with the Pentagon, the company scored a milestone victory: A judge granted Anthropic a preliminary injunction in its lawsuit, which sought to reverse its government blacklisting while the judicial process plays out. "The Department of War's records show that it designated Anthropic as a supply chain risk because of its 'hostile manner [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/02/268367_dod_and_anthropics_public_fight_over_lethal_autonomous_weapons_a_mass_surveillance_CVirginia.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">After Anthropic's weeks-long standoff with the Pentagon, the company scored a milestone victory: A judge granted Anthropic a preliminary injunction in its lawsuit, which sought to reverse its government blacklisting while the judicial process plays out. </p>
<p class="has-text-align-none">"The Department of War's records show that it designated Anthropic as a supply chain risk because of its 'hostile manner through the press,'" Judge Rita F. Lin of the Northern District of California wrote in the <a href="https://www.courtlistener.com/docket/72379655/134/anthropic-pbc-v-us-department-of-war/">order</a>, which will go into effect in seven days. "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendme …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/902149/anthropic-dod-pentagon-lawsuit-supply-chain-risk-injunction">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
	</feed>
