<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Hayden Field | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2026-04-23T14:11:09+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/author/hayden-field" />
	<id>https://www.theverge.com/authors/hayden-field/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/authors/hayden-field/rss" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[You’re about to feel the AI money squeeze]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/917380/ai-monetization-anthropic-openai-token-economics-revenue" />
			<id>https://www.theverge.com/?p=917380</id>
			<updated>2026-04-23T10:11:09-04:00</updated>
			<published>2026-04-23T09:45:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Business" /><category scheme="https://www.theverge.com" term="Report" />
							<summary type="html"><![CDATA[Earlier this month, millions of OpenClaw users woke up to a sweeping mandate: The viral AI agent tool, which this year took the worldwide tech industry by storm, had been severely restricted by Anthropic.&#160; Anthropic, like other leading AI labs, was under immense pressure to lessen the strain on its systems and start turning a [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A small human figure with a laptop being squeezed by a larger humanoid shape covered in blue digital lines, itself being squeezed by a larger arm in a black suit sleeve." data-caption="" data-portal-copyright="Image: Vincent Kilbride / The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/vincentkilbride-theverge-ai-monetisation.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-drop-cap has-text-align-none">Earlier this month, millions of OpenClaw users woke up to a sweeping mandate: The viral AI agent tool, which this year took the worldwide tech industry by storm, had been severely restricted by Anthropic.&nbsp;</p>

<p class="has-text-align-none">Anthropic, like other leading AI labs, was under immense pressure to lessen the strain on its systems and start turning a profit. So if the users wanted its Claude AI to power their popular agents, they’d have to start paying handsomely for the privilege.&nbsp;</p>

<p class="has-text-align-none">“Our subscriptions weren’t built for the usage patterns of these third-party tools,” wrote Boris Cherny, head of Claude Code, on <a href="https://x.com/bcherny/status/2040206440556826908?s=20">X</a>. “We want to be intentional in managing our growth to continue to serve our customers sustainably long-term. This change is a step toward that.”&nbsp;</p>

<p class="has-text-align-none">The announcement was a sign of the times. Investors have poured hundreds of billions of dollars into companies like OpenAI and Anthropic to help them scale and build out their compute. Now, they’re expecting returns. After years of offering cheap or totally free access to advanced AI systems, the bill is starting to come due — and downstream, users are beginning to feel the pinch.&nbsp;</p>

<p class="has-text-align-none">Over the past few years, most top AI labs have introduced new subscription tiers to court power users. OpenAI and Anthropic shifted their pricing plans for enterprise. OpenAI introduced in-platform advertisements. Anthropic, of course, restricted third-party tools.&nbsp;</p>

<p class="has-text-align-none">In some ways, this is a tale as old as time,&nbsp;and in particular a clear echo of the tech boom of the ’10s. Venture capitalists helped startups subsidize fast growth in all kinds of areas: ride-hailing apps, e-commerce, takeout and grocery delivery. Once companies cemented their power, they raised prices, added new revenue streams, and delivered a return to investors. Or they didn’t — and they crashed and burned.&nbsp;</p>

<p class="has-text-align-none">But AI companies have gone through more investor money at a faster pace than any other sector in recent history. They’ve broken ground on data centers around the world, dedicating billions of dollars with promises of better models, lower costs, and AI for everyone. Even stemming the flow of losses will be difficult —&nbsp;let alone making the kind of money investors are hoping for. “When you sink trillions of dollars into data centers, you’re going to expect a return,” said Will Sommer, a senior director analyst at Gartner, who specializes in economic forecasting and quantitative modeling.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>“When you sink trillions of dollars into data centers, you’re going to expect a return.”</p></blockquote></figure>

<p class="has-text-align-none">“Is the era of basically free or close-to-free AI kind of coming to an end here?” said Mark Riedl, a professor in the Georgia Tech School of Interactive Computing. “It’s too soon to say for certain, but there are some signs.”&nbsp;</p>

<p class="has-text-align-none">Gartner’s Sommer studies long-term economic market trends related to generative AI, including calculating just how much money is at stake. Between 2024 and 2029, he said, Gartner estimates that capital investment in AI data centers will reach about $6.3 trillion — a “massive amount of money.”</p>

<p class="has-text-align-none">To avoid a write-down of these assets, major AI model providers would ideally generate a return on invested capital (ROIC) of about 25 percent, Sommer said. (That’s about what Amazon, Microsoft, and Google tend to earn on their overall capital investments.) On the other hand, if the returns fall below 12 percent, institutional capital loses interest — there’s better money elsewhere, Sommer said. Below 7 percent, you’re in write-down territory, which is “an unmitigated disaster for all of the investors in this technology,” Sommer said.</p>

<p class="has-text-align-none">To reach that bare minimum of 7 percent, Gartner forecasts that large AI companies would need to earn cumulatively close to $7 trillion in AI-driven revenue through 2029, which is close to $2 trillion per year by the end of the period. In order to achieve “historic returns,” the providers would need to earn nearly $8.2 trillion in the same period.</p>

<p class="has-text-align-none">OpenAI has already made $600 billion in spending commitments through 2030, the company said <a href="https://www.cnbc.com/2026/02/20/openai-resets-spend-expectations-targets-around-600-billion-by-2030.html">in February</a>, which Sommer says is already a “massive step down” from the $1.4 trillion it had planned before. Based on OpenAI’s revenue forecasts and potential compound annual growth, Sommer said that even in the best-case scenario, he predicts that the lab would only hit a fraction of the overall revenue required to hit that 7 percent ROIC.</p>

<p class="has-text-align-none">How do model providers like OpenAI make this money? By selling access to what are known as tokens. A token is essentially a unit of data that an AI model can understand and process — it could be text, images, audio, or something else. One token is generally worth about four characters in the English language — the word “bathroom,” for instance, would likely be processed as two tokens. One paragraph in English is generally about 100 tokens, and a 1,500-word essay may be about 2,048 tokens, per an OpenAI <a href="https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them">estimate</a>.&nbsp;</p>
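<p class="has-text-align-none">Those rules of thumb can be sketched in a few lines of code. This is only a heuristic estimate, not a real tokenizer — actual token counts depend on the model’s vocabulary, and libraries like OpenAI’s tiktoken give exact figures:</p>

```python
# Rough token-count heuristics based on OpenAI's published rules of thumb:
# ~1 token per 4 characters of English text, or ~3/4 of a word per token.
# Estimates only; a real tokenizer (e.g. tiktoken) gives exact counts.

def tokens_from_chars(text: str) -> int:
    """Estimate tokens as characters divided by four."""
    return max(1, round(len(text) / 4))

def tokens_from_words(word_count: int) -> int:
    """Estimate tokens as words times 4/3."""
    return round(word_count * 4 / 3)

print(tokens_from_chars("bathroom"))  # 8 characters -> ~2 tokens
print(tokens_from_words(1500))        # 1,500-word essay -> ~2,000 tokens
```

<p class="has-text-align-none">The word-based estimate lands near OpenAI’s own ~2,048-token figure for a 1,500-word essay, which is about as much precision as a character- or word-count heuristic can offer.</p>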

<p class="has-text-align-none">To hit investors’ revenue expectations, providers would need to process a “mind-bending” number of tokens, Sommer said.</p>

<p class="has-text-align-none">By most measures, companies’ numbers are already pretty big. Google announced it was processing 1.3 quadrillion tokens in October, for instance. If you add all the providers’ estimates up, Sommer said, you get 100 to 200 quadrillion tokens a year. But to achieve the $2 trillion in annual revenue Gartner calculated,&nbsp;providers would need to be generating, by conservative estimates,&nbsp;roughly 10 sextillion tokens per year. (To make that slightly less abstract, a quadrillion has 15 zeros, and a sextillion has 21.) Even assuming a very generous profit margin of 10 percent per token, that would mean that token consumption between now and 2030 would need to grow by 50,000–100,000x.&nbsp;</p>
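<p class="has-text-align-none">A back-of-the-envelope check, using only the figures cited above, shows where the 50,000–100,000x multiple comes from:</p>

```python
# Sanity check on the growth multiple, using the article's figures:
# current output of ~100-200 quadrillion tokens/year across all providers,
# versus the ~10 sextillion tokens/year Gartner's math implies.
QUADRILLION = 10**15  # 15 zeros
SEXTILLION = 10**21   # 21 zeros

current_low = 100 * QUADRILLION   # low end of today's estimated output
current_high = 200 * QUADRILLION  # high end of today's estimated output
needed = 10 * SEXTILLION          # tokens/year implied by the revenue target

print(needed // current_high)  # growth multiple vs. the high estimate: 50,000x
print(needed // current_low)   # growth multiple vs. the low estimate: 100,000x
```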

<figure class="wp-block-pullquote"><blockquote><p>To hit investors’ revenue expectations, providers would need to process a “mind-bending” number of tokens</p></blockquote></figure>

<p class="has-text-align-none">Right now, constantly seeking more data centers and strapped for compute, companies aren’t capable of processing this many tokens. Even if they could, they’d face a problem: they’re likely taking a loss on them. Sommer estimates that if you only account for the direct cost of infrastructure and electricity, “every company is making very reasonable margins on every token.” But that margin is probably tighter or nonexistent with newer, more token-hungry models. And it’s eaten up completely by indirect operation costs, like building out more compute and the “ungodly” expense of constantly training the next big model.</p>

<p class="has-text-align-none">“As soon as you then add all of the infrastructure that needs to be built for the next generation of model, and you look at how these models are going to scale, it becomes increasingly untenable,” Sommer said.</p>

<p class="has-text-align-none">Sommer predicts that many companies “won’t be able to sustain their burn rate,” and says market consolidation is virtually inevitable —&nbsp;in his eyes, no more than two large language model providers in any regional market will survive. And the era where nearly every service has a fairly generous unpaid tier probably isn’t going to last.</p>

<p class="has-text-align-none">“For the [labs] that have a lot of users that were free, I think the question was never really if you’d monetize the free tier but it was when, and how badly do you do it,” Jay Madheswaran, cofounder of legal AI startup Eve, which is a client of both OpenAI and Anthropic, told <em>The Verge</em>.</p>

<p class="has-text-align-none">Even if you do find a way to square the math, building customer loyalty can be just as complicated. Top labs are constantly leapfrogging each other on model debuts, feature releases, strategy shifts, hiring announcements, and more. It can be tough to stay on top long enough to corner any part of the market —&nbsp;engineers and developers are famous for switching which model they’re using on any given day, and it’s easy to do so.&nbsp;</p>

<p class="has-text-align-none">So labs are increasingly emphasizing the importance of locking users into their platform and tools. Anthropic, which primarily builds for enterprise clients, has been going <a href="https://www.theverge.com/report/874308/anthropic-claude-code-opus-hype-moment">all in on its coding efforts</a>, and OpenAI has recently pledged to mirror Anthropic’s focus on coding and enterprise, ahead of both companies reportedly racing each other to IPO by the end of 2026.&nbsp;</p>

<p class="has-text-align-none">For now, that competition is benefiting end users. “It’s an arms race where you cannot let up at all because the switching cost is zero,” said Soham Mazumdar, cofounder and CEO of Wisdom AI, adding, “As a common man, I’m going to be the winner longer-term.”&nbsp;</p>

<p class="has-text-align-none">In the early days of AI, the bulk of compute costs went to training initial models, while inference (or performing tasks) was cheaper. As models have advanced and systems have added features, however, inference has gotten far more resource-intensive. AI agents, or tools that ideally can complete complex, multistep tasks on your behalf without constant hand-holding, now use vastly more tokens than the basic chatbot models did a few years back.</p>

<p class="has-text-align-none">Reasoning models, which increasingly power AI agents, are notoriously expensive on the inference side as well, said Georgia Tech’s Riedl. These agents —&nbsp;such as popular open-source platform OpenClaw —&nbsp;are typically more efficient and effective than ones without reasoning, but they also expend far more tokens doing behind-the-scenes work the end user may not see. That may look like “thinking through” a lot of different potential paths, launching sub-agents to do portions of a task, or verifying the accuracy of different steps of the process.&nbsp;</p>

<p class="has-text-align-none">“You put in your one-sentence prompt… and it’ll talk out loud to itself for thousands and thousands of tokens, thousands and thousands of words, maybe even tens of thousands when you get into coding,” Riedl said, adding, “If you have thousands or millions of people using these things every single day, the inference costs of just the users generating tons and tons of tokens all the time really outweighs the training side of things.” If model providers were making a straightforward profit on all these tokens and had the compute to handle them easily, that wouldn’t be a problem for them —&nbsp;but as things stand, it’s a strain.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>“The use cases have exploded, and we’re out of capacity.”</p></blockquote></figure>

<p class="has-text-align-none">“Anybody who was building agents in the past couple of years sort of saw this coming,” said Aaron Levie, CEO of Box, adding, “The use cases have exploded, and we’re out of capacity.”&nbsp;</p>

<p class="has-text-align-none">Top AI labs have recently changed their policies on API usage and third-party tools —&nbsp;like Anthropic <a href="https://www.theverge.com/ai-artificial-intelligence/907074/anthropic-openclaw-claude-subscription-ban">essentially banning</a> the use of OpenClaw unless subscribers pay extra —&nbsp;due to the extra strain. “You’ve got these tools that are basically just sitting as background processors on everyone’s laptops and desktops, just continuously waking themselves up, generating some tokens, doing some stuff, and putting themselves back to sleep,” said Riedl.&nbsp;</p>

<p class="has-text-align-none">And no matter what you’re doing with a reasoning-model-powered AI agent, there are likely going to be wasted tokens —&nbsp;meaning times that an AI model goes down a non-useful path and then backtracks, or checks on how something is going but doesn’t change anything, or even pauses to write itself a poem. In an era where labs are likely losing money on some tokens and companies are strapped for compute, the industry is trying to reduce wasted tokens and build more focused and targeted models.&nbsp;</p>

<p class="has-text-align-none">Although it may be good for both paying customers and AI labs alike to make models use fewer tokens, it ironically works against the mission of massively increasing token usage. As Gartner’s Sommer puts it, pricing models may change significantly down the line, but right now, there’s a “narrow space on the treadmill” between short- and long-term goals.&nbsp;</p>

<p class="has-text-align-none">Add this all up, and big AI companies are at a transition point: they’ve attracted huge numbers of users by offering free access, and now they need to keep those users while charging a lot more. “On one hand, they want to see more tokens being generated but they have to either suck up the costs, which they can sort of do as long as venture capital is flowing, or pass the costs back on to [customers],” Riedl said. “Maybe the economics are a little upside down right now.”&nbsp;</p>

<p class="has-text-align-none">These days, OpenAI and Anthropic <a href="https://podcasts.apple.com/us/podcast/chatgpt-the-super-assistant-era-bg2-guest-interview/id1727278168?i=1000755428126">are</a> <a href="https://www.theregister.com/2026/04/16/anthropic_ejects_bundled_tokens_enterprise/">often</a> <a href="https://help.openai.com/en/articles/8265053-what-is-chatgpt-enterprise">weighing</a> the advantages of older flat-rate subscription plans and ones with metered fees. Both companies’ enterprise plans are now token-based, since usership is “uneven,” as Andrew Filev, founder of Zencoder, called it —&nbsp;one person may use it once or twice a week for a few minutes, while another is running five agents in the background around the clock.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>For consumer chatbots, some monetization is taking the form of advertising</p></blockquote></figure>

<p class="has-text-align-none">In consumer chatbots, some model makers are trying to mitigate this with advertising. OpenAI recently introduced ads within ChatGPT, which show up as a separate sidebar, and it’s <a href="https://digiday.com/marketing/openai-builds-tool-to-track-whether-chatgpt-ads-convert/">reportedly</a> working on a tool to track how well those ads work. (Anthropic famously <a href="https://www.theverge.com/news/874084/ai-chatgpt-claude-super-bowl-ads-openai-anthropic">decried the move</a> in its 2026 Super Bowl ads.)&nbsp;</p>

<p class="has-text-align-none">But for companies that build tools on top of models like GPT-5 or Claude Opus, the price of tokens is going up, and the extra cost is largely trickling down to <em>their </em>customers. Multiple tech companies <em>The Verge</em> spoke with said they, or their customers, are changing strategies to offset the new pricing. Some are considering moving fully or partially to open-source models, and some are using considerable time and resources to evaluate how expensive high-end models perform on certain tasks compared to cheaper alternatives.</p>

<p class="has-text-align-none">David DeSanto, CEO of software company Anaconda, recently returned from a five-week trip around the world speaking to customers. He said that many were moving to self-host AI models —&nbsp;deploying their own within Amazon Bedrock or Google’s Vertex AI to have more control over the supply chain —&nbsp;or changing to open-source or open-weight models for a lot of their needs, since many such models have significantly improved on benchmarks as of late. Some companies also worry about the security of sending IP to a commercial frontier lab, so they only use ChatGPT or Claude models for “mission-critical applications,” he said.&nbsp;</p>

<p class="has-text-align-none">“Everyone I spoke to had some version of this problem —&nbsp;their token usage has gone up, so their usage-based billing cost has gone up, or the tier they were on no longer has the same cap, and now they’re having to go to a more expensive tier to try to keep the same amount of usage per month as part of their flat rate,” DeSanto said.</p>

<p class="has-text-align-none">Eve, a company that sells software to plaintiff lawyers, is constantly balancing quality and token costs, Madheswaran said — especially since Eve’s token usage has gone up 100x year-over-year to date. So it’s always switching between open-source models and various models from Anthropic and OpenAI.&nbsp;</p>

<p class="has-text-align-none">But even a 1 percent regression in quality of output negatively impacts Eve’s customers “quite significantly,” Madheswaran said, which is why Eve spends a lot of internal resources tracking model quality. The company typically finds itself using the newer, more expensive reasoning models about 25 to 30 percent of the time, splitting the rest of its usage between Eve’s own open-source variants and smaller, cheaper models from leading labs. Madheswaran said the company has found that some cheap models are just as accurate as expensive ones, depending on the query.</p>

<p class="has-text-align-none">“What open source is really doing is it’s putting pressure on these companies to make their cheaper models cheaper because their profit margins there are much, much better,” Madheswaran said.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>“What open source is really doing is it’s putting pressure on these companies to make their cheaper models cheaper.”</p></blockquote></figure>

<p class="has-text-align-none">Wisdom AI, which provides AI-powered data analysis, hasn’t had to pass on cost increases yet. The team is testing out how different models perform on different types of tasks, and then budgeting accordingly. Mazumdar said it’s been testing out Cerebras, which is popular for open-weight models, lately, “in anticipation of how expensive things will get” from the premier labs like OpenAI and Anthropic. “[Big AI companies] have been giving this away for free,” Mazumdar said. “What they’re trying to do is, the moment they sense there’s an enterprise at play, or there’s propensity to pay, they absolutely jack up the prices drastically.”&nbsp;</p>

<p class="has-text-align-none">But he said there’s always a cost, especially on the coding front. “The reality is this: If you’re doing coding of any kind, then the open-source models simply don’t come close, and that’s the unfortunate reality of where we are today,” he said.&nbsp;</p>

<p class="has-text-align-none">Box’s Levie believes the changes will play out over the next 24 months. He said the VC-subsidized era of AI was likely necessary for growth —&nbsp;after all, if two companies with largely equal products are competing for the same customers, and one is offering a (subsidized) product at a lower price, the cheaper one will obviously win out, at least in the short term. But now it’s time to build more efficiency into the system, and not everyone is going to survive it.&nbsp;</p>

<p class="has-text-align-none">“The size of the market is so large that I think it actually will sort of all work out,” Levie said. “At an individual company level, you have to decide: Can you keep up with this flywheel, or are you going to be priced out based on an inability to raise capital or an inability to make the model more efficient for your tasks?”&nbsp;</p>

<p class="has-text-align-none">Eve’s Madheswaran thinks the industry will soon move from focusing on the so-called “best” model to what works the best for a business’s personalized, niche use cases. “That’s my guess, and obviously I’m betting our entire company on it.”</p>

<p class="has-text-align-none">Gartner’s Sommer likens the whole scenario to what he called the “stegosaurus paradox.” When scientists first discovered the stegosaurus fossil, he said, they didn’t understand how a large body could be supported by such a small head with a tiny mouth — and the theory they developed was that the stegosaurus would need to constantly be eating, and eating a highly nutritious diet.&nbsp;</p>

<p class="has-text-align-none">“We see AI as kind of being the same deal,” Sommer said — for the stegosaurus (AI labs) to survive, providers need to find more food for it (the entire global economy, not just the tech market) and it has to be highly nutritious, too (i.e., providers need to be able to earn a margin from it and stop subsidizing). If the stegosaurus paradox isn’t resolved, and the mouth is “too small for the body,” he said, it will lead to write-downs, falling valuations, dried-up financing, and a broad resetting of expectations for AI worldwide. Therefore, Sommer said, a sustainable business model “would require that genAI be infused in everything from billboards to checkout kiosks,” with providers taking a cut of all of those transactions.</p>

<p class="has-text-align-none">“The free era was really a land grab —&nbsp;it’s a common strategy used by startups,” said Eve’s Madheswaran. “That’s just not a business model. You can’t do that for too long.”</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[John Ternus’ first big problem is AI]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/915662/john-ternus-apple-ceo-tim-cook-ai-problem-siri" />
			<id>https://www.theverge.com/?p=915662</id>
			<updated>2026-04-21T11:11:38-04:00</updated>
			<published>2026-04-21T09:37:55-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Analysis" /><category scheme="https://www.theverge.com" term="Apple" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Less than a year ago, Apple made headlines for a lack of AI announcements at its annual WWDC event. Ten months later, the company has announced that hardware executive John Ternus will succeed longtime CEO Tim Cook as chief executive —&#160;and the official release doesn’t mention AI once.&#160; Ternus, currently Apple’s SVP of hardware engineering, [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Photo collage of John" data-caption="" data-portal-copyright="Image: The Verge; Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/JohnTernus.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Less than a year ago, Apple made headlines for a lack of AI announcements at its annual WWDC event. Ten months later, the company has announced that hardware executive John Ternus will succeed longtime CEO Tim Cook as chief executive —&nbsp;and the official release doesn’t mention AI once.&nbsp;</p>

<p class="has-text-align-none">Ternus, currently Apple’s SVP of hardware engineering, will take over as CEO on September 1st, after Cook’s decade and a half in the role. Ternus is a 25-year veteran of the company and the first Apple CEO in about 30 years to come from the hardware sector. According to Apple, he’s led hardware engineering work for every model of iPad, as well as the most recent iPhone family and AirPods. Yesterday’s <a href="https://www.apple.com/newsroom/2026/04/tim-cook-to-become-apple-executive-chairman-john-ternus-to-become-apple-ceo/">announcement</a> highlighted Ternus’ work adding better noise cancellation and hearing health upgrades for AirPods, overseeing the MacBook Neo’s debut, and upping Apple products’ durability and repairability. Not once did the company mention his plans or relevant experience for advancing AI. </p>

<p class="has-text-align-none">And with all eyes on Apple’s C-suite after more than a year of failed promises about the company’s AI assistant offerings, that’s sure to be noticed.&nbsp;</p>

<p class="has-text-align-none">In recent years, Apple has taken on a <a href="https://www.bloomberg.com/news/articles/2025-05-18/apple-intelligence-struggles-to-keep-up-with-chatgpt-ai-competitors">reputation</a> for trailing competitors in the AI race. Its AI assistant Siri lacks the capabilities of competing products from Google, Microsoft, OpenAI, and Anthropic, and it relies on other companies for the underlying models. Microsoft and Google have both gone all in on incorporating agentic AI features into their operating systems in ways that Apple simply hasn’t —&nbsp;and sometimes when it’s tried, like via Apple Intelligence’s notification summaries, it’s gotten <a href="https://arstechnica.com/apple/2024/11/apple-intelligence-notification-summaries-are-honestly-pretty-bad/">made fun of</a> for missing the mark.&nbsp;</p>

<p class="has-text-align-none">That’s not to say that stuffing AI into a system is a sign of success. Microsoft, for instance, has been <a href="https://futurism.com/artificial-intelligence/microsoft-screwed-up-windows-11-copilot">widely criticized</a> for going too far with its AI integration for Windows 11 and introducing Copilot into every corner of the operating system, even in its Notepad and Snipping Tool. It led to user backlash, the rise of the term “Microslop,” and a decision to walk back some of the changes (or at least <a href="https://www.techradar.com/computing/windows/microsoft-has-begun-stripping-out-ai-from-windows-11-but-its-already-being-criticized-for-not-going-far-enough">appear to do so</a> via a rebrand, amid <a href="https://futurism.com/artificial-intelligence/drama-microsoft-windows-12-ai">user fears</a> that Windows 12 would further embrace AI in everything). Some Microsoft users reportedly switched to the MacBook Neo, which offered a lower price point to compete with Microsoft devices and, ironically, less AI-ification. So if Ternus can use his decades at Apple, and his time working under Steve Jobs, to advance Apple’s AI systems right —&nbsp;in the thoughtful, well-designed, and simple way that Apple is known for —&nbsp;then he may be able to catch the company up in some ways.&nbsp;</p>

<p class="has-text-align-none">But besides integration plans, there’s still the basic problem of the actual AI assistant features that Apple has fallen behind on. Other AI labs have spent the past couple of years dramatically advancing agentic AI systems, which aim to perform complex and multistep tasks on users’ behalf, though there’s still room for improvement. Apple has a long-standing reputation for showing up late to a product category with a winning entry, but here, it’s simply made promises and failed to deliver.</p>

<p class="has-text-align-none"><a href="https://www.theverge.com/apple/682984/apple-punts-on-siri-updates-as-it-struggles-to-keep-up-in-the-ai-race">Last June</a> at WWDC, executives referenced Apple Intelligence and highlighted live translation features, but personalization features for Siri — first mentioned at WWDC 2024 —&nbsp;were delayed, with executives saying the rollout would happen “over the course of the next year.” <a href="https://x.com/markgurman/status/1896250347838202153">Ads ran</a> in 2024 showing Siri with capabilities that still haven’t arrived nearly two years later. Craig Federighi, Apple’s SVP of software engineering, said at the time that the updates to Siri “needed more time to reach our high quality bar, and we look forward to sharing more about it in the coming year.” Fast-forward 10 months, and there’s no official word on when the new Siri will arrive, even as WWDC 2026 quickly approaches.&nbsp;</p>

<p class="has-text-align-none">Last year, the company’s strategy seemed to be <a href="https://www.theverge.com/apple/682984/apple-punts-on-siri-updates-as-it-struggles-to-keep-up-in-the-ai-race">leaning on</a> OpenAI’s ChatGPT to fill in a few of Siri’s gaps, like integrating ChatGPT into Apple’s Image Playground and adding visual intelligence features. Executives have repeatedly said in the past they hope Apple users will be able to choose other competitors’ models to use as well, and as of January, Apple finally <a href="https://www.cnbc.com/2026/01/12/apple-google-ai-siri-gemini.html">inked its deal</a> with Google to tap Gemini for help fueling Apple’s future foundation models, potentially costing Apple <a href="https://www.bloomberg.com/news/articles/2025-11-05/apple-plans-to-use-1-2-trillion-parameter-google-gemini-model-to-power-new-siri">$1 billion</a> per year. But even that deal was potentially late —&nbsp;last April, during Google’s search monopoly trial, CEO Sundar Pichai said the agreement with Apple would hopefully be signed within months, resulting in a rollout by the end of 2025.&nbsp;</p>

<p class="has-text-align-none">Now, the question is whether the all-new, Gemini-powered Siri will roll out by WWDC 2026, or whether the debut will happen later, once Ternus is officially at the helm. On Alphabet’s February earnings call, executives <a href="https://techcrunch.com/2026/02/04/alphabet-wont-talk-about-the-google-apple-ai-deal-even-to-investors/">largely ignored</a> a question about the company’s AI partnerships, including with Apple, though Pichai said he was looking forward to Google powering “the next generation of Apple foundation models based on Gemini technology.”&nbsp;</p>

<p class="has-text-align-none">Ternus, who <a href="https://www.nytimes.com/2026/01/08/technology/apple-ceo-tim-cook-john-ternus.html">reportedly</a> has a reputation for maintaining Apple products rather than innovating new ones, will be tasked with a tall order in leading the world’s first trillion-dollar company into its new AI era —&nbsp;not only playing catch-up, but trying to get ahead of its competitors, which are already moving at breakneck speed.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic’s new cybersecurity model could get it back in the government’s good graces]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/914229/tides-turning-anthropic-trump-administration-cybersecurity-mythos-preview" />
			<id>https://www.theverge.com/?p=914229</id>
			<updated>2026-04-21T09:36:22-04:00</updated>
			<published>2026-04-17T16:14:21-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Policy" />
							<summary type="html"><![CDATA[The Trump administration has spent nearly two months fighting with AI company Anthropic. It’s dubbed the company a “RADICAL LEFT, WOKE COMPANY” full of “Leftwing nut jobs” and a menace to national security. But some of the ice may reportedly be melting between the two, thanks to Anthropic’s buzzy new cybersecurity-focused model: Claude Mythos Preview. [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Photo illustration of Dario Amodei of Anthropic." data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25469941/STK202_DARIO_AMODEI_CVIRGINIA_D.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">The Trump administration has spent nearly two months fighting with AI company Anthropic. It’s <a href="https://truthsocial.com/@realDonaldTrump/posts/116144552969293195">dubbed the company</a> a “RADICAL LEFT, WOKE COMPANY” full of “Leftwing nut jobs” and a menace to national security. But some of the ice may reportedly be melting between the two, thanks to Anthropic’s <a href="https://www.theverge.com/ai-artificial-intelligence/908114/anthropic-project-glasswing-cybersecurity">buzzy new cybersecurity-focused model</a>: Claude Mythos Preview.</p>

<p class="has-text-align-none">Anthropic’s relationship with the Pentagon <a href="https://www.theverge.com/ai-artificial-intelligence/883456/anthropic-pentagon-department-of-defense-negotiations">soured quickly</a> in late February after the company refused to budge on two red lines: its technology can’t be used for domestic mass surveillance or for lethal, fully autonomous weapons with no human in the loop. Anthropic’s tech has been used heavily by the DoD in the past, and it was the first company to have its models cleared to operate on classified military networks. The stalemate led to public insults on social media, Anthropic being categorized as a “<a href="https://www.theverge.com/ai-artificial-intelligence/890347/pentagon-anthropic-supply-chain-risk">supply chain risk</a>,” the company filing a lawsuit <a href="https://www.theverge.com/ai-artificial-intelligence/891377/anthropic-dod-lawsuit">fighting that designation</a>, and a <a href="https://www.theverge.com/ai-artificial-intelligence/902149/anthropic-dod-pentagon-lawsuit-supply-chain-risk-injunction">temporary injunction</a> halting its ban.</p>

<p class="has-text-align-none">Anthropic has recently attempted to get back in the US government’s good graces, at least in some capacity, with Mythos Preview. And <a href="https://www.axios.com/2026/04/17/anthropic-trump-administration-mythos">judging from reports</a> that Anthropic CEO Dario Amodei attended a meeting at the White House on Friday, it may be working. Anthropic confirmed the meeting. “Anthropic CEO Dario Amodei today met with senior administration officials for a productive discussion on how Anthropic and the US government can work together on key shared priorities such as cybersecurity, America’s lead in the AI race, and AI safety,” said Anthropic spokesperson Max Young. “The meeting reflected Anthropic&#8217;s ongoing commitment to engaging with the US government on the development of responsible AI. We are grateful for their time and are looking forward to continuing these discussions.”</p>

<p class="has-text-align-none">Mythos Preview was announced with major fanfare about its capabilities —&nbsp;including the ability to find security issues in virtually every large web browser and operating system. Anthropic says the model is its most powerful yet, and it’s currently only available for private access. It’s being marketed as a way to flag high-stakes vulnerabilities in some of the most-used internet infrastructure we have, so that companies like Apple, Nvidia, and JPMorgan Chase —&nbsp;which have already signed on to use it —&nbsp;can plug them up before bad actors can exploit them. The release of Mythos Preview has already reportedly sparked <a href="https://www.cnbc.com/2026/04/10/powell-bessent-us-bank-ceos-anthropic-mythos-ai-cyber.html">emergency meetings</a> between US bank leaders and Federal Reserve Chairman Jerome Powell.</p>

<div class="wp-block-vox-media-highlight vox-media-highlight">
<p class="has-text-align-none"><em>Are you a current or former AI industry employee? Contact me via Signal at haydenfield.11 on a non-work device with tips.</em></p>
</div>

<p class="has-text-align-none">The Trump administration, too, seems to be taking notice. In a release about Mythos Preview, Anthropic wrote that it had already been in “ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities.” Earlier this month, when <em>The Verge</em> <a href="https://www.theverge.com/ai-artificial-intelligence/908114/anthropic-project-glasswing-cybersecurity">asked for details</a>, Dianne Penn, a head of product management at Anthropic, confirmed that the company had “briefed senior officials in the US government about Mythos and what it can do,” and that the company is still “committed to working closely with all different levels of government.” The company declined to specify who, exactly, had been briefed.</p>

<p class="has-text-align-none">Anthropic also reportedly <a href="https://www.bloomberg.com/news/articles/2026-04-13/anthropic-hires-trump-linked-lobbying-firm-ballard-partners?embedded-checkout=true">recently</a> hired Ballard Partners, a lobbying firm linked to Trump, which has inspired more reports that a deal between Anthropic and the White House may be in the works.</p>

<p class="has-text-align-none">On Friday, <em>Axios</em> <a href="https://www.axios.com/2026/04/17/anthropic-trump-administration-mythos">reported</a> that Amodei was scheduled for a meeting with White House chief of staff Susie Wiles later that day. Describing the reasons for the meeting, a source familiar with the negotiations said “it would be grossly irresponsible for the U.S. government to deprive itself of the technological leaps that the new model presents” and that “it would be a gift to China.” The outlet also reported that “some parts of the U.S. intelligence community, plus the Cybersecurity and Infrastructure Security Agency (CISA, part of Homeland Security)” are testing Mythos Preview, and that other departments and agencies are interested.</p>

<p class="has-text-align-none">If Amodei’s meeting opens up conversations about further integrating Anthropic’s Claude into government usage across agencies, it’s possible that the DoD could shift its views on Claude accordingly as well. It would be an anticlimactic end to a bitter fight over national security — but hardly the first time the administration has suddenly reversed course.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic releases a new Opus model amid Mythos Preview buzz]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/913184/anthropic-claude-opus-4-7-cybersecurity" />
			<id>https://www.theverge.com/?p=913184</id>
			<updated>2026-04-16T12:00:23-04:00</updated>
			<published>2026-04-16T11:59:24-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="News" />
							<summary type="html"><![CDATA[Anthropic has released its most powerful “generally available” model to date: Claude Opus 4.7. The company called it a step up from Opus 4.6 for advanced software engineering tasks, particularly in complex coding areas that in the past required more hand-holding. It’s also supposed to be better at analyzing images and following instructions, and it [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/01/STKB364_CLAUDE_2_C.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Anthropic has released its most powerful “generally available” model to date: Claude Opus 4.7. </p>

<p class="has-text-align-none">The company called it a step up from Opus 4.6 for advanced software engineering tasks, particularly in complex coding areas that in the past required more hand-holding. It’s also supposed to be better at analyzing images and following instructions, and it can exhibit more “creativity” when creating slides and documents, per Anthropic.</p>

<p class="has-text-align-none">Opus 4.7 comes on the heels of <a href="https://www.theverge.com/ai-artificial-intelligence/908114/anthropic-project-glasswing-cybersecurity">Mythos Preview</a>, the buzzy cybersecurity-focused model Anthropic announced earlier this month, which the company has said is its most powerful model overall. Comparatively, Opus 4.7 is much more limited. In the model’s <a href="https://anthropic.com/claude-opus-4-7-system-card">system card</a>, Anthropic wrote that Opus 4.7 doesn’t even advance the company’s “capability frontier,” since Claude Mythos Preview scored higher “on every relevant evaluation.”</p>

<p class="has-text-align-none">For security reasons, Anthropic is only currently making Mythos Preview available privately to select partners, such as Nvidia, JPMorgan Chase, Google, Apple, and Microsoft. “We stated that we would keep Claude Mythos Preview’s release limited and test new cyber safeguards on less capable models first,” Anthropic wrote in a <a href="https://www.anthropic.com/news/claude-opus-4-7">blog post</a>. “Opus 4.7 is the first such model: its cyber capabilities are not as advanced as those of Mythos Preview (indeed, during its training we experimented with efforts to differentially reduce these capabilities).”</p>

<p class="has-text-align-none">The company said it’s releasing the new model with additional cybersecurity safeguards compared to Opus 4.6 and that findings from the deployment of those safeguards “will help us work towards our eventual goal of a broad release of Mythos-class models.”</p>

<p class="has-text-align-none">The company added that security professionals wishing to use the model for cybersecurity purposes, like vulnerability research, could join its new Cyber Verification Program, which would apparently loosen some of the safeguards Anthropic introduced for Opus 4.7.</p>

<p class="has-text-align-none">Early testers for Opus 4.7 included Anthropic customers like Intuit, Harvey, Replit, Cursor, Notion, Shopify, Vercel, and Databricks. Pricing remains the same as Opus 4.6, at $5 per million input tokens and $25 per million output tokens, Anthropic said.</p>
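At those per-token rates, rough per-request costs are simple arithmetic. A minimal Python sketch, using the pricing stated above (the example token counts are hypothetical, chosen only for illustration):

```python
# Estimate Claude Opus 4.7 API cost from token counts.
# Rates per Anthropic's stated pricing: $5 per million input tokens,
# $25 per million output tokens.
INPUT_RATE = 5.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 25.00 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical example: a 2,000-token prompt with a 500-token reply.
print(f"${estimate_cost(2_000, 500):.4f}")  # $0.0100 in + $0.0125 out = $0.0225
```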
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[Read OpenAI&#8217;s latest internal memo about beating the competition — including Anthropic]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/911118/openai-memo-cro-ai-competition-anthropic" />
			<id>https://www.theverge.com/?p=911118</id>
			<updated>2026-04-13T14:34:52-04:00</updated>
			<published>2026-04-13T12:21:08-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Report" />
							<summary type="html"><![CDATA[OpenAI’s chief revenue officer, Denise Dresser, sent a four-page memo to employees on Sunday about the company’s strategic direction, emphasizing the need to lock in users and grow its enterprise business.&#160; The memo, which was viewed by The Verge, repeatedly underlines the importance of building a moat around its AI products, to combat how easy [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="An illustration showing a computer with the OpenAI logo." data-caption="OpenAI released a report breaking down how people use ChatGPT and who they are. | Image: The Verge" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/04/STK_414_AI_CHATBOT_R2_CVirginia_B.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	OpenAI released a report breaking down how people use ChatGPT and who they are. | Image: The Verge	</figcaption>
</figure>
<p class="has-text-align-none">OpenAI’s chief revenue officer, Denise Dresser, sent a four-page memo to employees on Sunday about the company’s strategic direction, emphasizing the need to lock in users and grow its enterprise business.&nbsp;</p>

<p class="has-text-align-none">The memo, which was viewed by <em>The Verge</em>, repeatedly underlines the importance of building a moat around its AI products, to combat how easy it is for users to switch between whichever model is topping the charts on any given day or week. Dresser, who <a href="https://www.theverge.com/ai-artificial-intelligence/906965/openais-agi-boss-is-taking-a-leave-of-absence">recently</a> took over much of former COO Brad Lightcap’s duties as he transitions to a new role focused on special projects, also emphasizes the importance of focusing on enterprise clients. It’s part of the company’s recent strategy to <a href="https://www.theverge.com/ai-artificial-intelligence/902368/openai-sora-dead-ai-video-generation-competition">avoid focusing on “side quests”</a> and go all in on its biggest revenue drivers. <a href="https://www.cnbc.com/2026/04/13/openai-touts-amazon-alliance-in-memo-microsoft-limited-our-ability.html">CNBC</a> earlier reported on the memo. </p>

<p class="has-text-align-none">“Multi-product adoption makes us harder to replace,” Dresser wrote, later adding, “We should stop thinking like a company with separate product lines. We should think like a platform company with multiple entry points and one integrated enterprise offering.”</p>

<p class="has-text-align-none">Dresser also addressed the intensifying competition between OpenAI and its longtime rival Anthropic, writing that “the market is as competitive as I have ever seen it” and that though Anthropic’s “coding focus gave them an early wedge,” “you do not want to be a single-product company in a platform war.” The memo also accuses Anthropic of inflating its stated run rate and says it was a “strategic misstep” for the company to not acquire enough compute. Both OpenAI and Anthropic reportedly plan to go public this year.&nbsp;</p>

<p class="has-text-align-none">“Their story is built on fear, restriction, and the idea that a small group of elites should control AI,” Dresser wrote of Anthropic.&nbsp;</p>

<p class="has-text-align-none">OpenAI has long marketed itself as the “democratic AI” option that expands access for everyone, often implying that Anthropic and its enterprise focus do the opposite. In February, OpenAI CEO Sam Altman <a href="https://www.theverge.com/news/874084/ai-chatgpt-claude-super-bowl-ads-openai-anthropic">wrote</a>, “Anthropic serves an expensive product to rich people.”&nbsp;</p>

<p class="has-text-align-none">Read Dresser’s memo in full below.&nbsp;</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none"><strong>The System That Will Win Enterprise AI</strong></p>



<p class="has-text-align-none">As we start Q2, I want to begin where we always should: with our customers. I have been spending time with leaders across our largest enterprises, most influential startups, and key venture firms. The message is clear. People are excited about what we are building, and they want a deeper view into our roadmap so they can plan with confidence and stay ahead of the market.</p>



<p class="has-text-align-none">Enterprise AI is entering a more mature phase. Raw capability still matters, but it is no longer enough. Customers want fit: how well AI plugs into their workflows, knowledge, controls, and day-to-day operations, and how effectively it can be deployed, trusted, and improved over time. They want a system they can trust and build on.</p>



<p class="has-text-align-none">We are building that system: the best models for work, a platform for agents, deep integration with business context, and the ability to deploy and improve at scale. And customers are validating that direction in the clearest possible way. Multi-year, multi-product, nine-figure deals are rising, and existing customers are expanding as they standardize on our capabilities across more of their organizations.</p>



<p class="has-text-align-none">I am incredibly proud of how this team is showing up. We are earning trust through the depth, quality, and care we bring to the work. The opportunity ahead is massive, and our biggest constraint right now is not demand. It is capacity. That is why talent remains a top priority in Q2. We will keep hiring deliberately, keep the bar high, and keep building a team that matches the excellence our customers expect from us and we expect from each other.</p>



<p class="has-text-align-none">We have everything we need to extend our lead from here. We have the compute. We have the products. We have the customer pull. This is the moment to lean in and make the case, clearly and confidently, that OpenAI is the platform enterprises should trust to build, deploy, and scale with.</p>



<p class="has-text-align-none">Here are five customer-backed priorities I want us to focus on.</p>



<p class="has-text-align-none"><strong>1. Win the model layer for work</strong></p>



<p class="has-text-align-none">Enterprises buy business outcomes. They pay for models that help employees write faster, analyze better, code more productively, support customers more effectively, and make higher-quality decisions. They pay for higher revenue per employee, faster cycle times, lower support costs, and better execution.</p>



<p class="has-text-align-none">Spud is an important step in the intelligence foundation for the next generation of work. Early feedback from our customers is very positive. Spud is not only our smartest model yet, but it also delivers on everything that matters for high-value professional work: stronger reasoning, better understanding of intent and dependencies, better follow-through and more reliable output in production.</p>



<p class="has-text-align-none">Better model performance lifts the rest of the stack. Spud will make all of our key products significantly better. It expands the workflows we can own and gives customers another reason to consolidate around us. This is our iterative deployment strategy in practice: push the frontier, deploy it into real products, learn from real usage, and compound those lessons into better systems on the path to the super app.</p>



<p class="has-text-align-none">Our compute advantage sets us up to deliver continuous leaps in capability. Customers already feel it in real product terms: higher token limits, lower latency, and more reliable execution of complex workflows. Every step forward in compute lets us train stronger models, serve more demand, and lower the cost per unit of intelligence. That is durable business leverage.</p>



<p class="has-text-align-none"><strong>2. Win the agent platform layer</strong></p>



<p class="has-text-align-none">The market has moved from prompts to agents. That shift is a massive opportunity for us.</p>



<p class="has-text-align-none">Customers want systems that can reason, use tools, operate across workflows, and perform reliably inside real business environments. That means orchestration, control, observability, security, integration, and governance.&nbsp;</p>



<p class="has-text-align-none">Frontier allows us to own the platform layer. We need to position Frontier as the default platform for enterprise agents – the core intelligence layer enterprises use to build, deploy, manage, and scale systems.</p>



<p class="has-text-align-none">This is where our advantage can compound. Frontier ties model intelligence directly to agent performance. As our models improve, the platform gets more valuable. As the platform gets embedded, switching costs rise. As customers run more workflows through the system, OpenAI becomes harder to replace and more central to how work gets done.</p>



<p class="has-text-align-none">That is how we move from product vendor to operating infrastructure.</p>



<p class="has-text-align-none"><strong>3. Expand the market through Amazon</strong></p>



<p class="has-text-align-none">Our Microsoft partnership has been foundational to our success. But it has also limited our ability to meet enterprises where they are – for many that’s Bedrock.</p>



<p class="has-text-align-none">Since we announced the partnership at the end of February, inbound demand from our customers for this offering has been frankly staggering. We are firing on all cylinders to establish this as a scaled distribution channel.</p>



<p class="has-text-align-none">The Amazon Stateful Runtime Environment matters because it expands access and upgrades the product surface at the same time. By enabling memory, context, and continuity across interactions, we move beyond stateless model access toward systems that can operate reliably over time and across complex business processes.</p>



<p class="has-text-align-none">This will expand our market in three ways: 1. It lowers adoption friction for AWS-native customers. 2: It strengthens our position with regulated and security-sensitive buyers by running inside their AWS environment and existing governance model. 3. It further integrates our platform from model access to production runtime for long-running, multi-step agents.</p>



<p class="has-text-align-none"><strong>4. Sell the full AI-native stack</strong></p>



<p class="has-text-align-none">Customers want a platform not point solutions. That’s what we have: ChatGPT for Work is the front door for knowledge work. Codex is the system for software and agentic development. The API is the engine for embedded intelligence inside customer products and workflows. Frontier is the agent platform. The Amazon runtime extends our reach into production-grade, stateful execution.</p>



<p class="has-text-align-none">That breadth is a major strategic advantage because customers do not all start in the same place. Some start with employees. Some start with developers. Some start with internal systems. Some start with external products. Our job is to meet them wherever they enter and then expand them across the full stack.</p>



<p class="has-text-align-none">This is the flywheel we should be building around: better models drive more usage, more usage drives deeper integration, deeper integration drives multi-product adoption, and multi-product adoption makes us harder to replace.</p>



<p class="has-text-align-none">We should stop thinking like a company with separate product lines. We should think like a platform company with multiple entry points and one integrated enterprise offering.</p>



<p class="has-text-align-none"><strong>5. Own deployment</strong></p>



<p class="has-text-align-none">The biggest bottleneck in enterprise AI is no longer whether the technology works. It is whether companies can deploy it successfully and at scale.</p>



<p class="has-text-align-none">DeployCo gives us the chance to turn product demand into repeatable enterprise transformation. It will be a deployment engine that helps companies prove value faster, reduce risk, and scale adoption across the organization.</p>



<p class="has-text-align-none">This can become a force multiplier across everything else we are building. It helps customers move faster. It sharpens our feedback loops. It surfaces repeatable deployment patterns. It improves product, sales, and customer success all at once. And, alongside our Frontier Alliance partners, it gives us a serious path to scale execution across the market.</p>



<p class="has-text-align-none">The companies that win enterprise AI will not just have the best models. They will have the best ability to get those models deployed into real workflows, inside real organizations, with real measurable value. We should be the best in the world at that.</p>



<p class="has-text-align-none"><strong>A note on the competitive landscape</strong></p>



<p class="has-text-align-none">The market is as competitive as I have ever seen it. I believe that is ultimately a good thing. It means the opportunity is immense and important. However, there is no question it can be noisy, volatile and distracting at times. Competition inspires us and will make us all better and most importantly our customers will feel that benefit. To that point, as you have not heard me say many times, the number one focus should be spending time with our customers. When we spend time with our customers, listening to what their problems and ambitions are, focusing on how we can invest in them and help, everything else gets quiet and comes into focus.</p>



<p class="has-text-align-none">With that all being said, here are a few things worth keeping in mind, especially on Anthropic.</p>



<p class="has-text-align-none">● Their story is built on fear, restriction, and the idea that a small group of elites should control AI. Our positive message will win over time: build powerful systems, put in the right safeguards, expand access, and help people do more.</p>



<p class="has-text-align-none">● Their strategic misstep to not acquire enough compute is showing up in the product. Customers feel it through throttling, weaker availability, and a less reliable experience. We saw the exponential compute curve earlier, acted on it faster, and now have a real structural advantage.</p>



<p class="has-text-align-none">● Their coding focus gave them an early wedge. But you do not want to be a single-product company in a platform war. As AI spreads beyond developers into every team, workflow, and industry, that narrowness can become a real liability.</p>



<p class="has-text-align-none">● Their stated run rate is inflated. They use accounting treatment that makes revenue look bigger than it is, including grossing up rev share with Amazon and Google. Our analysis shows that this overstates their run rate by roughly $8 billion (at the current $30 stated). We report Microsoft revshare net, which is more inline with standards we would be held to as a public company.</p>



<p class="has-text-align-none"><strong>Let’s Go Build</strong></p>



<p class="has-text-align-none">Finally, one of the best things about the work we do is the people we get to do it with. I am so proud of this company and our team. It is a privilege to work with all of you and to be alive at this moment in the epicenter of the future. Lets all stay focused, work as one team and operate at the highest level of excellence and row in the same direction.&nbsp;</p>



<p class="has-text-align-none">The market is ours to win, let&#8217;s execute accordingly.</p>
</blockquote>
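<p class="has-text-align-none">The gross-versus-net distinction the memo invokes comes down to simple arithmetic. A rough sketch using the memo’s own figures — a $30 billion stated run rate and roughly $8 billion of partner revenue share — which are OpenAI’s claims, not independently verified numbers:</p>

```python
# Gross vs. net revenue recognition for a reseller-style arrangement.
# Figures follow the memo's claim about Anthropic; they are not verified.
stated_run_rate = 30.0   # billions, "grossed up" to include partner rev share
partner_rev_share = 8.0  # billions, the portion the memo says flows to partners

# Net reporting recognizes only the revenue the company keeps.
net_run_rate = stated_run_rate - partner_rev_share

print(f"gross: ${stated_run_rate:.0f}B, net: ${net_run_rate:.0f}B")
# gross: $30B, net: $22B
```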
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[Meta is reentering the AI race with a new model called Muse Spark]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/908769/meta-muse-spark-ai-model-launch-rollout" />
			<id>https://www.theverge.com/?p=908769</id>
			<updated>2026-04-09T09:09:40-04:00</updated>
			<published>2026-04-08T12:12:54-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Meta" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Meta Superintelligence Labs is launching its first model since Mark Zuckerberg spent billions overhauling the company’s AI efforts. Called Muse Spark, the model now powers the Meta AI app and the Meta AI website in the US, per the company’s announcement. In the coming weeks, Meta says, it will appear in WhatsApp, Instagram, Facebook, Messenger, [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Photo by Amelia Holowaty Krales / The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/10/257980_Meta_Ray-Ban_Display_AKrales_0312.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Meta Superintelligence Labs is launching its first model since Mark Zuckerberg <a href="https://www.theverge.com/meta/685711/meta-scale-ai-ceo-alexandr-wang">spent billions overhauling</a> the company’s AI efforts. Called Muse Spark, the model now powers the Meta AI app and the Meta AI website in the US, per the company’s announcement. In the coming weeks, Meta says, it will appear in WhatsApp, Instagram, Facebook, Messenger, and Meta’s smart glasses, as well as roll out in other countries.&nbsp;&nbsp;</p>

<p class="has-text-align-none">Like Google Gemini, which easily integrates into Google’s product suite, Meta touts Muse Spark as “purpose-built for Meta’s products.” The model, the first in a new series, will also be available to some of Meta’s partners in “private preview” via the API. The company promises the ability to run multiple AI sub-agents to handle queries better and faster, as well as support for multimodal input that includes both text and images. The latter is particularly relevant to Meta’s AI-powered camera glasses, which it’s bet on as the (<a href="https://www.theverge.com/news/869882/mark-zuckerberg-meta-earnings-q4-2025">latest</a>) future of computing. It lets users toggle between a faster “Instant” mode and a “Thinking” mode that’s supposed to deliver more thoroughly reasoned results, similar to options like <a href="https://www.theverge.com/news/619199/microsoft-copilot-free-unlimited-voice-think-deeper-open-ai-o1-reasoning-model-ai">Microsoft’s Think Deeper</a>.</p>
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/02_Nutrition.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="An image of a chatbot that someone is asking to “estimate the total calorie count” of a Bento box. The bot responds by placing an orange dot over each item and labeling them with a calorie count." title="An image of a chatbot that someone is asking to “estimate the total calorie count” of a Bento box. The bot responds by placing an orange dot over each item and labeling them with a calorie count." data-has-syndication-rights="1" data-caption="" data-portal-copyright="Image: Meta" />
<p class="has-text-align-none">Meta also highlighted that Muse Spark can answer “complex questions in science, math, and health.” Health-focused AI chatbots have been a <a href="https://www.theverge.com/report/866683/chatgpt-health-sharing-data">controversial topic</a> in recent months, as they handle sensitive personal data and can propagate misinformation. Meta said that Muse Spark’s multimodal perception is “especially valuable for health” and can “navigate health questions with more detailed responses, including some questions involving images and charts.” Meta may be looking to compete with OpenAI’s ChatGPT Health and Anthropic’s Claude for Healthcare, which both debuted in January. In its announcement, it showed its chatbot estimating a calorie count for a meal — a popular, but <a href="https://www.theverge.com/column/825219/optimizer-ai-nutrition-tracking-wellness">often hit-or-miss</a>, use of AI tech.</p>

<p class="has-text-align-none">In the future, Meta hopes the model will power new features “that cite recommendations and content people share across Instagram, Facebook, and Threads.” The company also said that it has larger models in development and hopes to open-source future versions. It describes Muse Spark as an “early data point” on the trajectory of its new Muse series.</p>

<p class="has-text-align-none">The Muse series is set to be Meta’s second major foray into powerful AI, following its Llama models. Zuckerberg revamped the company’s AI program after the delayed <a href="https://www.theverge.com/meta/645012/meta-llama-4-maverick-benchmarks-gaming">and disappointing</a> release of Llama 4 in 2025.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[The vibes are off at OpenAI]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/908513/the-vibes-are-off-at-openai" />
			<id>https://www.theverge.com/?p=908513</id>
			<updated>2026-04-11T12:31:00-04:00</updated>
			<published>2026-04-08T09:47:38-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Analysis" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Report" />
<summary type="html"><![CDATA[OpenAI is in a relatively precarious position. The company has long been a funding behemoth: just over a week ago, it closed $122 billion in funding at a post-money valuation of $852 billion. It’s potentially planning for an IPO later this year. ChatGPT’s longtime lead in consumer-facing AI led it to name-brand status akin [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Photo collage of Sam Altman in front of the OpenAI logo." data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge; Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25431704/STK201_SAM_ALTMAN_CVIRGINIA_C.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">OpenAI is in a relatively precarious position. The company has long been a funding behemoth: just over a week ago, it closed $122 billion in funding at a post-money valuation of $852 billion. It’s potentially planning for an IPO later this year. ChatGPT’s longtime lead in consumer-facing AI led it to name-brand status akin to “Kleenex” for tissues. But in recent months, a slew of executive reshufflings, discontinued projects, and other news has raised questions about how stable the company really is — and how long it may be able to stay on top.</p>

<p class="has-text-align-none">OpenAI’s current batch of public controversies started early in the year. At the end of February, the company agreed to an <a href="https://www.theverge.com/ai-artificial-intelligence/887309/openai-anthropic-dod-military-pentagon-contract-sam-altman-hegseth">apparently expansive</a> Pentagon contract that its competitor Anthropic had refused to sign out of concerns about autonomous weapons and domestic mass surveillance. The move created controversy both internally and externally, and even CEO Sam Altman <a href="https://x.com/sama/status/2028640354912923739">acknowledged</a> OpenAI had come off as “opportunistic and sloppy.”</p>

<p class="has-text-align-none">Then came the product announcements. Last month, OpenAI unexpectedly announced it would <a href="https://www.theverge.com/ai-artificial-intelligence/902368/openai-sora-dead-ai-video-generation-competition">discontinue Sora</a>, an AI video-generation app that it had planned to roll into ChatGPT. It exited its Disney partnership so abruptly that the companies had reportedly been working together <a href="https://www.reuters.com/technology/openai-set-discontinue-sora-video-platform-app-wsj-reports-2026-03-24/">just 30 minutes before</a> Disney found out about the shutdown. Last month, the company also said it was <a href="https://www.theverge.com/ai-artificial-intelligence/901293/openai-adult-mode-erotic-chatbot-shelved-indefinitely">shelving long-gestating plans</a> for the ability to sext with ChatGPT. “We cannot miss this moment because we are distracted by side quests,” OpenAI’s Fidji Simo reportedly told employees <a href="https://www.wsj.com/tech/ai/openai-chatgpt-side-projects-16b3a825">last month</a>, as the company announced it would pivot to focusing on enterprise and coding tools. Even its once-heralded Stargate data center project may have <a href="https://www.theinformation.com/articles/inside-openais-scramble-get-computing-power-stargate-stalled">largely stalled</a>.</p>

<p class="has-text-align-none">Just last Friday, the company announced a <a href="https://www.theverge.com/ai-artificial-intelligence/906965/openais-agi-boss-is-taking-a-leave-of-absence">laundry list of changes</a> to its C-suite. Simo, OpenAI’s CEO of AGI deployment — who was until recently the company’s CEO of applications — is stepping away from her role “for the next several weeks” on medical leave, with company president Greg Brockman stepping in to run the product organization and lead its super app initiative. CMO Kate Rouch decided to depart to focus on her health. Brad Lightcap decided to leave his role as OpenAI’s COO to instead take on a role “focused on special projects,” reporting directly to Altman.</p>

<p class="has-text-align-none">At the start of this week, a piece in <a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted"><em>The New Yorker</em></a> expanded on <a href="https://www.theverge.com/ai-artificial-intelligence/814876/ilya-sutskever-deposition-openai-sam-altman-elon-musk-lawsuit">years</a> <a href="https://www.cnbc.com/2023/11/17/sam-altman-leaves-openai-mira-murati-appointed-interim-boss.html">of</a> <a href="https://www.washingtonpost.com/technology/2023/11/22/sam-altman-fired-y-combinator-paul-graham/">reports</a> of Altman potentially misleading OpenAI’s board, former company executives, and even contemporaries in roles he held before cofounding OpenAI.&nbsp;</p>

<p class="has-text-align-none">And later this month, OpenAI is scheduled to defend itself in a potentially nasty court battle with cofounder Elon Musk, whose suit against the company has already revealed extensive internal communications from its early days.</p>

<div class="wp-block-vox-media-highlight vox-media-highlight">



<p class="has-text-align-none"><em>Are you a current or former OpenAI employee? Contact me via Signal at haydenfield.11 on a non-work device with tips.</em></p>
</div>

<p class="has-text-align-none">The barrage of recent changes and headlines seems to have left the company reeling — and looking to control its narrative. Last week, OpenAI announced that it was <a href="https://www.theverge.com/ai-artificial-intelligence/906022/openai-buys-tbpn">acquiring TBPN</a>, the viral online news show. Simo wrote that it made the deal to “help create a space for a real, constructive conversation about the changes AI creates—with builders and people using the technology at the center.” She wrote, “As I&#8217;ve been thinking about the future of how we communicate at OpenAI, one thing that&#8217;s become clear is that the standard communications playbook just doesn&#8217;t apply to us.”</p>

<p class="has-text-align-none">OpenAI is vulnerable, especially as it nears its potential IPO. As investors pour in billions of dollars, all eyes are on its balance sheet. CFO Sarah Friar has <a href="https://www.theinformation.com/articles/openai-ceo-cfo-diverge-ipo-timing">reportedly</a> expressed concerns that the company isn’t ready to go public as soon as Altman desires. There’s never been more pressure to generate revenue.</p>

<p class="has-text-align-none">“We have a strong leadership team focused on our biggest priorities: advancing frontier research, growing our global user base of nearly 1 billion users, and powering enterprise use cases. We&#8217;re well-positioned to keep executing with continuity and momentum,” said OpenAI spokesperson Elana Widmann in a statement to <em>The Verge</em>.</p>

<p class="has-text-align-none">In the past, Altman hadn’t expressed much concern about when and how OpenAI would turn a profit; in 2024, <a href="https://www.theinformation.com/articles/openai-projections-imply-losses-tripling-to-14-billion-in-2026">reports suggested</a> that the company didn’t expect to do so until 2029. At OpenAI’s annual Dev Day in October, Altman told reporters, “Obviously, someday we have to be very profitable, and we’re confident and patient that we will get there.” But he grew defensive during a <a href="https://www.youtube.com/watch?v=Gnl833wXRz0">podcast appearance</a> later that same month, when host Brad Gerstner told him, “The single biggest question I’ve heard all week, and hanging over the market, is ‘How can a company with $13 billion in revenue make $1.4 trillion in spend commitments?’ You’ve heard the criticism, Sam.” Altman interrupted to respond, “First of all, we’re doing well more revenue than that. Second of all, Brad, if you want to sell your shares, I&#8217;ll find you a buyer. I just&#8230; Enough.” And in December, Altman reportedly <a href="https://www.theinformation.com/articles/openai-ceo-declares-code-red-combat-threats-chatgpt-delays-ads-effort">announced</a> that the company was declaring a “code red” amid mounting competition to ChatGPT.</p>

<p class="has-text-align-none">As the pressure builds to square OpenAI’s revenue with its nearly unprecedented spending, the company is looking to put its compute behind projects with the highest profit potential. It’s attempting to catch up to leading rival Anthropic’s current popularity in coding, while also facing significant competition from Google, since Gemini is well integrated within Google’s ecosystem of apps and tools. It’s possible the company will find a way to pull ahead —&nbsp;but things may not be going as smoothly as Altman hopes.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[A new Anthropic model found security problems &#8216;in every major operating system and web browser&#8217;]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/908114/anthropic-project-glasswing-cybersecurity" />
			<id>https://www.theverge.com/?p=908114</id>
			<updated>2026-04-07T14:16:03-04:00</updated>
			<published>2026-04-07T14:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="News" />
							<summary type="html"><![CDATA[Anthropic is debuting a new AI model as part of a cybersecurity partnership with Nvidia, Google, Amazon Web Services, Apple, Microsoft, and other companies. Project Glasswing, as it’s called, is billed as a way for large companies, and potentially even the government, to flag vulnerabilities in their systems with virtually no human intervention. Anthropic is [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Photo collage showing mouse cursors circling like sharks in a tablet screen." data-caption="" data-portal-copyright="The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/04/STK461_INTERNET_CHILD_SAFETY_Stock_B_CVirginia.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Anthropic is debuting a new AI model as part of a cybersecurity partnership with Nvidia, Google, Amazon Web Services, Apple, Microsoft, and other companies. Project Glasswing, as it’s called, is billed as a way for large companies, and potentially even the government, to flag vulnerabilities in their systems with virtually no human intervention.</p>

<p class="has-text-align-none">Anthropic is offering its launch partners access to Claude Mythos Preview, a new general-purpose model that it’s not currently planning to publicly release due to security concerns. Newton Cheng, the cyber lead for Anthropic’s frontier red team, told <em>The Verge</em> that the model will ideally give cyber defenders a “head start” against adversaries. The partners will use the model to analyze their system to spot high-stakes vulnerabilities and help patch them up. Access is restricted to keep those same adversaries from using it to find weak points and conduct attacks.</p>

<p class="has-text-align-none">Though Claude Mythos Preview wasn’t specifically trained for cybersecurity purposes, Anthropic said in a release that the model’s “strong agentic coding and reasoning skills” are behind its cybersecurity advances. Cheng declined to share specific details of the model’s cybersecurity successes beyond the company’s <a href="https://red.anthropic.com/2026/mythos-preview/">publicly released examples</a>, but Anthropic’s blog post said that in recent weeks, Mythos Preview has flagged “thousands of high-severity vulnerabilities, including some in every major operating system and web browser.” The post doesn’t mention keeping humans in the loop for the model’s cybersecurity sweeps; in fact, it highlights that the model identified vulnerabilities “and develop[ed] many related exploits — entirely autonomously, without any human steering.”</p>

<p class="has-text-align-none">Claude Mythos Preview’s existence was first reported <a href="https://fortune.com/2026/03/26/anthropic-says-testing-mythos-powerful-new-ai-model-after-data-leak-reveals-its-existence-step-change-in-capabilities/">last month</a> in a data leak, which Anthropic attributes to human error. Dianne Penn, a head of product management at Anthropic, told <em>The Verge</em> in an interview that the company is “taking steps in terms of solidifying our processes … That was not related to software vulnerabilities in any way.”&nbsp;</p>

<p class="has-text-align-none">Mythos Preview will be privately available to the company’s Glasswing partners, which also include JPMorgan Chase, Broadcom, Cisco, CrowdStrike, the Linux Foundation, and Palo Alto Networks, plus about 40 other organizations that maintain or build software infrastructure. For now, Anthropic will help subsidize the cost of using it, committing up to $100 million in usage credits plus $4 million in direct donations to the Linux Foundation and the Apache Software Foundation, Cheng said. In the long term, as Anthropic and other AI companies face pressure to turn a profit, the program could evolve into a paid service that provides a new revenue stream —&nbsp;if it works well enough for companies to keep using it.</p>

<p class="has-text-align-none">Despite its <a href="https://www.theverge.com/ai-artificial-intelligence/902149/anthropic-dod-pentagon-lawsuit-supply-chain-risk-injunction">highly public recent clash</a> with the Trump administration, Anthropic also said in the release that it has been in “ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities.” When <em>The Verge</em> asked what that meant, Penn confirmed that the company had “briefed senior officials in the US government about Mythos and what it can do,” and that the company is still “committed to working closely with all different levels of government.” Cheng said Anthropic is “engaged with” the government but declined to say exactly whom the company had briefed.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[OpenAI’s AGI boss is taking a leave of absence]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/906965/openais-agi-boss-is-taking-a-leave-of-absence" />
			<id>https://www.theverge.com/?p=906965</id>
			<updated>2026-04-03T18:55:16-04:00</updated>
			<published>2026-04-03T16:22:59-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="OpenAI" />
							<summary type="html"><![CDATA[OpenAI is undergoing another round of C-suite changes, according to an internal memo viewed by The Verge.&#160; Fidji Simo, OpenAI’s CEO of AGI deployment — who was until recently the company’s CEO of applications — says in the memo that she will be stepping away on medical leave “for the next several weeks” due to a neuroimmune [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: Bloomberg via Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/gettyimages-1239231852.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">OpenAI is undergoing another round of C-suite changes, according to an internal memo viewed by <em>The Verge</em>.&nbsp;</p>

<p class="has-text-align-none">Fidji Simo, OpenAI’s CEO of AGI deployment — who was until recently the company’s CEO of applications — says in the memo that she will be stepping away on medical leave “for the next several weeks” due to a neuroimmune condition. While she’s out, OpenAI president Greg Brockman will be in charge of product, including leading OpenAI’s super app efforts. On the business side, CSO Jason Kwon, CFO Sarah Friar, and CRO Denise Dresser will take charge. </p>

<p class="has-text-align-none">OpenAI’s CMO, Kate Rouch, has also decided to step down in order to focus on her health, according to Simo. Gary Briggs will temporarily step in to replace Rouch, reporting to Kwon, and Briggs and Kwon, along with Rouch, will lead the search for her replacement. Rouch “plans to return to a different, more narrowly scoped role when her health allows,” Simo says.</p>

<p class="has-text-align-none">Brad Lightcap, OpenAI’s COO, has also decided to step down from his role and transition into a new one “focused on special projects” and reporting to CEO Sam Altman, per the memo. Dresser will step in to take over much of his work, reporting to Simo, but two areas that Lightcap oversaw,&nbsp;government and “OpenAI for Countries,”&nbsp;will move into OpenAI’s strategy organization instead.&nbsp;</p>

<p class="has-text-align-none">The news comes after a few months of public relations setbacks for the company. It sparked controversy <a href="https://www.theverge.com/ai-artificial-intelligence/887309/openai-anthropic-dod-military-pentagon-contract-sam-altman-hegseth">both internally and externally</a> after signing on to new terms of use with the Pentagon, and it had to <a href="https://www.theverge.com/ai-artificial-intelligence/902368/openai-sora-dead-ai-video-generation-competition">drop Sora</a>, its AI video generation tool, in order to devote compute and other resources to <a href="https://www.theverge.com/report/874308/anthropic-claude-code-opus-hype-moment">catching up with competitors</a> in enterprise and coding tools. OpenAI’s chief communications officer, Hannah Wong, departed her post in January.</p>

<p class="has-text-align-none">Just yesterday, OpenAI also <a href="https://www.theverge.com/ai-artificial-intelligence/906022/openai-buys-tbpn">announced</a> it was purchasing viral online talk show TBPN, and Simo wrote in a memo about that announcement that the company wants to “help create a space for a real, constructive conversation about the changes AI creates.”&nbsp;</p>

<p class="has-text-align-none">In a statement to <em>The Verge</em>, OpenAI spokesperson Elana Widmann said, “We have a strong leadership team focused on our biggest priorities: advancing frontier research, growing our global user base of nearly 1 billion users, and powering enterprise use cases. We&#8217;re well-positioned to keep executing with continuity and momentum.”&nbsp;</p>

<p class="has-text-align-none">Here is the full text of Simo’s memo:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">Hi team,</p>



<p class="has-text-align-none">I hope you are all having a great break. I wanted to share three updates in the AGI Deployment org with you today.</p>



<p class="has-text-align-none">First, Brad has decided to transition into a new role focused on special projects, including our DeployCo effort, reporting to Sam. He’s been our go-to for complex deals and investments across the company, and this shift allows him to focus all his energy there. Thank you so much, Brad, for everything you’ve done as COO. We’re all deeply grateful for what you’ve built and driven.</p>



<p class="has-text-align-none">Denise will step in to take over Brad’s scope, with the exception of our government and OpenAI for Countries work, which will be brought into our Strategy org. Denise will report directly to me. Denise has decades of enterprise experience, including several senior roles at Salesforce, most recently as CEO of Slack. She is the perfect person to lead all of our commercial teams into the next chapter. Please join me in congratulating Denise on her new role; I couldn’t be more excited to partner even more closely with her.</p>



<p class="has-text-align-none">Second, Kate has decided to step down as CMO to focus on her cancer recovery, and plans to return to a different, more narrowly scoped role when her health allows. Gary will lead Marketing until we hire a new CMO, and will report to Jason. Gary and Jason will lead the search for Kate’s replacement, with Kate’s help while she&#8217;s on leave. We are so grateful to Kate for having built up an amazing marketing team in record time and having made our brands and products shine on the biggest stages like the Super Bowl. While we will miss her brilliance leading marketing, and while it was an agonizing decision for her, I am so glad that she&#8217;s focusing on her health. Please join me in sending her all the healing vibes.</p>



<p class="has-text-align-none">Third, a personal update. I have to take medical leave for the next several weeks. I have done everything possible to try to avoid it, but sadly my body isn’t cooperating.</p>



<p class="has-text-align-none">As I shared when I joined, I had a relapse of my neuroimmune condition a few weeks before starting the job. It’s been a bit of a rollercoaster since, and the last month has been particularly rough health-wise. For my entire time here, I’ve postponed medical tests and new therapies to stay completely focused on the job and not miss a single day of work. I took time off for the first time two weeks before the break for some medical tests, and it&#8217;s now clear that I’ve pushed a little too far and I really need to try new interventions to stabilize my health.</p>



<p class="has-text-align-none">The timing is maddening because we have such an exciting roadmap ahead that the team is executing on, and I hate to miss even a minute of it. But the company is in great hands; we have an excellent leadership team that’s ready to step up. While I’m out, Greg will handle product; we have a great strategy to execute on and everyone is focused on this. On the business side, Jason, Sarah, and Denise will hold down the fort. Really grateful to them for giving me the space to get back on my feet.</p>



<p class="has-text-align-none">I can’t wait to be back in the arena with you all soon.</p>



<p class="has-text-align-none">Much love to you all.</p>
</blockquote>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[OpenAI just bought TBPN]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/906022/openai-buys-tbpn" />
			<id>https://www.theverge.com/?p=906022</id>
			<updated>2026-04-02T14:21:18-04:00</updated>
			<published>2026-04-02T13:40:07-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="OpenAI" />
							<summary type="html"><![CDATA[OpenAI has purchased TBPN, an online talk show that often interviews AI executives and other tech leaders. The show goes live every weekday at 2PM PT, often for a three-hour duration, counting OpenAI CEO Sam Altman, as well as executives from Meta, Microsoft, Palantir, and Andreessen Horowitz, among its past guests, and Bloomberg, CNBC, and [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/03/STK155_OPEN_AI_CVirginia__C.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">OpenAI has purchased <a href="https://www.tbpn.com/">TBPN</a>, an online talk show that often interviews AI executives and other tech leaders. The show goes live every weekday at 2PM PT, often running three hours; its past guests include OpenAI CEO Sam Altman and executives from Meta, Microsoft, Palantir, and Andreessen Horowitz, and it counts Bloomberg, CNBC, and Fox Business as competitors.</p>

<p class="has-text-align-none">TBPN’s livestream is available on both <a href="https://x.com/tbpn">X</a> and <a href="https://www.youtube.com/@TBPNLive">YouTube</a>, though most of its audience watches on X. OpenAI’s purchase comes as a lawsuit between Altman and Elon Musk, who was a co-founder of OpenAI before splitting from the project and now owns X, is headed to trial later this month.</p>

<p class="has-text-align-none">TBPN host John Coogan wrote <a href="https://x.com/johncoogan/status/2039756493621542915?s=20">on X</a>, “This is a full circle moment for me as I’ve worked with [Altman] for well over a decade. He funded my first company in 2013,” and the show started today’s <a href="https://x.com/i/broadcasts/1AGRnaYrwoVGl">live broadcast</a> by focusing on the acquisition.</p>
<div class="youtube-embed"><iframe title="OpenAI Acquires TBPN" src="https://www.youtube.com/embed/V78F9fA8Viw?rel=0" allowfullscreen allow="accelerometer *; clipboard-write *; encrypted-media *; gyroscope *; picture-in-picture *; web-share *;"></iframe></div>
<p class="has-text-align-none">TBPN averages about 70,000 viewers per episode, and it generated more than $5 million in advertising revenue this year, with projections to draw in more than $30 million in 2026 revenue, according to <em><a href="https://www.wsj.com/cmo-today/openai-buys-tech-industry-talk-show-tbpn-484c01c5">The Wall Street Journal</a></em>.</p>

<p class="has-text-align-none">OpenAI’s reasoning for purchasing the show involved “accelerating the global conversation around AI,” according <a href="https://openai.com/index/openai-acquires-tbpn/">to a memo sent around the company</a> Thursday by Fidji Simo, its CEO of AGI deployment. Simo writes, “As I&#8217;ve been thinking about the future of how we communicate at OpenAI, one thing that&#8217;s become clear is that the standard communications playbook just doesn&#8217;t apply to us … With the mission of bringing AGI to the world comes a responsibility to help create a space for a real, constructive conversation about the changes AI creates—with builders and people using the technology at the center.”</p>

<p class="has-text-align-none">The TBPN team will help with OpenAI’s corporate comms and marketing, but Simo writes that it will retain “editorial independence” when it comes to running programming and choosing guests. The team will operate under OpenAI’s Strategy organization and report to its VP of global policy, Chris Lehane.</p>

<p class="has-text-align-none">The acquisition also comes at a time when OpenAI’s public image has taken some hits. Although the company recently closed a $122 billion funding round at a post-money valuation of $852 billion, it’s still reeling from internal and external controversy after Altman <a href="https://www.theverge.com/ai-artificial-intelligence/887309/openai-anthropic-dod-military-pentagon-contract-sam-altman-hegseth">signed a deal with the Department of Defense</a>, even as Anthropic publicly battles the Pentagon. OpenAI is also under more pressure than ever to generate revenue ahead of its reported plans to go public this year, and recently announced it would <a href="https://www.theverge.com/ai-artificial-intelligence/902368/openai-sora-dead-ai-video-generation-competition">shut down the Sora video generator</a> in order to devote compute and other resources to enterprise and coding tools.</p>

<p class="has-text-align-none">Jordi Hays, co-host of TBPN, wrote <a href="https://x.com/jordihays/status/2039756490387624327?s=20">on X</a>, “The world is changing quickly but TBPN will stay the same. Live every weekday just with a lot more resources.” </p>
						]]>
									</content>
			
					</entry>
	</feed>
