<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Alex Heath | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2026-01-25T15:19:39+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/author/alex-heath" />
	<id>https://www.theverge.com/authors/alex-heath/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/authors/alex-heath/rss" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Alex Heath</name>
			</author>
			
			<title type="html"><![CDATA[AI labs wage a reputational knife fight at Davos]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/column/866573/ai-labs-wage-a-reputational-knife-fight-at-davos" />
			<id>https://www.theverge.com/?p=866573</id>
			<updated>2026-01-25T10:19:39-05:00</updated>
			<published>2026-01-23T09:40:00-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Column" /><category scheme="https://www.theverge.com" term="Sources" />
							<summary type="html"><![CDATA[This is an excerpt of Sources by Alex Heath, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week. The leaders of the three preeminent frontier AI labs spent this week at the World Economic Forum in Davos, Switzerland, taking shots at each other like candidates in a presidential [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="Demis Hassabis, chief executive officer of DeepMind Technologies Ltd., during a panel session at the World Economic Forum (WEF) in Davos, Switzerland. | Image: Bloomberg via Getty Images" data-portal-copyright="Image: Bloomberg via Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/01/gettyimages-2194484502.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Demis Hassabis, chief executive officer of DeepMind Technologies Ltd., during a panel session at the World Economic Forum (WEF) in Davos, Switzerland. | Image: Bloomberg via Getty Images	</figcaption>
</figure>
<p class="has-text-align-none"><em>This is an excerpt of <a href="https://sources.news/" target="_blank" rel="noreferrer noopener">Sources by Alex Heath</a>, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week.</em></p>

<p class="has-text-align-none">The leaders of the three preeminent frontier AI labs spent this week at the World Economic Forum in Davos, Switzerland, taking shots at each other like candidates in a presidential primary.</p>

<p class="has-text-align-none">I helped start the news cycle. During an interview on Tuesday, <a href="https://sources.news/p/googles-ai-boss-no-plans-for-ads">I asked</a> Google DeepMind CEO Demis Hassabis about OpenAI&#8217;s decision to test ads in ChatGPT. &#8220;It&#8217;s interesting they&#8217;ve gone for that so early,&#8221; he said. &#8220;Maybe they feel they need to make more revenue.&#8221;</p>

<p class="has-text-align-none">The next day, Anthropic CEO Dario Amodei piled on during an interview I watched at The Wall Street Journal House in Davos. &#8220;We don&#8217;t need to monetize a billion free users because we&#8217;re in some death race with some other large player,&#8221; he said. He also teased an upcoming essay focused on the &#8220;bad things&#8221; AI could bring — a dark counterpart to his optimistic <a href="https://www.darioamodei.com/essay/machines-of-loving-grace">&#8220;Machines of Loving Grace&#8221; essay</a> from last year. During another Davos appearance, he compared the US allowing Nvidia to sell GPUs to China to &#8220;selling nuclear weapons to North Korea.&#8221;</p>

<figure class="wp-block-pullquote"><blockquote><p> “We don&#8217;t need to monetize a billion free users because we&#8217;re in some death race with some other large player.”</p></blockquote></figure>

<p class="has-text-align-none">OpenAI&#8217;s retort came from Chris Lehane, its head of policy and perhaps the most formidable political operator in Silicon Valley. Lehane earned the nickname &#8220;master of disaster&#8221; in the Clinton White House, where he specialized in opposition research and crisis management. At Airbnb, he helped the company survive regulatory battles that threatened its existence. Now he&#8217;s the most high-profile policy chief of any AI lab and is applying tactics from his campaigning days to the AI race.</p>

<p class="has-text-align-none">When I sat down with Lehane for breakfast on Thursday morning near the main Promenade in Davos, he was ready to punch back. In response to Hassabis&#8217; ad comments, Lehane pointed to the obvious irony. &#8220;You do have to pay for compute if you&#8217;re going to give people access,&#8221; he told me. &#8220;I&#8217;m happy to have that conversation with the biggest online advertising platform in the world every day, seven days a week.&#8221; He also called Amodei’s comments &#8220;elitist&#8221; and &#8220;undemocratic.&#8221;</p>

<p class="has-text-align-none">&#8220;You&#8217;ll often have someone who is trying to move up from the second tier say things that are provocative, because it creates a feedback loop,&#8221; he told me between bites of scrambled eggs. &#8220;That gets you some attention. My experience in politics is that it often ends up being short-lived because ultimately, if you&#8217;re saying these things, people are going to hold you accountable to your actual solutions. If we&#8217;re going to lose a big chunk of jobs [to AI], what are you actually doing to address it, particularly if you&#8217;re raising these questions, right?”</p>

<p class="has-text-align-none">&#8220;The people making those critiques are often not focused on how to make this technology broadly accessible,&#8221; he continued. &#8220;They tend to come from a background that focuses almost exclusively on enterprise use cases. That&#8217;s a very elitist approach.&#8221;</p>

<figure class="wp-block-pullquote"><blockquote><p>“You&#8217;ll often have someone who is trying to move up from the second tier say things that are provocative, because it creates a feedback loop. That gets you some attention.”</p></blockquote></figure>

<p class="has-text-align-none">Reality is more nuanced than the jabs the AI labs are trading. OpenAI is aggressively trying to take Anthropic’s enterprise AI business, as is Google. And while it’s true that ChatGPT is the most widely used chatbot, recasting its ad push as democratic virtue, rather than a financially motivated move to finally monetize most of ChatGPT’s usage, is a nice bit of spin.</p>

<p class="has-text-align-none">During our conversation, Lehane kept returning to his political framing. Being at Davos, he told me, felt &#8220;a little bit like walking downtown” in Manchester, New Hampshire, before a primary race: the weather, the signs everywhere, and the campaigns descending into one compressed environment, all trying to get attention.</p>

<p class="has-text-align-none">&#8220;We have the front-runner status,&#8221; Lehane said. &#8220;Even if the front-runner started off as a dark horse, we&#8217;ve now established ourselves on the basis of our innovation. And the others are all trying to position off of that.&#8221;</p>

<p class="has-text-align-none">After my conversations with AI leaders this week in Davos, I came away with the impression that the industry has collectively decided to gang up on OpenAI. Hassabis and Amodei praised each other onstage during an official WEF panel this week titled <a href="https://www.youtube.com/watch?v=NnVW9epLlTM">&#8220;The Day After AGI.&#8221;</a>&nbsp;</p>

<p class="has-text-align-none">&#8220;I think the thing we actually have in common is that both companies that are led by researchers who focus on the models, who focus on solving important problems in the world,” Amodei said during the panel. “I think those are the kind of companies that are going to succeed going forward, and I think we share that between us very much.&#8221; (Sam Altman skipped Davos this year and is <a href="https://www.bloomberg.com/news/articles/2026-01-21/openai-s-altman-meets-mideast-investors-for-50-billion-round">reportedly</a> in the Middle East raising tens of billions more dollars.)&nbsp;</p>

<p class="has-text-align-none">Meanwhile, OpenAI’s rivals have told me they’re particularly annoyed by Altman’s aggressive attempts to shore up AI capacity, and some are frustrated at getting boxed out of deals by an unprofitable company that hasn’t yet shown it has the revenue to pay for the eye-popping commitments it’s making.</p>

<p class="has-text-align-none">With hundreds of billions of dollars at stake and the race to AGI accelerating, I expect the rhetoric to get more heated this year. Lehane told me campaigns get nastier as Election Day approaches. If he&#8217;s right about the analogy, we&#8217;re still in the early primaries.</p>

						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Alex Heath</name>
			</author>
			
			<title type="html"><![CDATA[‘Sideshow’ concerns and billionaire dreams: What I learned from Elon Musk’s lawsuit against OpenAI]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/column/863319/highlights-musk-v-altman-openai" />
			<id>https://www.theverge.com/?p=863319</id>
			<updated>2026-01-17T08:28:39-05:00</updated>
			<published>2026-01-16T08:15:00-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Column" /><category scheme="https://www.theverge.com" term="Elon Musk" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Sources" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[This is an excerpt of Sources by Alex Heath, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week. Elon Musk first sued OpenAI in February 2024. Despite OpenAI’s repeated attempts to throw it out, the case is now headed to a jury trial on April 27th in Northern [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25431704/STK201_SAM_ALTMAN_CVIRGINIA_C.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none"><em>This is an excerpt of <a href="https://sources.news/" target="_blank" rel="noreferrer noopener">Sources by Alex Heath</a>, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week.</em></p>

<p class="has-text-align-none">Elon Musk first sued OpenAI in February 2024. Despite OpenAI’s repeated attempts to throw it out, the case is now headed to a jury trial on April 27th in Northern California federal court.</p>

<p class="has-text-align-none">Musk’s main allegation is that OpenAI and its leaders abandoned the company’s original nonprofit mission that he funded. In turn, OpenAI has treated Musk’s claims as sour grapes. U.S. District Judge Yvonne Gonzalez Rogers recently decided that the case warranted going to trial,<a href="https://apnews.com/article/elon-musk-openai-fraud-sam-altman-ee5bfbc14c2be20906886a9ae1d2cb20"> saying</a> in court that “part of this is about whether a jury believes the people who will testify and whether they are credible.”</p>

<p class="has-text-align-none">Last week, thousands of pages of evidence from the case were unsealed, including partial 2025 depositions of most of the key players involved: Sam Altman, Ilya Sutskever, Greg Brockman, Mira Murati, and Satya Nadella, along with ex-board members Helen Toner and Tasha McCauley — both of whom played key roles in the 2023 firing of Altman.</p>

<p class="has-text-align-none">Bits and pieces of this evidence have started trickling out in recent days, such as the news that Sutskever owned a whopping $4 billion in vested OpenAI shares when Altman was briefly fired two years ago. Altogether, the unsealed evidence offers a fascinating look not only at OpenAI’s early days but also at the circumstances surrounding Altman’s firing and Microsoft’s complex relationship with OpenAI.</p>

<p class="has-text-align-none">I’ve been covering OpenAI in depth for a while, and I closely reported on the whirlwind few days when Altman was fired and then rehired in late 2023. It’s through that lens that I’ve pulled the highlights below from the evidence in <em>Musk v. Altman</em>:</p>

<h2 class="wp-block-heading">Sutskever had early concerns about treating open-source AI as a “side show.”</h2>

<p class="has-text-align-none">In 2022, OpenAI’s leaders seemed quite concerned about the prominence of open-source lab Stability AI, and Sutskever voiced his worry over text with Murati and others:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none"><strong>Sutskever:</strong> My trepidation around open source is that we’re treating it as a side show, eg def not going far enough to really hurt stability</p>



<p class="has-text-align-none"><strong>Murati:</strong> We’re missing the opportunity to set standards with this massive growing group of devs, people are hungry to build things and we should lean in and bring our tech to as many people as possible, long term maximize our chance of maintaining lead, reducing competition</p>



<p class="has-text-align-none">But if we do everything to get this in a couple of weeks at any cost out bc we heard stability is open sourcing similar model, that’s not in line at all with my motivations</p>
</blockquote>

<h2 class="wp-block-heading">OpenAI leaders were divided over early investor Reid Hoffman’s decision to start a rival AI lab, Inflection.</h2>

<p class="has-text-align-none">They were also already considering prohibiting investors from backing competing labs. From an October 2022 exchange:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none"><strong>Sutskever: </strong>I guess I just felt betrayed by him founding a direct competitor while simultaneously telling me that “I could not possibly imagine you’d find it objectionable”</p>



<p class="has-text-align-none"><strong>Altman: </strong>here’s how id summarize my thoughts on this:</p>



<p class="has-text-align-none">pros: he supported us in a moment where no one else would and it was pretty existential&#8211;i think openai would have been pretty fucked if he hasn’t stepped up then. also, he was instrumental to getting the first MSFT deal done, and has generally been quite helpful with MSFT related stuff he is generally a good board member.</p>



<p class="has-text-align-none">cons: he is very motivated by `collecting’ status. although i personally think he cares much more about openai than inflection, he was blinded enough by the startup of being able to call himself the cofounder of a company he made an uncareful decision.</p>



<p class="has-text-align-none">also, at this point, i think at this point openai has the leverage to ask for a soft promise for new investors not to invest in competitors, but only a select few companies ever get to do that)</p>



<p class="has-text-align-none"><strong>Brockman:</strong> oh also an aside, after taking to @<strong>Sam Altman</strong>, I’m planning to meet <strong>Patrick Collison</strong> tmrw and demo dv3. Will ask if he’d be interested in participating in the tender under the condition of not investing in AGI/big model competitors</p>
</blockquote>

<h2 class="wp-block-heading">Brockman wrote in his diary that he wanted to be a billionaire.</h2>

<p class="has-text-align-none">From his deposition:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none"><em>Q: Why did you write “Financially, what will take me to 1 billion?”</em></p>



<p class="has-text-align-none">A: I think if we were going to do a for-profit entity, that I started to think about what would be motivating financial reward in that case as a secondary consideration.</p>



<p class="has-text-align-none"><em>Q: What was the primary consideration?</em></p>



<p class="has-text-align-none">A: Primary consideration was would we be able to pursue and achieve the mission.</p>



<p class="has-text-align-none"><em>Q: How important was the secondary consideration to you?</em></p>



<p class="has-text-align-none">A: The second consideration definitely mattered.</p>



<p class="has-text-align-none"><em>Q: At this point, did you aspire to be a billionaire?</em></p>



<p class="has-text-align-none">A: My primary motivation was to the mission.</p>



<p class="has-text-align-none"><em>Q: Was your secondary motivation to be a billionaire?</em></p>



<p class="has-text-align-none">A: I believe that as a &#8212; one thing I was definitely motivated by was the idea — I definitely had as a motivation that, yeah, potentially getting to $1 billion.</p>



<p class="has-text-align-none"><em>Q: So we know you achieved that goal at some point. Do you know precisely what day that happened?</em></p>



<p class="has-text-align-none">A: I do not know what day precisely that happened.</p>



<p class="has-text-align-none"><em>Q: When was the first day you realized you had surpassed that goal?</em></p>



<p class="has-text-align-none">A: I do not know what day I would say my, at least on paper, net worth would’ve exceeded 1 billion.</p>
</blockquote>

<h2 class="wp-block-heading">Nadella was worried about Microsoft’s position in AI when he started looking at OpenAI.</h2>

<p class="has-text-align-none">From his deposition:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none"><em>Q: Did you feel that your progress was moving more slowly than you had liked?</em></p>



<p class="has-text-align-none">A: I mean, always as a CEO of a company, I feel my job is to sort of be dissatisfied with the rate of progress at all times. And so “yes” would be the answer, which is both in the absolute sense, which is, can we build products that are more capable in any particular domain, and also, you know, vis-à-vis competition.</p>



<p class="has-text-align-none">There were others achieving things that we looked at and said, “Hey, that’s great, and so how can we make sure we are competitive with it.”</p>
</blockquote>

<h2 class="wp-block-heading">Nadella almost wrote a book about AI called <em>An Inflection Point</em>.</h2>

<p class="has-text-align-none">According to an exhibit filed in the case, it was co-written with Marco Iansiti and was in development in 2023. From the first chapter:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">On Wednesday, August 24, 2022, with the Pacific Northwest summer showing all of its beauty, Bill Gates hosted a dinner at his home on Lake Washington, just a few miles from the Microsoft campus. No longer a Microsoft board member or even Microsoft’s largest shareholder, Bill remained the iconic co-founder and trusted advisor of the company’s senior technical leaders. Satya suggested the gathering, which included Chief Technology Officer, Kevin Scott, and a handful of top researchers. Food and drinks would be served, but the main entrée was a hush-hush demo by OpenAI founder Sam Altman of a forthcoming release of ChatGPT powered by GPT-4, an AI built on Large Language Models (LLMs). Bill had long encouraged researchers to develop a truly accomplished AI assistant but had voiced his skepticism about this particular approach.</p>
</blockquote>

<h2 class="wp-block-heading">Microsoft beat out Amazon when it initially started working with OpenAI.</h2>

<p class="has-text-align-none">Musk was opposed to working with <strong>Jeff Bezos</strong> and wrote the following in an early email to Altman: “I think Jeff is a bit of a tool and Satya is not, so I slightly prefer Microsoft, but I hate their marketing dept.” Altman responded that Amazon had “started really dicking us around.”</p>

<h2 class="wp-block-heading">The upside on Microsoft’s initial $1 billion investment in OpenAI was capped at $500 billion.</h2>

<p class="has-text-align-none">From a filing written by Musk’s lawyers:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">In November 2018, after dinner with Altman, Scott told Nadella that OpenAI’s new corporate structure offered both “a commercial vehicle for monetizing Open AI IP” and investment returns “capped at $500B.” Altman claimed the nonprofit would eventually benefit because — though OpenAI had yet to make a single dollar in returns — “[i]f [OpenAI] ever [does] get to $500B in returns, the balance over that goes directly to the 501(c)3.”</p>



<p class="has-text-align-none">Microsoft’s board initially approved a capital investment of $2 billion. But ultimately, it decided to limit its initial investment to $1 billion in the hopes that a smaller investment would “press OpenAI to commercialize,” in direct contravention of the nonprofit’s stated founding principles. In exchange for its investment, Microsoft received a convertible limited partnership interest and rights to OpenAI’s profits, with returns “capped” at 2000% of its $1 billion investment.</p>



<p class="has-text-align-none">Microsoft’s CFO noted in an internal email that the “cap is actually larger than 90% of public companies,” and the limit on Microsoft’s profits is not “terribly constraining nor terribly altruistic.” It was, in fact, “a good investment.” At Microsoft’s request, OpenAI agreed to keep any mention of Microsoft’s promised 2000% return on its investment out of its public announcement.</p>
</blockquote>

<h2 class="wp-block-heading">The second update to Microsoft’s partnership with OpenAI in 2021 included another $2 billion investment that wasn’t reported and came with a lower upside.</h2>

<p class="has-text-align-none">From a filing written by Musk’s lawyers:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">In March 2021, Microsoft quietly invested another $2 billion in OpenAI. Neither OpenAI nor Microsoft publicly announced the investment, which was subject to a lower 6x return multiple.</p>



<p class="has-text-align-none">In place of its 2019 license to a single OpenAI model, Microsoft secured rights to commercialize any OpenAI model developed during the term of the agreement (except AGI). Facilitating its commercial use of OpenAI’s IP, Microsoft was permitted to embed up to ten of its employees on-site at OpenAI.</p>



<p class="has-text-align-none">Anticipating increased product commercialization, Microsoft and OpenAI agreed to share any resulting revenue.</p>



<p class="has-text-align-none">Just three months later, in June 2021, Microsoft released GitHub CoPilot — its first product incorporating OpenAI’s technology.</p>
</blockquote>

<h2 class="wp-block-heading">Microsoft’s next $10 billion investment in OpenAI came with pressure from Nadella to go after the enterprise market and with more strings attached.</h2>

<p class="has-text-align-none">From a filing written by Musk’s lawyers:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">Prodding OpenAI to accelerate its own product development, Microsoft told Altman that OpenAI needed to generate $100 million in revenues to secure the next $10 billion commitment from Microsoft. To meet that goal, OpenAI expanded the team responsible for taking products to market and tried to expand its “enterprise business.”</p>



<p class="has-text-align-none">In the summer of 2022, OpenAI began negotiating with Microsoft a new $10 billion investment. That November, OpenAI released ChatGPT. It was an instant hit. Nadella urged Altman to release a paid version and persistently checked on the progress of its commercialization.</p>



<p class="has-text-align-none">Over the next several months, OpenAI secured Microsoft’s $10 billion investment, and the parties again amended the JDCA. OpenAI also changed its corporate structure.</p>



<p class="has-text-align-none">The 2023 agreement “cap[ped]” Microsoft’s return on this investment at 600%, or $60 billion to start, but increased Microsoft’s profit “cap” by 20% per year. Microsoft would receive 49% of OpenAI’s profits, while the OpenAI nonprofit entity would recover just 2% of OpenAI’s profits — at least until all outside investors were paid out their investment returns, valued in total at $261 billion.</p>



<p class="has-text-align-none">Underscoring the profit-focused aim of the partnership, the 2023 JDCA was specifically structured to “remove the impediments in commercialization.”</p>



<p class="has-text-align-none">Microsoft negotiated expanded IP rights to include all OpenAI IP developed before or during the term of the agreement (excluding AGI), and the right to embed up to 20 employees at OpenAI.</p>



<p class="has-text-align-none">Finally, Microsoft and OpenAI established an 80%-20% revenue share.</p>
</blockquote>

<h2 class="wp-block-heading">OpenAI considered adding AI safety experts Dan Hendrycks, Paul Christiano, Jacob Steinhardt, and Ajeya Cotra to the board before Altman was fired.</h2>

<p class="has-text-align-none">Altman apparently wanted board members with more &#8220;commercial&#8221; experience. From Toner’s deposition, in reference to internal discussions about expanding the board before it fired Altman in late 2023:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none"><em>Q: Was it your impression that Mr. Altman was dragging his feet in these discussions?</em></p>



<p class="has-text-align-none">A: Yes. I think that’s a fair description.</p>



<p class="has-text-align-none"><em>Q: Did Mr. Altman’s actions result in the board being deadlocked over any proposal to add an additional AI safety board member?</em></p>



<p class="has-text-align-none">A: I’d say he contributed to us significantly being deadlocked, yes.</p>



<p class="has-text-align-none"><em>Q: Did Mr. Altman propose different candidates to the board?</em></p>



<p class="has-text-align-none">A: Yes.</p>



<p class="has-text-align-none"><em>Q: Were Mr. Altman’s alternative candidates also AI safety experts or did they have different backgrounds?</em></p>



<p class="has-text-align-none">A: To the best of my recollection, he generally proposed candidates with more of a commercial startup background.</p>
</blockquote>

<h2 class="wp-block-heading">Altman and Brockman proposed kicking Adam D’Angelo off the board before Altman was fired.</h2>

<p class="has-text-align-none">From Toner’s deposition:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">Adam runs a company called Quora, which has a product called Poe, which uses large language models, including those of OpenAI and some of its competitors.</p>



<p class="has-text-align-none">The way I perceived it was after GPT-4 was demoed to the board in summer 2022, Adam began taking his responsibilities as a board member more seriously, because the technology seemed to be advancing, and he became a more engaged board member.</p>



<p class="has-text-align-none">In the lead-up to — between that time in summer 2022 and April 2023, we had had several conversations as a board about what kinds of conflict of interest were acceptable or unacceptable on the board, because many potential board members, and current board members, had various involvements with various AI companies.</p>



<p class="has-text-align-none">So we had fairly detailed discussions about what was an unacceptable conflict of interest and had decided that being closely involved with a company that was training its own large language models, you know, highly advanced frontier language models that would compete with OpenAI’s, was the bar for excessive conflict of interest.</p>



<p class="has-text-align-none">So it was surprising to me when Sam emailed the board in April 2023 saying that Adam’s conflict of interest had grown too large and seemed like he needed to step off the board, and did we agree. Because Adam’s company produced a product that used others’ LLMs, they didn’t — they weren’t training their own. So clearly it didn’t meet the conflict of interest criteria we had all discussed.</p>



<p class="has-text-align-none">When I said as much via email, Greg Brockman chimed in with a different reason to remove Adam, namely that his position as both a customer and a board member was creating communication difficulties internally. I forget who exactly said what on the email chain, but other board members raised questions about that or wanted to know more about that.</p>



<p class="has-text-align-none">Ultimately, I spoke to Sam on the phone, and we sort of — at my urging, we agreed that, surely, the step before just removing Adam from the board, if the problem was how he was communicating inside the company, surely, the next step would be to discuss that with him and see if we can improve the situation. Sam said he would do that, he would have a conversation with Adam, to try and improve how he was communicating inside the company. And then the situation seemed to go away.</p>



<p class="has-text-align-none">I later found out that Sam had never had that conversation with Adam, or that he had talked with him but had never actually tried to solve that problem, but, instead, had just said the only thing that he, Sam, didn’t like about Adam’s product Poe was that it used Anthropic models, because Anthropic was a competitor.</p>



<p class="has-text-align-none">So, all in all, the situation seemed to me like there wasn’t actually a clear, concrete reason to ask Adam to move off the board, but that Sam and Greg were sort of searching for an excuse because he had been providing more active governance of the company.</p>
</blockquote>

<h2 class="wp-block-heading">Altman didn’t initially tell OpenAI’s board that he was personally running a company VC fund.</h2>

<p class="has-text-align-none">From Toner’s deposition:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">Adam D’Angelo was at a dinner with some other founders, investors, startup people, who were asking him about the structure of the startup fund and potential conflicts of interest between the startup fund and OpenAI’s investors more generally.</p>



<p class="has-text-align-none">And after that conversation, Adam emailed the board, including Sam, perhaps a couple of other OpenAI executives, to understand the structure better.</p>



<p class="has-text-align-none">And in the resulting back-and-forth, we learned that Sam was the, as I understand it, the owner of the fund. So the initial conversation was around whether it was fair for OpenAI’s investors that OpenAI was sort of contributing to this other fund and was also contributing sort of engineering expertise and time to portfolio companies in the startup fund in ways that may not — where the benefit may not accrue back to OpenAI investors.</p>



<p class="has-text-align-none">After we learned that Sam had a financial stake in the fund, we also had concerns about the fact that he had not disclosed that, given that his position on the board was one of a supposedly independent board director, meaning one with no financial interest in OpenAI.</p>
</blockquote>

<h2 class="wp-block-heading">Altman proposed making a donation to then-Congressman Will Hurd while he was in talks to join the OpenAI board.</h2>

<p class="has-text-align-none">From Toner’s deposition:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">Sam also suggested that he wanted to make a large, I believe, several-hundred-thousand-dollar campaign contribution to Will, while still expecting him to come back onto the board.</p>



<p class="has-text-align-none">He did not go ahead with this donation because Tasha, Adam, and I all said it seemed very inappropriate. But to me, the fact that he was considering that, the fact that he might have discussed it with Will in advance, the fact it was an option, was just a sign of total disregard for the board’s independence or ability to provide meaningful oversight of the company and the CEO.</p>



<p class="has-text-align-none"><em>Q: And that several-hundred-thousand-dollar campaign contribution, was it — did Mr. Altman discuss that that was going to come from him personally?</em></p>



<p class="has-text-align-none">A: Yes, to the best of my recollection.</p>
</blockquote>

<h2 class="wp-block-heading">There were concerns about Altman’s closeness with the current OpenAI board chairman, Bret Taylor.</h2>

<p class="has-text-align-none">From McCauley’s deposition:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">I had more context on <strong>Bret Taylor</strong> than I did on <strong>Larry [Summers]</strong>, and I had concerns about his ability to be — yeah, to make disinterested decisions in a way that was, wasn’t partial to Sam. I mean, you know, we had — he had been proposed by Sam for the board previously when we were there and when we were going through the process of expanding the board. And by the best of my recollection, you know, Sam had — had made recommendations on a number of different people. He was favorable to Bret Taylor.</p>



<p class="has-text-align-none">If I recall correctly, Adam had — I think I recall correctly that Adam had interviewed Bret in the process of considering other candidates, and that one of the — prior to all of this — sorry — like, in the process that we were running over this — you know, in the months prior, when we were trying to expand the board; and at that time, that — one of the takeaways from that conversation was that — I think — I’m going to try to recall this exactly as possible, but it was I think Bret may have expressed concern that — concern around the — the conflicts. I think that he had said he had known Sam for a very long time and had a lot of connections to Sam and whatnot.</p>
</blockquote>

<h2 class="wp-block-heading">There were at least six key issues that led the board to fire Altman.</h2>

<p class="has-text-align-none">From McCauley’s deposition:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none"><em>Q: Was one of those incidents Mr. Altman’s foot-dragging over adding an AI safety expert to the board?</em></p>



<p class="has-text-align-none">A: That — that was — you know, I think the fact that that process was unable to result in adding independent members and an AI safety member to the board, it exacerbated our concerns, yes.</p>



<p class="has-text-align-none"><em>Q: And was another one of those incidents that — Mr. Altman’s representation that the three enhancements to GPT-4 had all been approved by the safety board?</em></p>



<p class="has-text-align-none">A: Yes, that was a factor.</p>



<p class="has-text-align-none"><em>Q: Was another one of those incidents Mr. Altman’s failure to disclose that a GPT-4 test was released in India without joint safety board review?</em></p>



<p class="has-text-align-none">A: Yes.</p>



<p class="has-text-align-none"><em>Q: Was another incident Mr. Altman’s failure to inform the board prior to ChatGPT’s release?</em></p>



<p class="has-text-align-none">A: Yes.</p>



<p class="has-text-align-none"><em>Q: And was another incident Mr. Altman’s misrepresentation about you allegedly saying Ms. Toner should obviously leave the board?</em></p>



<p class="has-text-align-none">A: Yes.</p>



<p class="has-text-align-none"><em>Q: And was another incident Mr. Altman’s misrepresentation that the legal department told him GPT-4 Turbo did not need safety board review?</em></p>



<p class="has-text-align-none">A: Yes, that — that we saw screenshots to that effect.</p>
</blockquote>

<h2 class="wp-block-heading">Sutskever had $4 billion worth of vested equity in OpenAI as of November 2023.</h2>

<p class="has-text-align-none">A text exchange between Altman, Microsoft CEO <strong>Satya Nadella</strong>, and OpenAI COO <strong>Brad Lightcap</strong> revealed the stake. As Microsoft was in discussions to hire Altman and most of the OpenAI team, Lightcap wrote that paying employees for their equity would cost $25 billion or $29 billion, depending on whether Sutskever’s vested shares were included. While it’s impossible to know for sure without more evidence, the exchange suggests that Sutskever was OpenAI’s largest individual shareholder at the time. It’s unclear if he has sold any shares since.</p>

<h2 class="wp-block-heading">Sutskever thought “OpenAI would be destroyed” if Altman wasn’t rehired.</h2>

<p class="has-text-align-none">From his deposition:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none"><em>Q: And why did you withdraw your support for Sam Altman being fired?</em></p>



<p class="has-text-align-none">A: Because I thought that OpenAI would be destroyed.</p>
</blockquote>

<h2 class="wp-block-heading">Altman told Musk, “It really fucking hurts when you publicly attack OpenAI.”</h2>

<p class="has-text-align-none">From a February 2023 email exchange:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none"><strong>Altman:</strong> i remember seeing you in a tv interview a long time ago (maybe 60 minutes?) where you being attacked by some guys, and you said they were heroes of yours and it was really tough.</p>



<p class="has-text-align-none">well, you’re my hero and that’s what it feels like when you attack openai. totally get we have screwed some stuff up, but we have worked incredibly hard to do the right thing, and i think we have ensured that neither google nor anyone else is on a path to have unilateral control over AGI, which i believe we both think is critical.</p>



<p class="has-text-align-none">i am tremendously thankful for everything you’ve done to help —i dont think openai would have happened without you-and it really fucking hurts when you publicly attack openai</p>



<p class="has-text-align-none"><strong>Musk: </strong>I hear you and it is certainly not my intention to be hurtful, for which I apologize, but the fate of civilization is at stake.</p>



<p class="has-text-align-none"><strong>Altman: </strong>i agree with that, and i would really love to hear the things you think we should be doing differently/better.</p>



<p class="has-text-align-none">it’s also not clear to me how the attacks on twitter help the fate of civilization, but that’s less important to me that getting to the right substance.</p>



<p class="has-text-align-none">also, i checked with our team on recruiting from tesla. we really are doing very little relative to the size of the company, but i will make sure we don’t hurt tesla, i obviously think it’s a super important company.</p>
</blockquote>

<h2 class="wp-block-heading">“OpenAI has not yet done business with Helion but intends to if the technology works.”</h2>

<p class="has-text-align-none">Altman is personally the largest investor in Helion, which is building fusion power technology. From his deposition:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none"><em>Q: While you were at Y Combinator, did you personally invest in any of the companies that Y Combinator sponsored?</em></p>



<p class="has-text-align-none">A: I did.</p>



<p class="has-text-align-none"><em>Q: Which ones?</em></p>



<p class="has-text-align-none">A: I couldn’t give you a list off the top of my head.</p>



<p class="has-text-align-none"><em>Q: Have any of those companies done business with OpenAI?</em></p>



<p class="has-text-align-none">A: Yes.</p>



<p class="has-text-align-none"><em>Q: Which ones?</em></p>



<p class="has-text-align-none">A: Our conflicts committee keeps track of all this and could tell you. I couldn’t do a list off the top of my head that would be exhaustive. Reddit is one. OpenAI has not yet done business with Helion but intends to if the technology works.</p>
</blockquote>

<h2 class="wp-block-heading">Altman thinks “things need to go right” for OpenAI to be worth $500 billion.</h2>

<p class="has-text-align-none">From his deposition:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none"><em>Q: And do you personally agree that the company is worth at least 500 billion currently?</em></p>



<p class="has-text-align-none">A: That was the willing buyer-willing seller market price, so I won’t argue with it.</p>



<p class="has-text-align-none"><em>Q: Apart from your faith in the willing buyers and willing sellers, do you agree, being the one who runs the company, that the company is worth at least $500 billion today?</em></p>



<p class="has-text-align-none">A: If I were an outside market investor, I would — I think I would absolutely love to buy OpenAI shares at a 300-billion-dollar valuation, somewhat higher. At 500, I would start to say, “Could be, like, you know, things need to go right, but could be.”</p>
</blockquote>

<hr class="wp-block-separator has-alpha-channel-opacity" />

<h2 class="wp-block-heading">Other standout quotes:</h2>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">“No, I was not surprised, because I was used to the board not being very informed about things.”</p>
</blockquote>

<p class="has-text-align-none">&#8211; Toner responding to a question during her deposition about whether she was surprised by the original release of ChatGPT.</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">“I think there’s a real possibility that five or 10 years from now, people look back and think of the main role OpenAI played during the late 2010s/early 2020s as being the org that set off great excitement about and investment in AGI (and then lost its lead to other orgs).”</p>
</blockquote>

<p class="has-text-align-none">&#8211; Toner in a message relayed by Brockman to other OpenAI leaders.</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">“Because of this pattern of lying… as was being reported to me, people in the company were copying that behavior, and there was kind of a culture of lying and a culture of, you know, yeah, deceit. And I think for us, as a board… this was just extremely concerning.”</p>
</blockquote>

<p class="has-text-align-none">&#8211; McCauley in her deposition.</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">“I thought it would be one of the coolest things that humanity could ever build. I was a sci-fi nerd. I read a lot of books. I watched a lot of sci-fi TV and movies. And, you know, I thought it would be one of the most helpful things to help humanity prosper.”</p>
</blockquote>

<p class="has-text-align-none">&#8211; Altman, during his deposition, on why he wanted to join OpenAI.</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">“I mean, he played a lot of video games.”</p>
</blockquote>

<p class="has-text-align-none">&#8211; Altman’s response to questioning during his deposition about Musk’s involvement in the early days of OpenAI.</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">“I estimate that I spend and have spent all the way through about a quarter of my time recruiting for OpenAI.”</p>
</blockquote>

<p class="has-text-align-none">&#8211; Altman during his deposition.</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">“I think it’s hard to find people as successful as Elon Musk.”</p>
</blockquote>

<p class="has-text-align-none">&#8211; Sutskever during his deposition.</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">“It doesn’t matter who wins if everyone dies.”</p>
</blockquote>

<p class="has-text-align-none">&#8211; Brockman in an early exchange with OpenAI colleagues.</p>

<iframe src="https://sources.news/embed" width="480" height="320" frameborder="0"></iframe>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Alex Heath</name>
			</author>
			
			<title type="html"><![CDATA[Lenovo is building an AI assistant that ‘can act on your behalf’]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/column/857053/lenovo-ai-assistant-qira" />
			<id>https://www.theverge.com/?p=857053</id>
			<updated>2026-01-16T08:02:10-05:00</updated>
			<published>2026-01-06T21:30:00-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Column" /><category scheme="https://www.theverge.com" term="Lenovo" /><category scheme="https://www.theverge.com" term="Sources" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[This is an excerpt of Sources by Alex Heath, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week. While most attention in the AI race is focused on model builders and cloud platforms, Lenovo sits closer to millions of users than most companies. As the world’s top PC [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/01/374.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none"><em>This is an excerpt of <a href="https://sources.news/" target="_blank" rel="noreferrer noopener">Sources by Alex Heath</a>, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week.</em></p>

<p class="has-text-align-none">While most attention in the AI race is focused on model builders and cloud platforms, Lenovo sits closer to millions of users than most companies. As the world’s <a href="https://www.gartner.com/en/newsroom/press-releases/2025-10-16-gartner-says-worldwide-pc-shipments-grew-8-percent-in-third-quarter-of-2025">top PC maker by volume</a>, Lenovo ships tens of millions of devices every year. What it decides to ship, bundle, and integrate can directly shape how AI shows up in millions of people’s everyday lives.</p>

<p class="has-text-align-none">That’s what made Lenovo’s CES announcement notable. At a flashy event on Tuesday at The Sphere in Las Vegas, it introduced Qira, a system-level, cross-device AI assistant designed to live across Lenovo laptops and Motorola phones. It’s Lenovo’s most ambitious AI effort to date and a rare look at how a hardware giant with global reach is thinking about integrating AI more deeply.</p>

<p class="has-text-align-none">Jeff Snow, Lenovo’s head of AI product, told me how Qira came together, why the company is deliberately avoiding a single exclusive AI partnership, and what he learned from Motorola’s earlier <a href="https://www.theverge.com/2024/11/27/24307171/hello-moto-ai">Moto AI</a> experiment and <a href="https://www.theverge.com/2024/10/29/24282821/microsoft-windows-recall-feature-optional-uninstall">Microsoft’s Recall debacle</a>.</p>

<p class="has-text-align-none">Qira emerged from a quiet but meaningful internal reorganization less than a year ago, according to Snow. Lenovo pulled AI teams out of individual hardware units such as PCs, tablets, and phones and centralized them into a new software-focused group that works across the entire company.</p>

<p class="has-text-align-none">For a company long optimized around hardware SKUs and supply chains, the move signaled a shift toward putting AI more front and center. “We wanted a built-in cross-device intelligence that works with you throughout the day, learns from your interactions, and can act on your behalf,” Snow said. He mentioned using Qira’s on-device model during his flight to CES to help him workshop how to talk about the news in meetings based on the notes and documents on his PC.</p>

<figure class="wp-block-pullquote"><blockquote><p>“We wanted a built-in cross-device intelligence that … learns from your interactions and can act on your behalf.”</p></blockquote></figure>

<p class="has-text-align-none">Qira is not built around a single flagship AI model. Instead, it’s modular. Under the hood, it mixes local, on-device models with cloud-based models, anchored by Microsoft and OpenAI infrastructure accessed through Azure. Stability AI’s diffusion model is also integrated, along with tie-ins to app-specific partners like Notion and Perplexity.</p>

<p class="has-text-align-none">“We didn’t want to hard-code ourselves to one model,” Snow said. “This space is moving too fast. Different tasks need different tradeoffs around performance, quality, and cost.”</p>

<p class="has-text-align-none">That stance runs counter to the push from major AI labs, many of which would happily become the exclusive intelligence layer for a company with Lenovo’s reach. Lenovo’s view is that optionality matters more, especially given its control over one of the largest consumer computing distribution channels in the world.</p>
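<p class="has-text-align-none">The routing logic Snow describes can be sketched roughly as follows. This is a minimal, hypothetical illustration of task-based model routing, not Lenovo’s actual implementation; the backend names, task labels, and cost figures are all invented for the example.</p>

```python
# Hypothetical sketch of task-based model routing: different tasks get
# different backends based on privacy, quality, and cost tradeoffs, with
# an on-device fallback when offline (e.g., on a flight). All names here
# are illustrative, not Lenovo's actual implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Backend:
    name: str
    on_device: bool       # runs locally, no network needed
    cost_per_call: float  # illustrative relative cost

# Illustrative registry: one small local model plus cloud options.
BACKENDS = {
    "local-small": Backend("local-small", on_device=True, cost_per_call=0.0),
    "cloud-general": Backend("cloud-general", on_device=False, cost_per_call=1.0),
    "cloud-image": Backend("cloud-image", on_device=False, cost_per_call=2.0),
}

def route(task: str, offline: bool = False) -> Backend:
    """Pick a backend for a task, degrading to on-device when offline."""
    preferred = {
        "summarize-notes": "local-small",   # private and latency-sensitive
        "draft-email": "cloud-general",     # output quality matters more
        "generate-image": "cloud-image",    # needs a diffusion model
    }.get(task, "cloud-general")
    backend = BACKENDS[preferred]
    if offline and not backend.on_device:
        return BACKENDS["local-small"]      # graceful fallback in-flight
    return backend

print(route("summarize-notes").name)            # local-small
print(route("generate-image").name)             # cloud-image
print(route("draft-email", offline=True).name)  # local-small
```

<p class="has-text-align-none">The point of the indirection is the one Snow names: swapping the table entries changes the performance, quality, and cost tradeoffs without rewriting anything else.</p>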

<p class="has-text-align-none">Snow previously worked on Moto AI, Motorola’s assistant, which he said saw high initial engagement. More than half of Motorola users tried it, but retention wasn’t good. He said that too much of the experience felt like prompt-based chat features people could already get elsewhere.</p>

<p class="has-text-align-none">“That pushed us away from competing with chatbots,” Snow said. “Qira is about things chatbots can’t do, like continuity, context, and acting directly on your device.”</p>

<figure class="wp-block-pullquote"><blockquote><p>Cost pressures loom over all of this. </p></blockquote></figure>

<p class="has-text-align-none">Lenovo also paid close attention to the backlash around Microsoft’s Recall feature. Snow said Qira is designed from the outset with opt-in memory, persistent indicators, and clear user controls. Context ingestion is optional. Recording is visible. Nothing is silently collected.</p>

<p class="has-text-align-none">Cost pressures loom over all of this. Memory prices are rising as AI demand strains supply chains, and analysts expect PC prices to follow. Qira does not raise baseline system requirements for PCs, Snow said, but it performs best on higher-end machines with more RAM. Lenovo is working to bring local models down to smaller memory footprints, like 16 gigabytes of RAM, without watering down the experience.</p>

<p class="has-text-align-none">Strategically, Lenovo sees Qira as both a retention play and a hedge against hardware commoditization. In the short term, it hopes that tighter integration between laptops and phones will nudge customers to stay within the Lenovo ecosystem. Over the longer term, Snow framed Qira as a way to differentiate Lenovo devices when specs alone are no longer enough.</p>

<iframe src="https://sources.news/embed" width="480" height="320" frameborder="0"></iframe>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Alex Heath</name>
			</author>
			
			<title type="html"><![CDATA[Threads wants to be the app you can’t wait to open in the morning]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/column/847806/head-of-threads-interview" />
			<id>https://www.theverge.com/?p=847806</id>
			<updated>2025-12-19T07:50:45-05:00</updated>
			<published>2025-12-18T17:30:00-05:00</published>
			<category scheme="https://www.theverge.com" term="Column" /><category scheme="https://www.theverge.com" term="Social Media" /><category scheme="https://www.theverge.com" term="Sources" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[This is an excerpt of&#160;Sources by Alex Heath, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week. By all measures, Meta’s Threads app had a very good year. The app was Apple’s second-most-downloaded iOS app of the year, trailing only ChatGPT. Threads now has 400 million monthly [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/10/STK156_Instagram_threads_1.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none"><em>This is an excerpt of&nbsp;<a href="https://sources.news/" target="_blank" rel="noreferrer noopener">Sources by Alex Heath</a>, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week.</em></p>

<p class="has-text-align-none">By all measures, Meta’s Threads app had a very good year. The app was Apple’s second-most-downloaded iOS app of the year, trailing only ChatGPT. Threads now has 400 million monthly and 150 million daily active users.</p>

<figure class="wp-block-pullquote"><blockquote><p>“There are consumers who are ravenous to consume the content.”</p></blockquote></figure>

<p class="has-text-align-none">That growth is still coming mainly from Meta’s other platforms. “We do a lot of work in Instagram and Facebook to show off what’s going on in Threads,” Connor Hayes, the head of Threads, told me this week. The playbook: surface personalized Threads content in your Instagram and Facebook feeds, get you to download the app, then wean you off needing those nudges to check it consistently. “We do a bunch of work to get people off of being dependent on those promotions and wake up in the morning and just want to open the app,” Hayes explained.</p>

<p class="has-text-align-none">Hayes, who helped launch Threads initially and was named its head in September, has been focused on clarifying the platform’s identity. In our conversation, he said the goal for Threads is to be “the place on the internet to talk about what’s going on in the world.” Practically, that means going vertical by vertical — sports, entertainment, news — and tipping both creators and consumers toward using the app more.</p>

<p class="has-text-align-none">When it comes to competitors, Hayes is focused on more than just X. “Reddit has a ton of activity that is analogous to what happened on Twitter in the early days,” he said. “Discord has a bunch of these big group chat-style communities.” He acknowledged Twitter, now X, as “the app that pioneered the core format,” but made clear that the battle for real-time conversation is crowded. </p>

<h2 class="wp-block-heading has-text-align-none">A traffic channel for creators</h2>

<p class="has-text-align-none">There’s no direct monetization for creators on Threads right now. Hayes is pitching something different: Threads as a traffic channel to other platforms where creators actually get paid.</p>

<p class="has-text-align-none">The clearest example is podcasts. Threads recently launched a feature that renders show and episode links from platforms like Spotify and lets users pin them to their profiles. Hayes said Threads is open to other partnerships with platforms like Substack and Patreon as well. But there’s no plan to let creators paywall content directly on Threads or to share ad revenue like YouTube. </p>

<h2 class="wp-block-heading has-text-align-none">Ads are coming, but slowly</h2>

<p class="has-text-align-none">Meanwhile, Threads is testing ads in four countries, including the US, but the load is deliberately light, Hayes told me. “We are ramping the ad load up steadily over the course of the next year,” he said, “but only doing it when we feel like there’s enough value on the consumer side of the app to justify doing that.” </p>

<h2 class="wp-block-heading has-text-align-none">Controlling the algorithm</h2>

<p class="has-text-align-none">Threads is testing a new feature called “Dear Algo” in a handful of countries. Users can ask to see more or less of a topic, share their algorithm prompt for others to use or remix, and have their personalized feed adjust to the prompt for three days. “After a heartbreaking loss of your sports team, you can be like, don’t show me NFL content for three days,” Hayes said. “But you’ll be ready on day four to come back in.”</p>

<p class="has-text-align-none">The broader point: content understanding has gotten better thanks to LLMs. “We now don’t just know that a thing is about basketball. We know that it’s the 1998 NBA Finals, and it’s this player taking a shot for this team.” That precision is what makes this kind of algorithm steering possible. Hayes has been surprised by how specific early user requests have been, with prompts like, “show me more football content, but not Patrick Mahomes.”</p>
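<p class="has-text-align-none">Mechanically, a “Dear Algo” request amounts to a time-limited filter layered on top of normal ranking. Here’s a minimal sketch under that assumption; the topic labels and data shapes are invented for illustration, with only the three-day window taken from Hayes’ description.</p>

```python
# Hypothetical sketch of "Dear Algo"-style feed steering: a user request
# ("less NFL for three days") becomes a temporary topic filter applied
# on top of the ranked feed. Everything except the three-day window is
# illustrative, not Threads' actual implementation.

from datetime import datetime, timedelta

def make_mute_rule(topic: str, now: datetime, days: int = 3):
    """Return (topic, expiry) for a temporary 'show me less of X' request."""
    return (topic.lower(), now + timedelta(days=days))

def apply_rules(feed, rules, now: datetime):
    """Drop items whose topic is muted by a still-active rule."""
    active = {topic for topic, expiry in rules if now < expiry}
    return [item for item in feed if item["topic"].lower() not in active]

feed = [
    {"id": 1, "topic": "NFL"},
    {"id": 2, "topic": "Movies"},
    {"id": 3, "topic": "NFL"},
]
rules = [make_mute_rule("NFL", now=datetime(2025, 12, 18))]

# Day one after the heartbreaking loss: NFL items are hidden.
print([i["id"] for i in apply_rules(feed, rules, datetime(2025, 12, 19))])  # [2]
# Day four: the rule has expired and the feed is back to normal.
print([i["id"] for i in apply_rules(feed, rules, datetime(2025, 12, 22))])  # [1, 2, 3]
```

<p class="has-text-align-none">Requests like “more football, but not Patrick Mahomes” would need finer-grained labels than a single topic string, which is where the LLM-based content understanding Hayes describes comes in.</p>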

<h2 class="wp-block-heading has-text-align-none">The fediverse is on maintenance mode</h2>

<p class="has-text-align-none">Threads still supports federation with other apps like Mastodon, but Hayes was clear that it’s not a top priority for the current roadmap. “It’s something that we’re supporting, it’s something that we’re maintaining, but it’s not the thing that we’re talking about that’s gonna help the app break out,” he said.</p>

<p class="has-text-align-none">“As someone who has built a zillion consumer products, it&#8217;s just really hard to keep these divergent platforms and products consistent on the same protocol over time,” he explained. “There’s always going to be the tradeoffs that these companies are thinking about of how much energy do I want to pour into compatibility with this ecosystem versus iterating on this thing I’m building and seeing what&#8217;s valuable.”</p>

<h2 class="wp-block-heading has-text-align-none">Prioritizing timeliness but not news</h2>

<p class="has-text-align-none">Threads used to be mocked for how it would surface old content. Now, the app prioritizes recommending content from the last 24 hours, according to Hayes. “If something is four or five days old, even if it&#8217;s really good, we probably won&#8217;t show that.” </p>

<p class="has-text-align-none"><a href="https://sources.news/p/x-wants-its-haters-back">Unlike X</a>, Hayes said Threads isn’t making a push to get more journalists and publishers on the app. “We just look at it like any other vertical, which is that there are certain creators who are really good at this and know a lot about it. There are consumers who are ravenous to consume the content.” He said Threads isn’t downranking news, but it’s “not one of the focus verticals right now.”</p>

<iframe loading="lazy" src="https://sources.news/embed" width="480" height="320" frameborder="0"></iframe>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Alex Heath</name>
			</author>
			
			<title type="html"><![CDATA[The AI industry’s biggest week: Google’s rise, RL mania, and a party boat]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/column/841207/ai-neurips-2025" />
			<id>https://www.theverge.com/?p=841207</id>
			<updated>2025-12-11T12:44:29-05:00</updated>
			<published>2025-12-09T21:00:00-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Column" /><category scheme="https://www.theverge.com" term="Sources" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[This is an excerpt of Sources by Alex Heath, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week. Reinforcement learning (RL) is the next frontier, Google is surging, and the party scene has gotten completely out of hand. Those were the through lines from this year&#8217;s NeurIPS in [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/03/STK093_GOOGLE_E.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none"><em>This is an excerpt of <a href="https://sources.news/" target="_blank" rel="noreferrer noopener">Sources by Alex Heath</a>, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week.</em></p>

<p class="has-text-align-none">Reinforcement learning (RL) is the next frontier, Google is surging, and the party scene has gotten completely out of hand. Those were the through lines from this year&#8217;s NeurIPS in San Diego.</p>

<p class="has-text-align-none">NeurIPS, or the “Conference on Neural Information Processing Systems,” started in 1987 as a purely academic affair. It has since ballooned alongside the hype around AI into a massive industry event where labs come to recruit and investors come to find the next wave of AI startups.</p>

<p class="has-text-align-none">Regrettably, I was unable to attend NeurIPS this year, but I still wanted to know what people were talking about on the ground in San Diego over the past week. So I asked engineers, researchers, and founders for their takeaways. The respondents include Andy Konwinski, cofounder of Databricks and founder of the Laude Institute; Thomas Wolf, cofounder of Hugging Face; OpenAI’s Roon; and attendees from Meta, Waymo, Google DeepMind, Amazon, and a handful of other places.</p>

<p class="has-text-align-none">I asked everyone the same three questions: What&#8217;s the buzziest topic from the conference? Which labs feel like they&#8217;re surging or struggling? Who had the best party?&nbsp;</p>

<p class="has-text-align-none">The consensus was clear. &#8220;RL RL RL RL is taking over the world,&#8221; Anastasios Angelopoulos, CEO of LMArena, told me. The industry is coalescing around the idea that tuning models for specific use cases, rather than scaling the data used for pre-training, will drive the next wave of AI progress. What&#8217;s clear from the lab momentum question is that Google is having a moment. &#8220;Google DeepMind is feeling good,&#8221; Hugging Face’s Wolf told me.</p>

<p class="has-text-align-none">The party circuit was naturally relentless. Konwinski&#8217;s Laude Lounge emerged as one of the week&#8217;s hotspots — Jeff Dean, Yoshua Bengio, Ion Stoica, and about a dozen other top researchers came through. Model Ship, an invite-only cruise with 200 researchers, featured &#8220;a commitment to the dance floor that is unprecedented at a conference event,&#8221; one of the organizers of the cruise, Nathan Lambert, told me. Roon was dry about the whole scene: &#8220;you can learn more from twitter than from literally being there &#8230; mostly my on-the-ground feeling was &#8216;this is too much.'&#8221;</p>

<p class="has-text-align-none">Here’s what attendees had to say about NeurIPS this year:</p>

<p class="has-text-align-none"><strong>What was the buzziest topic among attendees that you think more people will be talking about in 2026?</strong></p>

<ul class="wp-block-list">
<li><strong>Andy Konwinski, founder of the Laude Institute:</strong> “I did a lot of interviews over the week, and when I asked people what felt overhyped to them, I heard agentic AI, RL, and world models, though I also heard RL and world models as areas people think are up-and-coming and most interesting to watch.”</li>



<li><strong>Thomas Wolf, cofounder of Hugging Face:</strong> “AI x science, interpretability, RL long rollouts”</li>



<li><strong>Roon, member of technical staff, OpenAI:</strong> “you can learn more from twitter than from literally being there / the tweets are saying the buzz is about continual learning / That’s possibly true / I can’t guarantee / mostly my on-the-ground feeling was ‘this is too much’”</li>



<li><strong>Maya Bechler-Speicher, research scientist at Meta:</strong> “I can’t say with certainty what the buzziest topic was — the conference is massive, and my exposure was naturally limited — but tabular foundation models were undoubtedly gaining significant traction, and I expect this momentum to continue into 2026. After years in which decision-tree–based methods dominated generalization on tabular data, we are finally seeing foundation-model approaches that consistently outperform them. Another area drawing considerable attention is physical AI, which remains full of open research questions and opportunities.”</li>



<li><strong>Anonymous researcher at a big AI lab:</strong> “I’m biased here, but AI for the physical world (robotics, engineering, etc, not just AI for science) looks like it&#8217;s finally taking off.”</li>



<li><strong>Nathan Lambert, senior researcher at the Allen Institute for AI:</strong> “It was accepted that [Ilya Sutskever]&#8217;s proclamation on the Dwarkesh Podcast that it&#8217;s now &#8216;The Age of Research’ rather than the age of scaling is a good moniker. No one area of the poster sessions or workshops was obviously labeled as the most important topic (e.g., last year&#8217;s NeurIPS was obsessed with reinforcement learning and reasoning after the launch of o1). Some groups reflected solemnly on how this was the first NeurIPS since DeepSeek R1 and a year of open model transformation, but most of the conference didn&#8217;t feel like it had an active role to play in it.”</li>



<li><strong>Brian Wilt, head of data at Waymo: </strong>“The buzziest topic among my friends was how much research was happening in frontier labs vs. academia and was likely unpublished. From my perspective at Waymo, many of the (applied) problems I need to solve only emerge at scale (e.g., data, performance). However, there&#8217;s also a deep sense that we need another fundamental breakthrough besides scaling current architectures (as Ilya/[Andrej] Karpathy/others have alluded to)”</li>



<li><strong>Evgenii Nikishin, member of technical staff at OpenAI:</strong> “Continual learning was certainly among the buzziest topics. I don&#8217;t know yet how many scientific advances there will be in 2026 — maybe some, maybe little — but I think more people will be talking about it.” </li>



<li><strong>Paige Bailey, developer lead for Google DeepMind:</strong> “Definitely sovereign open models, especially deploying them on-prem with fine-tuning + RL. In terms of what people will be talking about in 2026, I think world Models and robotics are the big ones.”</li>



<li><strong>Sachin Dharashivkar, CEO of AthenaAgent:</strong> “Designing RL environments and training agents was the most discussed topic.”</li>



<li><strong>Ronak Malde, ex-DeepMind engineer and new founder of a stealth RL startup:</strong> “Continual learning. To support this next frontier, we’re going to need new architectures, new reward functions, new data sources, and new data scalability models.”</li>



<li><strong>Deniz Birlikci, researcher at Amazon:</strong> “Agents are not a model — they are a stack. Therefore, RL for agents should train with the same tools/stacks that will be used in production. More teams are thinking [about] how to create a dense taxonomy and labeling for their data, especially in RL, and I find this very important.” </li>



<li><strong>Richard Suwandi, student ambassador for The Chinese University of Hong Kong: </strong>“There were lots of discussions around whether we can build AI systems that are truly creative (not just optimizing within known boundaries, but capable of generating genuinely novel ideas and discoveries on their own). I expect this to become a major research frontier in 2026.”</li>



<li><strong>Anastasios Angelopoulos, CEO of LMArena:</strong> “RL RL RL RL is taking over the world”</li>
</ul>

<p class="has-text-align-none"><strong>Which labs feel like they’re surging in momentum, and which ones feel more shaky?</strong></p>

<ul class="wp-block-list">
<li><strong>Nathan Lambert (Allen Institute for AI): </strong>“The discussion of which labs are leading and falling behind felt fully like an export out of SF gossip in the last few weeks. Gemini and Anthropic are ascendant at the cost of OpenAI. At least OpenAI was mentioned, where I don&#8217;t think I heard anyone debating the capabilities of xAI once.”</li>



<li><strong>Evgenii Nikishin (OpenAI): </strong>“The Big 3 frontier Labs (GDM, Anthro, OAI) are having a good overall momentum, though each has their unique stronger and weaker sides. As for places that are not doing too great, think about quite a few LLM / imagen startups from 2022-2024 who were offering similar pitches and didn&#8217;t have unique value prop. I feel that many of them either already or are in the process of quietly dying.”</li>



<li><strong>Andy Konwinski (Laude Institute):</strong> “Surging momentum: Alibaba/Qwen, Moonshot/Kimi, Arcee, Reflection AI, Human&amp;, Prime Intellect all made announcements very recently that were buzzing / Google w/ gemini 3, nano banana, TPUv7”</li>



<li><strong>Anonymous researcher: </strong>“Reflection had a massive booth given that they’re a very young startup &#8211; that’s definitely new.”</li>



<li><strong>Brian Wilt (Waymo): </strong>“I was proud that Alphabet/Google had the most accepted papers this year.”</li>



<li><strong>Paige Bailey (Google DeepMind): </strong>“Periodic Labs and Reflection AI feel like they are surging; they both have really interesting mission statements. I also loved seeing Anna and Azalea launch a company (Ricursive Intelligence).”</li>



<li><strong>Ronak Malde (stealth RL startup):</strong> “Several neolabs are going to launch in 2026 that shake up research as we know it. DeepMind is still crushing it. Kimi Moonshot and Deepseek are too.”</li>



<li><strong>Richard Suwandi (The Chinese University of Hong Kong): </strong>“One lab that clearly feels like it’s surging is Google DeepMind. At NeurIPS, you could really feel them pushing a new research agenda, with things like Nested Learning and Titans/MIRAS pointing toward more continual, long‑term memory rather than just bigger transformers, which was a refreshing shift in the hallway conversations.”</li>



<li><strong>Thomas Wolf (Hugging Face): </strong>“Google DeepMind is feeling good.”</li>
</ul>

<p class="has-text-align-none"><strong>What was the best party you attended or had FOMO over?</strong></p>

<ul class="wp-block-list">
<li><strong>Nathan Lambert (Allen Institute for AI/Model Ship co-organizer):</strong> “The paradigmatic example of a NeurIPS party for the current area of AI was <a href="https://modelship2025.ai/">Model Ship</a>, an invite-only cruise with 200 top researchers, investors, and personalities in the AI space. It had bespoke merch, free conversation, and a commitment to the dance floor that is unprecedented at a conference event.”</li>



<li><strong>Andy Konwinski (Laude Institute)</strong>: “I was a bit bummed that I couldn’t make it out to events organized by Robert Nishihara, Naveen Rao, and Nathan Lambert. I also was sad to miss Rich Sutton and Yejin Choi’s keynotes (though I ended up interviewing Yejin so we got to jam on the topics she spoke about).”</li>



<li><strong>Roon (OpenAI):</strong> “openai ones, a16z ones / I liked the a16z one because I got to meet lex [Fridman] that was cool / but even the parties I mostly tried to avoid kept getting partifuls that were like 750 people in a house or whatever / what a nightmare”</li>



<li><strong>Maya Bechler-Speicher (Meta):</strong> “The Meta party was one of the most impressive company events I’ve attended. Additionally, G-Research invited a very small group of researchers to a three-star Michelin restaurant, which was not a party per se but was absolutely exceptional.”</li>



<li><strong>Brian Wilt (Waymo):</strong> “My favorite event was a small gathering at comma.ai (HQ&#8217;d in San Diego), who develop an open-source driver assistant. I use it on my personal car, it&#8217;s perfect for when I&#8217;m not riding in Waymo in Phoenix. @yassineyousfi_ put together an online capture-the-flag to get in. @realGeorgeHotz took us on a tour of their data center and manufacturing. I did die a little when I typed their wifi password, ‘lidarisdoomed’”</li>



<li><strong>Evgenii Nikishin (OpenAI):</strong> “The OpenAI party 😎”</li>



<li><strong>Paige Bailey (Google DeepMind):</strong> “I actually had to head back late Friday/early Saturday, so I missed out on the end-of-conference workshops. I had major FOMO over the ML for Systems workshop, though, as well as the ‘Claude and Gemini Play Pokemon’ workshop &#8212; they both looked awesome!”</li>



<li><strong>Ronak Malde (stealth RL startup):</strong> “Radical VC bringing Jeff Dean and Geoffrey Hinton into one room was the highlight of the week.”</li>



<li><strong>Anastasios Angelopoulos (LMArena):</strong> “Laude Lounge”</li>



<li><strong>Thomas Wolf (Hugging Face):</strong> “The Hugging Face party where 2.5k+ people registered / I really enjoyed the Prime-intellect one”</li>



<li><strong>Dylan Patel, founder of SemiAnalysis:</strong> “Mine haha”</li>
</ul>

<p class="has-text-align-none">Yes, some people thought keynotes were parties. I guess academia lives on at NeurIPS after all.</p>

<iframe loading="lazy" src="https://sources.news/embed" width="480" height="320" frameborder="0"></iframe>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Alex Heath</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic&#8217;s AI bubble &#8216;YOLO&#8217; warning]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/column/837779/anthropic-ai-bubble-warning" />
			<id>https://www.theverge.com/?p=837779</id>
			<updated>2025-12-03T16:33:29-05:00</updated>
			<published>2025-12-03T16:45:00-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Anthropic" /><category scheme="https://www.theverge.com" term="Column" /><category scheme="https://www.theverge.com" term="Sources" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[This is an excerpt of Sources by Alex Heath, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week. Dario Amodei took the stage at the DealBook Summit on Wednesday to throw punches without naming names. The Anthropic CEO spent a good chunk of the interview with Andrew Ross [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="Andrew Ross Sorkin and Dario Amodei speak onstage during The New York Times DealBook Summit 2025 at Jazz at Lincoln Center on December 03, 2025 in New York City. | Image: Getty" data-portal-copyright="Image: Getty" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/12/gettyimages-2249804453.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Andrew Ross Sorkin and Dario Amodei speak onstage during The New York Times DealBook Summit 2025 at Jazz at Lincoln Center on December 03, 2025 in New York City. | Image: Getty	</figcaption>
</figure>
<p class="has-text-align-none"><em>This is an excerpt of <a href="https://sources.news/" target="_blank" rel="noreferrer noopener">Sources by Alex Heath</a>, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week.</em></p>

<p class="has-text-align-none">Dario Amodei took the stage at the DealBook Summit on Wednesday to throw punches without naming names.</p>

<p class="has-text-align-none">The Anthropic CEO spent a good chunk of the interview with Andrew Ross Sorkin drawing a careful line between his company’s approach and that of a certain competitor. When asked about whether the AI industry is in a bubble, Amodei separated the “technological side” from the “economic side” and then twisted the knife.</p>

<p class="has-text-align-none">“On the technological side, I feel really solid,” he said. “On the economic side, I have my concerns where, even if the technology fulfills all its promises, I think there are players in the ecosystem who, if they just make a timing error, they just get it off by a little bit, bad things could happen.”</p>

<p class="has-text-align-none">Who might those players be? Despite Sorkin’s prodding, Amodei wouldn’t name OpenAI or Sam Altman. But he didn’t have to.</p>

<p class="has-text-align-none">“There are some players who are YOLOing,” he said. “Let’s say you’re a person who just kind of constitutionally wants to YOLO things or just likes big numbers, then you may turn the dial too far.”</p>

<p class="has-text-align-none">He also touched on “circular deals,” where chip suppliers like Nvidia invest in AI companies that then spend those funds on their chips. Amodei acknowledged that Anthropic has done some of these deals, though “not at the same scale as some other players,” and walked through the math of how they can work responsibly: A new gigawatt data center costs roughly $10 billion to build over five years. A vendor invests upfront, and an AI startup pays back its share of the deal as revenue grows.</p>

<p class="has-text-align-none">While he again didn’t name names, he referenced the eye-popping numbers OpenAI has been trumpeting for its compute buildout. “I don’t think there’s anything wrong with that in principle,” he said. “Now, if you start stacking these where they get to huge amounts of money, and you’re saying, ‘By 2027 or 2028 I need to make $200 billion a year,’ then yeah, you can overextend yourself.”</p>

<h2 class="wp-block-heading has-text-align-none">The cone of uncertainty</h2>

<p class="has-text-align-none">The heart of Amodei’s argument was a concept he’s been using internally: the “cone of uncertainty.”</p>

<p class="has-text-align-none">He said that Anthropic’s revenue has grown tenfold annually for three years, from zero to $100 million in 2023, $100 million to $1 billion in 2024, and now somewhere between $8 billion and $10 billion by this year’s end. (Sam Altman, by comparison,<a href="https://x.com/sama/status/1986514377470845007"> has said</a> that OpenAI expects to end 2025 with an annualized revenue run rate exceeding $20 billion.) But even Amodei doesn’t know if Anthropic will hit $20 billion or $50 billion next year. “It’s very uncertain.”</p>

<p class="has-text-align-none">That uncertainty is concerning, he explained, because data centers take one to two years to build. Decisions on 2027 compute needs have to be made now. Buy too little, and you lose customers to competitors. Buy too much, and you risk bankruptcy. Amodei added, “How much buffer there is in that cone is basically determined by my margins.”</p>

<p class="has-text-align-none">“We want to buy enough that we’re confident even in the 10th percentile scenario,” he said. “There’s always a tail risk. But we’re trying to manage that risk well.” He positioned Anthropic’s enterprise focus, with higher margins and more predictable revenue, as structurally safer than that of consumer-first businesses. “We don’t have to do any code reds.”</p>

<iframe loading="lazy" src="https://sources.news/embed" width="480" height="320" frameborder="0"></iframe>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Alex Heath</name>
			</author>
			
			<title type="html"><![CDATA[Amazon’s bet that AI benchmarks don’t matter]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/column/836902/amazons-ai-benchmarks-dont-matter" />
			<id>https://www.theverge.com/?p=836902</id>
			<updated>2025-12-02T15:41:04-05:00</updated>
			<published>2025-12-02T17:00:00-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Column" /><category scheme="https://www.theverge.com" term="Sources" />
							<summary type="html"><![CDATA[This is an excerpt of Sources by Alex Heath, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week. Amazon&#8217;s AI chief has a message for the model benchmark obsessives: Stop looking at the leaderboards. &#8220;I want real-world utility. None of these benchmarks are real,&#8221; Rohit Prasad, Amazon&#8217;s SVP [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="Rohit Prasad, Amazon&#039;s SVP of AGI." data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/12/gettyimages-1244423126.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Rohit Prasad, Amazon's SVP of AGI.	</figcaption>
</figure>
<p class="has-text-align-none"><em>This is an excerpt of <a href="https://sources.news/" target="_blank" rel="noreferrer noopener">Sources by Alex Heath</a>, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week.</em></p>

<p class="has-text-align-none">Amazon&#8217;s AI chief has a message for the model benchmark obsessives: Stop looking at the leaderboards.</p>

<p class="has-text-align-none">&#8220;I want real-world utility. None of these benchmarks are real,&#8221; Rohit Prasad, Amazon&#8217;s SVP of AGI, told me ahead of today&#8217;s announcements at AWS re:Invent in Las Vegas. &#8220;The only way to do real benchmarking is if everyone conforms to the same training data and the evals are completely held out. That&#8217;s not what&#8217;s happening. The evals are frankly getting noisy, and they&#8217;re not showing the real power of these models.”</p>

<p class="has-text-align-none">It&#8217;s a contrarian stance when every other AI lab is quick to boast about how their new models quickly climb the leaderboards. It’s also convenient for Amazon, given that the previous version of Nova, its flagship model, was sitting at spot 79 on LMArena when Prasad and I spoke last week. Still, dismissing benchmarks only works if Amazon can offer a different story about what progress looks like.</p>

<figure class="wp-block-pullquote"><blockquote><p>“They&#8217;re not showing the real power of these models.”</p></blockquote></figure>

<p class="has-text-align-none">The centerpiece of today&#8217;s re:Invent announcements is Nova Forge, a service that Amazon claims lets companies train custom AI models in ways previously impossible without spending billions of dollars. The problem Forge addresses is real. Most companies trying to customize AI models face three bad options: fine-tune a closed model (but only at the edges), train on open-weight models (but without the original training data and risking capability regression, where the AI becomes an expert on new data but forgets original, broader skills), or build a model from scratch at enormous cost.</p>

<p class="has-text-align-none">Forge offers something else: access to Amazon&#8217;s Nova model checkpoints at the pre-training, mid-training, and post-training stages. Companies can inject their proprietary data early in the process, when the model&#8217;s &#8220;learning capacity is highest,&#8221; as Prasad put it, rather than just tweaking model behavior at the end.</p>

<p class="has-text-align-none">&#8220;What we have done is democratize AI and frontier model development for your use cases at fractions of what it would cost [before],&#8221; Prasad said. Forge was created because Amazon&#8217;s internal teams wanted a tool to inject their domain expertise into a base model without having to build from scratch.&nbsp;</p>

<p class="has-text-align-none">&#8220;We built Forge because our internal teams wanted Forge,&#8221; he said. It&#8217;s a familiar Amazon pattern. AWS itself famously began as infrastructure built for Amazon&#8217;s own retail operation before becoming the company&#8217;s profit engine.</p>

<p class="has-text-align-none">Reddit has been using Forge to build custom safety models trained on 23 years of community moderation data. &#8220;I haven&#8217;t seen anything like it yet,&#8221; Chris Slowe, Reddit&#8217;s CTO and first employee, told me. &#8220;We&#8217;ve had a distinguished engineer who&#8217;s just been like a kid in the candy shop.&#8221;</p>

<p class="has-text-align-none">Slowe said Reddit ran a continued pre-training job last week that&#8217;s &#8220;looking really promising.&#8221; The goal: Replace multiple bespoke safety models with a single Reddit-expert model that understands the nuances of community moderation, including the notoriously subjective rule that appears across subreddits everywhere: &#8220;Don&#8217;t be a jerk.&#8221;</p>

<p class="has-text-align-none">&#8220;Having an expert model, it&#8217;s going to understand the community,&#8221; Slowe said. &#8220;It&#8217;s gonna have a pretty good notion of what jerk means.&#8221;</p>

<figure class="wp-block-pullquote"><blockquote><p>That’s the thread Amazon wants developers to pull on: not raw IQ points, but control and specialization.</p></blockquote></figure>

<p class="has-text-align-none">He explained that Forge enables Reddit to control its models, avoid surprises from API changes, retain ownership of its weights, and avoid sending sensitive data to third-party model providers. He said Reddit is already exploring using the same approach for Reddit Answers and other products.</p>

<p class="has-text-align-none">When I asked Slowe whether it mattered that Nova isn&#8217;t a top-tier model on benchmarks, he was blunt: &#8220;In this context, what matters is the Reddit expertness of the model.&#8221; That’s the thread Amazon wants developers to pull on: not raw IQ points, but control and specialization.</p>

<p class="has-text-align-none">With Forge, Amazon is making a calculated bet that the model race has commoditized and that it can succeed by being the place where companies can build specialized AI for specific business problems. It&#8217;s a very AWS-shaped view of the world: infrastructure over intelligence and customization over raw capability. The strategy also lets Amazon sidestep direct comparisons with OpenAI and Anthropic, both of which it once <a href="https://www.theverge.com/2024/3/29/24116056/amazon-ai-race-anthropic-olympus-claude">hoped to compete with at the model layer</a>.</p>

<p class="has-text-align-none">Whether Forge is genuinely pioneering or just clever positioning depends, of course, on developer adoption. Amazon insists that the model race, as it&#8217;s widely understood, doesn’t matter. If that ends up being true, the scoreboard shifts to something much quieter and harder to game: whether AI models actually deliver real-world utility.</p>

<iframe loading="lazy" src="https://sources.news/embed" width="480" height="320" frameborder="0"></iframe>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Alex Heath</name>
			</author>
			
			<title type="html"><![CDATA[AI startups are turning their revenue into recruiting bait]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/column/826172/ai-startup-arr-numbers-sierra-bret-taylor" />
			<id>https://www.theverge.com/?p=826172</id>
			<updated>2025-11-21T09:21:25-05:00</updated>
			<published>2025-11-21T12:00:00-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Column" /><category scheme="https://www.theverge.com" term="Sources" />
							<summary type="html"><![CDATA[This is an excerpt of Sources by Alex Heath, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week. A new trend has quickly emerged for AI startups that want to stand out from the rest: brag about revenue. Take Sierra, Bret Taylor and Clay Bavor’s AI customer support [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="Sierra’s Clay Bavor and Bret Taylor. | Photo: Sierra" data-portal-copyright="Photo: Sierra" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/11/bydoratsui_SierraSummit11.6.25_0047-1.jpeg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Sierra’s Clay Bavor and Bret Taylor. | Photo: Sierra	</figcaption>
</figure>
<p class="has-text-align-none"><em>This is an excerpt of <a href="https://sources.news/" target="_blank" rel="noreferrer noopener">Sources by Alex Heath</a>, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week.</em></p>

<p class="has-text-align-none">A new trend has quickly emerged for AI startups that want to stand out from the rest: brag about revenue.</p>

<p class="has-text-align-none">Take Sierra, Bret Taylor and Clay Bavor’s AI customer support firm that was recently valued at $10 billion. On paper, you’d think Sierra could have its pick of just about anyone who wants to work in AI — both co-founders are well-known names in Silicon Valley, Taylor is also the chairman of OpenAI, and Sierra has raised more than $600 million in less than two years.</p>

<p class="has-text-align-none">But even Sierra feels the need to put a giant number on the board to compete for talent. Taylor told me on Thursday that the company has reached $100 million in annual recurring revenue, up from about $20 million this time last year. Unlike many AI startups now flexing their ARR, Sierra books its revenue through upfront contracts. The company says its customer support agents have already been used by hundreds of millions of people, many of whom wouldn’t know they’re interacting with an AI to process a return or troubleshoot a bug. Its customers include SoFi, Wayfair, Ramp, Rocket Mortgage, and hundreds more.</p>

<figure class="wp-block-pullquote"><blockquote><p>“I think AI is a category where it’s relatively easy to make a demo and sort of win a popularity contest on social media.”</p></blockquote></figure>

<p class="has-text-align-none">Taylor spent a good chunk of our conversation explaining why he thinks Sierra’s $100 million means more than the typical AI startup ARR number. Sierra follows the same model used by public enterprise software companies like Salesforce and ServiceNow. It signs at least 12-month, often multi-year contracts, bills annually up front, and gives customers 30 days to pay after signing.</p>

<p class="has-text-align-none">By contrast, many AI startups, especially those with more consumer-ish products or usage-based pricing, reach a public ARR figure by multiplying a good month&#8217;s revenue by 12. If growth slows or users churn, that ARR evaporates just as quickly. Taylor’s argument is that Sierra’s number looks more like what public-market investors care about: contracted revenue that’s harder to walk away from.</p>

<p class="has-text-align-none">He wouldn’t name names, but Taylor made it clear that Sierra is trying to separate itself from AI startups that tout ARR off a leaky base of pay-as-you-go users. In those cases, an ARR figure can mask high churn or a product that’s riding a hype wave or temporarily juicing sign-ups with incentives.</p>

<p class="has-text-align-none">“I think AI is a category where it’s relatively easy to make a demo and sort of win a popularity contest on social media,” he said. “But creating a durable revenue stream, especially from serving the Fortune 1000 and regulated industries, is incredibly challenging. I think a lot of people want to work for the leader in the category.”</p>

<p class="has-text-align-none">That milestone puts Sierra among the fastest-growing AI companies, though it’s hard to do apples-to-apples comparisons in a world where private companies can define their own metrics.&nbsp;</p>

<p class="has-text-align-none">“There is no official leaderboard, but we believe we’re fairly far ahead of the other companies in our category,” Taylor said. “We want to make sure recruits know that and potential customers know that because I think it is a signal that we’re doing something right and the product is high quality and our customers like working with us and have invested deeply in us.”</p>

<figure class="wp-block-pullquote"><blockquote><p>Real estate moves are another signal</p></blockquote></figure>

<p class="has-text-align-none">That’s the subtext here: ARR has quietly become one of the most important recruiting signals in the AI startup market. Startups used to hype funding rounds or valuation. They still do, but some are also sharing revenue numbers that, in a different era, would have stayed buried.&nbsp;</p>

<p class="has-text-align-none">Lovable CEO Anton Osika recently <a href="https://techcrunch.com/2025/11/19/as-lovable-hits-200m-arr-its-ceo-credits-staying-in-europe-for-its-success/">shared</a> at a conference that the company doubled its ARR to $200 million in four months, and Cursor this month <a href="https://cursor.com/blog/series-d">announced</a> that it passed $1 billion in annualized revenue. For a recruit, these revenue stats are meant to signal that a startup isn’t just riding a hype cycle but has customers and real traction.</p>

<p class="has-text-align-none">Taylor’s mental model for what’s happening now is the late ’90s. “I think the closest analog to this AI wave is the dot-com boom or bubble, depending on your level of pessimism,” he said. Back then, he explained, everyone knew e-commerce was going to be big, but there was a massive difference between working at Buy.com and Amazon.</p>

<p class="has-text-align-none">“As a candidate, you want to work for the company that’s going to end up being the leader,” Taylor said. His pitch is that Sierra is on that path in AI customer support: a company with real contracts from big, often regulated customers.</p>

<p class="has-text-align-none">His hiring plans reflect that ambition. Sierra has roughly 300 employees today. Taylor wouldn’t commit to a precise headcount target for next year, but he acknowledged that “doubling or more” is in scope, driven mostly by international expansion and customer-facing roles.</p>

<p class="has-text-align-none">His real estate moves are another signal. Taylor confirmed that Sierra has signed a lease for roughly 300,000 square feet of office space in San Francisco’s China Basin neighborhood, a block from Oracle Park. The company will vacate its current building and nearly triple its footprint, marking the city’s largest office lease since OpenAI took over Old Navy’s former headquarters near the Chase Center last year.</p>

<p class="has-text-align-none">Taylor is also already thinking about what happens when today’s crop of AI startups matures. He expects the industry to follow a familiar pattern: an early “best-of-breed” phase where specialist tools grow quickly, followed by a platform-consolidation wave. “Reductively, you either earn the right to consolidate or you get consolidated.” Sierra isn’t out shopping for acquisitions yet, he said, but it wants to be in the former camp when that moment comes.</p>

<p class="has-text-align-none">All of this explains why a company like Sierra — backed by blue-chip investors, run by a former Salesforce co-CEO who now chairs OpenAI — is out there trumpeting its early revenue. The AI agent market for customer support is already crowded, with upstarts like Decagon and incumbents like Intercom and Salesforce vying for the same budgets. In that world, a startup announcing nine figures of ARR is a signal of strength aimed at the small pool of people who can work anywhere.</p>

<iframe loading="lazy" src="https://sources.news/embed" width="480" height="320" frameborder="0"></iframe>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Alex Heath</name>
			</author>
			
			<title type="html"><![CDATA[What insiders anonymously think about the AI race]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/column/820664/cerebral-valley-conference-ai-anonymous-survey" />
			<id>https://www.theverge.com/?p=820664</id>
			<updated>2025-11-13T18:27:04-05:00</updated>
			<published>2025-11-13T21:00:00-05:00</published>
			<category scheme="https://www.theverge.com" term="Column" /><category scheme="https://www.theverge.com" term="Sources" />
							<summary type="html"><![CDATA[This is an excerpt of Sources by Alex Heath, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week. I spent yesterday at Eric Newcomer’s Cerebral Valley conference in San Francisco, which is now in its third year. I’ve attended this event for three years in a row because [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/11/STKS522_AGI_D.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none"><em>This is an excerpt of <a href="https://sources.news/" target="_blank" rel="noreferrer noopener">Sources by Alex Heath</a>, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week.</em></p>

<p class="has-text-align-none">I spent yesterday at Eric Newcomer’s Cerebral Valley conference in San Francisco, which is now in its third year. I’ve attended this event for three years <a href="https://www.theverge.com/2024/11/22/24303470/ai-model-llm-progress-hitting-scaling-wall">in a row</a> because Eric does a great job curating the speakers and the audience, and the conversations are more substantive than a typical industry event.</p>

<p class="has-text-align-none">This year was no exception, though the most interesting part of the day came when the results of an anonymous audience survey were shared onstage. The survey’s more than 300 respondents were primarily AI company founders, followed by investors, other industry professionals (including product leaders and engineers), and members of the media.&nbsp;</p>

<p class="has-text-align-none">Here are the results of the survey in order of how they were shared onstage:</p>

<p class="has-text-align-none"><strong>1. What will OpenAI&#8217;s annualized revenue be at the end of 2026? </strong></p>


<p class="has-text-align-none">Median answer: $30 billion.</p>

<p class="has-text-align-none"><strong>2. What will Nvidia be worth at the end of 2026?</strong></p>


<p class="has-text-align-none">Median answer: $6 trillion.</p>

<p class="has-text-align-none"><strong>3. What year will an independent committee of experts, as dictated by the Microsoft-OpenAI agreement, declare that we have reached AGI?</strong></p>

<p class="has-text-align-none">Top answer: 2030.</p>

<p class="has-text-align-none"><strong>4. Which venture capital firm’s AI portfolio are you the most jealous of?</strong></p>


<p class="has-text-align-none">The top three answers, from first to last: Andreessen Horowitz, Khosla Ventures, and Sequoia. </p>

<p class="has-text-align-none"><strong>5. If you could put money in any private technology companies today, what would they be?</strong></p>


<p class="has-text-align-none">Top companies in order from first to last: Anthropic, OpenAI, Cursor, Anduril, SpaceX, and OpenEvidence.</p>

<p class="has-text-align-none"><strong>6. What global company’s model will top the LMArena web development leaderboard at the end of 2026?</strong></p>


<p class="has-text-align-none">In order from first to last: OpenAI, Anthropic, Gemini, Grok, Qwen. </p>

<p class="has-text-align-none"><strong>7. If you could short a $1 billion-plus valuation startup, which would it be?</strong></p>


<p class="has-text-align-none">First place was Perplexity. Second place went to OpenAI. Other names shown onstage: Cursor, Figure, Harvey, Mercor, Mistral, and Thinking Machines.</p>

<p class="has-text-align-none">What stood out to me from these results (Newcomer <a href="https://www.newcomer.co/p/cerebral-valley-ai-summit-audience">has published the slides</a> for his paying subscribers):</p>

<ul class="wp-block-list">
<li><strong>A softening on OpenAI:</strong> Given that Sam Altman has said OpenAI plans to end this year with $20 billion of annualized revenue, this group of AI insiders doesn’t expect next year to be as exponential for the business as the leap from 2024 to 2025. The prediction that AGI won’t be declared until 2030 suggests a lack of faith in model progress meaningfully improving in the near term, although that answer could also be clouded by the complexity of how OpenAI and Microsoft must settle on how it’s decided. (I’m still waiting for either company to share information on who its “independent committee of experts” will be and how they’ll decide.) It was also notable that more attendees wanted to buy Anthropic stock than OpenAI’s, despite the consensus being that OpenAI would lead LMArena next year. </li>



<li><strong>Meta wasn’t in the conversation.</strong> It wasn’t named on the list of models likely to lead LMArena next year. The presence of a Chinese model (Alibaba’s Qwen) in the top five signals a shift that’s already underway, as many companies fine-tune open-source Chinese models rather than Llama. Meta has a lot to prove if it wants to re-enter the model race. </li>



<li><strong>Perplexity is controversial.</strong> But everyone working in AI already knows that.</li>
</ul>

<p class="has-text-align-none"><strong>Other takeaways from Cerebral Valley:</strong></p>

<p class="has-text-align-none"><strong>What’s driving </strong><a href="https://www.theverge.com/2024/7/1/24190060/amazon-adept-ai-acquisition-playbook-microsoft-inflection"><strong>reverse acquihires</strong></a><strong>? </strong>I attended a breakout session about AI acquihires, such as Meta’s deal with Scale AI to hire Alexandr Wang and Google’s deals with Character and Windsurf. I’ve closely covered many of these deals over the past couple of years, but it was interesting to hear the group’s perspective on what drives them. Antitrust scrutiny of Big Tech is certainly a factor, but some who have been involved in these kinds of transactions also made the point that bigger companies are racing each other to shore up talent and move faster than their competition. They have seemingly “infinite money,” as one member of the group put it, and see it as a game of placing bets on a very finite pool of talent. One AI founder in the group, who fielded multiple offers of this kind, recalled a member of a Big Tech company’s corporate development team asking <em>him</em> how much he wanted his startup to be valued for a deal.&nbsp;</p>

<p class="has-text-align-none"><strong>No one cares about AGI anymore. </strong>At the <a href="https://www.theverge.com/2023/3/30/23746922/command-line-a-visit-to-cerebral-valley">first Cerebral Valley conference</a>, the topic of AGI was a major throughline. A startup founder onstage said that “we’re going to be dead” by the time OpenAI releases GPT-10. This year, multiple onstage conversations noted how AGI barely registered as a discussion topic. Instead, most of the interviews focused on the business applications of AI. Multiple companies represented onstage this year didn&#8217;t exist at the time of the first Cerebral Valley event and are now worth billions of dollars. There was a strain of AI bubble fear throughout the day, but mostly, everyone seemed dialed in on how they could win market share and provide products that people want to pay for.&nbsp;</p>

<p class="has-text-align-none"><strong>Standout quotes from onstage interviews:</strong></p>

<ul class="wp-block-list">
<li>Replit CEO Amjad Masad: “If you are competing on price, then maybe you don’t have a business.”</li>



<li>Elad Gill: “Most companies should sell at some point. There&#8217;s often a market-maximizing moment where you&#8217;re going to get the best deal you can. A very small number of companies should never ever sell.”</li>



<li>San Francisco Mayor Daniel Lurie: “People are starting to complain about traffic. Thank goodness. I want those complaints. We still have a lot of empty office space.” </li>



<li>Anthropic CPO Mike Krieger: “Time spent is, I can tell you, not on any of the dashboards that I look at. It&#8217;s just not a main consideration.” </li>



<li>xAI co-founder Jimmy Ba: “Knowledge is just crystalized computation from the past.”</li>
</ul>

<iframe loading="lazy" src="https://sources.news/embed" width="480" height="320" frameborder="0"></iframe>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Alex Heath</name>
			</author>
			
			<title type="html"><![CDATA[Election night at Kalshi HQ]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/column/815208/election-night-at-kalshi-hq" />
			<id>https://www.theverge.com/?p=815208</id>
			<updated>2025-11-06T09:18:58-05:00</updated>
			<published>2025-11-05T21:00:00-05:00</published>
			<category scheme="https://www.theverge.com" term="Column" /><category scheme="https://www.theverge.com" term="Sources" />
							<summary type="html"><![CDATA[This is an excerpt of&#160;Sources by Alex Heath, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week. At 8PM on election night in New York City, I arrived at an unmarked office building in the Meatpacking District. Inside, a few dozen young Kalshi employees moved between clusters [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/11/Tarek-Mansour-Sept-2025.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none"><em>This is an excerpt of&nbsp;<a href="https://sources.news/" target="_blank" rel="noreferrer noopener">Sources by Alex Heath</a>, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week.</em></p>

<p class="has-text-align-none">At 8PM on election night in New York City, I arrived at an unmarked office building in the Meatpacking District.</p>

<p class="has-text-align-none">Inside, a few dozen young Kalshi employees moved between clusters of desks, pizza boxes, and a large projector displaying live markets for the day’s key races. The vibe was quiet but focused. On the screen, numbers flickered as bets adjusted in real time.</p>

<p class="has-text-align-none">Near the projector, co-founders Tarek Mansour and Luana Lopes Lara chatted with a CBS News crew filming a segment for the next morning. CBS had just called the Virginia governor’s race. Mansour pointed out that Kalshi’s market had predicted the result almost an hour earlier.</p>

<figure class="wp-block-pullquote"><blockquote><p>“We’re doing a billion dollars in transaction volume a week now.”</p></blockquote></figure>

<p class="has-text-align-none">I expected a trading floor atmosphere. Instead, the office felt subdued. “I think it’s quieter than usual because there’s less volatility on this one,” Mansour told me later from a small conference room. The New York mayor’s race had long been priced as a landslide. Zohran Mamdani had held a roughly 95 percent chance of winning on Kalshi (and its rival Polymarket) even before polls closed. Still, about $100 million in trades on the New York race went through Kalshi that day.</p>

<p class="has-text-align-none">In recent months, I’ve been tracking the rise of prediction markets and particularly Kalshi. Though Kalshi is federally licensed and much larger than Polymarket, it’s the latter that dominates the conversation in tech circles. Mansour wants to change that.</p>
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/11/Screenshot-2025-11-05-at-8.37.42%E2%80%AFPM.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="" title="" data-has-syndication-rights="1" data-caption="&lt;em&gt;Kalshi’s betting page for the New York City mayoral election, captured one day after the election.&lt;/em&gt;" data-portal-copyright="" />
<p class="has-text-align-none">“Kalshi is arguably one of — maybe <em>the</em> — fastest-growing companies in America this year,” he told me. “We’re doing a billion dollars in transaction volume a week now.” Last year, the company’s transaction volume was just $300 million for the entire year. Mansour declined to share revenue figures, but even at a 1–2 percent fee per trade, the math suggests that business is booming.</p>

<p class="has-text-align-none">Three factors have fueled that growth this year: securing a federal license to operate, expanding into sports betting, and striking a partnership with Robinhood to power prediction markets. While sports have been a major draw, Mansour’s ambitions go far beyond that.</p>

<p class="has-text-align-none">“I think prediction markets are the next generation of the stock market,” he said. “They have media consequences. Everyone is an expert on something — everyone has opinions. These markets give those opinions a price.”</p>

<figure class="wp-block-pullquote"><blockquote><p>Kalshi called the New Jersey governor’s race 32 minutes before any news outlet</p></blockquote></figure>

<p class="has-text-align-none">He hinted at new partnerships with media outlets and even entertainment event tie-ins. “We’re doing a lot with news networks in the coming months,” he said. “If the truth that comes out of these markets becomes mainstream, we’ve basically achieved our mission.”</p>

<p class="has-text-align-none">Given how new prediction markets are, Kalshi and Polymarket still need to prove that they can remain reliable sources for predicting elections. <em>Fox News</em> took a reputational hit for accidentally <a href="https://www.nytimes.com/2023/03/13/upshot/fox-arizona-election-call.html">calling Arizona for Joe Biden too early</a> in 2020. Meanwhile, Kalshi and Polymarket brag about calling races even before results are in. If one of them gets a key race wrong, it could call into question the legitimacy of prediction markets.</p>

<p class="has-text-align-none">With less than an hour left before polls closed, Mansour showed me Kalshi data from the New York mayoral race. Voters in the city were buying Andrew Cuomo contracts more heavily, but Mamdani dominated elsewhere. He was winning among women and younger traders; Cuomo’s support skewed older and male.</p>

<p class="has-text-align-none">As we spoke, Kalshi called the New Jersey governor’s race at 8:20PM — 32 minutes before any news outlet. Mansour compared Kalshi’s role to that of financial markets: “Should the stock market replace bank analysts? No. Analysts provide input, and the market finds the real price. We’re doing the same thing for events.”</p>

<p class="has-text-align-none">I asked whether people constantly text him for predictions, especially on an election night. He laughed. “Yeah. But I tell them: just look at the market. I don’t have any extra information.”</p>

<p class="has-text-align-none">As 9PM neared, I assumed he’d stay in the office as polls closed. But as I stepped into my Uber, I saw him dart out and get into another car down the street.</p>

<p class="has-text-align-none">He didn’t need to wait. Kalshi called the New York race for Mamdani one minute after polls closed and 36 minutes before any media outlet.</p>

<iframe loading="lazy" src="https://sources.news/embed" width="480" height="320" frameborder="0"></iframe>
						]]>
									</content>
			
					</entry>
	</feed>
