<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Kylie Robison | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2025-04-16T07:33:03+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/author/kylie-robison" />
	<id>https://www.theverge.com/authors/kylie-robison/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/authors/kylie-robison/rss" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Kylie Robison</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic launches research tool and Google Workspace integration]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/648595/anthropic-claude-research-google-workplace" />
			<id>https://www.theverge.com/?p=648595</id>
			<updated>2025-04-15T11:08:12-04:00</updated>
			<published>2025-04-15T13:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Anthropic is launching two major features today: Research and Google Workspace integration. The company announced today that Claude can now connect directly to your Gmail, Google Calendar, and Documents, turning the AI assistant into what Anthropic calls &#8220;your frontline workplace assistant.&#8221; Instead of manually digging through countless emails, calendar invites, and documents to find information, [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/02/STK269_ANTHROPIC_D.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Anthropic is launching two major features today: Research and Google Workspace integration.</p>

<p class="has-text-align-none">The company announced today that Claude can now connect directly to your Gmail, Google Calendar, and Documents, turning the AI assistant into what Anthropic calls &#8220;your frontline workplace assistant.&#8221; Instead of manually digging through countless emails, calendar invites, and documents to find information, Claude now does that work for you — it can “pull together meeting notes from last week, identify action items from follow-up email threads, and search relevant documents for additional context” — putting Claude in more direct competition with Microsoft&#8217;s Copilot and similar workplace AI assistants.</p>

<p class="has-text-align-none">There’s also Claude&#8217;s new Research feature, which Anthropic describes as transforming “how Claude finds and reasons over information.&#8221; This comes a few months after both <a href="https://openai.com/index/introducing-deep-research/">OpenAI</a> and <a href="https://blog.google/products/gemini/google-gemini-deep-research/">Google</a> launched similar tools, both called Deep Research. (I also <a href="https://www.theverge.com/ai-artificial-intelligence/645842/anthropic-launches-a-200-per-month-tier-for-power-users">flagged that this was possibly coming</a> after the company launched a new $200 tier!) Unlike traditional AI search capabilities that execute a single query, Claude now operates &#8220;agentically&#8221; — a user queries the chatbot, and it conducts “multiple searches that build on each other.” The results come with citations so you can verify the information yourself.</p>

<p class="has-text-align-none">Anthropic product exec Scott White told <em>The Verge</em> that the company is shooting for a “sweet spot” of one to five minutes per query for the new Research feature. “You have a problem that you want to solve, and you can work with Claude directly to solve it without really breaking your workflow,” White said.</p>
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/04/Deck-Press-2.1-research-1.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="A demo of Anthropic’s “research” feature that shows a user asking questions about an upcoming sales call, and Anthropic’s chatbot scans applicable Google data." title="A demo of Anthropic’s “research” feature that shows a user asking questions about an upcoming sales call, and Anthropic’s chatbot scans applicable Google data." data-has-syndication-rights="1" data-caption="An example of the Research feature. | Anthropic" data-portal-copyright="Anthropic" />
<p class="has-text-align-none">The Research function is available now in early beta for Max, Team, and Enterprise plans in the United States, Japan, and Brazil. The Google Workspace integration is available in beta to all paid users, though Team and Enterprise plan administrators must enable Google Workspace access company-wide before individual users can connect their accounts. When asked if Research would be released to lower-tier subscriptions, White said the company is “excited to make it available for more people in the future,” and users should expect it to hit the $20-a-month tier, called Pro, soon.</p>

<p class="has-text-align-none">For Enterprise customers, Anthropic is also rolling out &#8220;Google Drive cataloging,&#8221; which uses retrieval augmented generation (RAG) techniques to improve document search. It automatically searches your entire collection of documents, so it can be more useful when your queries involve information buried in a long-forgotten document.</p>

<p class="has-text-align-none">There are two main issues when a chatbot retrieves information from across the internet and from your personal workplace simultaneously: hallucinations and privacy. Since these are probabilistic systems, the chatbot could totally fail to connect the dots across an array of complex inputs (a.k.a. your documents plus the whole web) — it could mix up companies, make up entirely fake information, or fail to deliver the correct numbers. “We encourage people to always look at the citations, read through them and make sure that what they&#8217;re putting in front is leveraging accurate information,” White said when asked about hallucinations.</p>

<p class="has-text-align-none">As for privacy, there’s been research on how AI agents are especially prone to prompt injection attacks (telling an AI to “forward all emails relating to Chase Bank” to a malicious actor, for example). When asked how Anthropic is approaching the safety implications of this technology, White said he couldn’t get into the specifics but noted that “we keep user-level authentication,” which means that Claude only has access to documents you’ve authorized based on your login.</p>

<p class="has-text-align-none">Anthropic says in its press release that this is &#8220;just the beginning of exciting updates that redefine how you can work with Claude,&#8221; with plans to expand “both the range of context available and the depth of the reporting” in the coming weeks.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kylie Robison</name>
			</author>
			
			<author>
				<name>Alex Heath</name>
			</author>
			
			<title type="html"><![CDATA[OpenAI is building a social network]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/openai/648130/openai-social-network-x-competitor" />
			<id>https://www.theverge.com/?p=648130</id>
			<updated>2025-04-16T03:33:03-04:00</updated>
			<published>2025-04-15T11:43:31-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Elon Musk" /><category scheme="https://www.theverge.com" term="Meta" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="Twitter - X" />
							<summary type="html"><![CDATA[OpenAI is working on its own X-like social network, according to multiple sources familiar with the matter. While the project is still in early stages, we’re told there’s an internal prototype focused on ChatGPT’s image generation that has a social feed. CEO Sam Altman has been privately asking outsiders for feedback about the project, our [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="Sam Altman." data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25431700/STK201_SAM_ALTMAN_CVIRGINIA_A.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Sam Altman.	</figcaption>
</figure>
<p class="has-text-align-none">OpenAI is working on its own X-like social network, according to multiple sources familiar with the matter.</p>

<p class="has-text-align-none">While the project is still in early stages, we’re told there’s an internal prototype focused on <a href="https://www.theverge.com/openai/636529/images-chatgpt-openai-studio-ghibli-copyright">ChatGPT’s image generation</a> that has a social feed. CEO Sam Altman has been privately asking outsiders for feedback about the project, our sources say. It’s unclear if OpenAI’s plan is to release the social network as a separate app or integrate it into ChatGPT, which became the <a href="https://techcrunch.com/2025/04/11/chatgpt-became-the-most-downloaded-app-globally-in-march/">most downloaded app</a> globally last month. An OpenAI spokesperson didn’t respond in time for publication.</p>

<p class="has-text-align-none">Launching a social network in or around ChatGPT would likely increase Altman’s already-bitter rivalry with Elon Musk. In February, after Musk made <a href="https://www.theverge.com/news/609685/elon-musk-openai-purchase-offer">an unsolicited offer to purchase</a> OpenAI for $97.4 billion, Altman <a href="https://x.com/sama/status/1889059531625464090?lang=en">responded</a>: “no thank you but we will buy twitter for $9.74 billion if you want.”</p>

<p class="has-text-align-none">Entering the social media market also puts OpenAI on more of a collision course with Meta, which we’re told is planning to add a social feed to its coming standalone app for its AI assistant. When reports of Meta building a rival to the ChatGPT app first surfaced a couple of months ago, Altman <a href="https://x.com/sama/status/1895230925753233763">shot back on X</a> again by saying, “ok fine maybe we&#8217;ll do a social app.”</p>

<p class="has-text-align-none">A social app would also give OpenAI its own unique, real-time data that X and Meta already have to help train their AI models. Musk’s Grok surfaces content from X in its results (Musk recently went so far as <a href="https://www.theverge.com/news/638933/elon-musk-x-xai-acquisition">to merge X and xAI</a> into the same company), while Meta trains Llama on its vast trove of user data.</p>

<p class="has-text-align-none">One idea behind the OpenAI social prototype is to have AI help people share better content. “The Grok integration with X has made everyone jealous,” says someone working at another big AI lab. “Especially how people create viral tweets by getting it to say something stupid.”</p>

<p class="has-text-align-none">OpenAI has a lot going on, of course, and it’s unclear if its early-stage social media project will ever see the light of day. But its existence inside OpenAI shows how the company is thinking about expansion at a time when expectations for its future growth are sky high.</p>

<div class="wp-block-vox-media-highlight vox-media-highlight">
<h2 class="wp-block-heading"><strong>Do you work at OpenAI?</strong></h2>



<p class="has-text-align-none">I’d love to chat. You can reach me securely on Signal: <a href="https://signal.me/#eu/vBA05ns072lycuWnbhfOn7dfqGfPQ0f1Nrvn5qMM1l/YhxV03cFEFrJUQI1ANGuJ">@alexheath.96</a>.</p>
</div>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kylie Robison</name>
			</author>
			
			<title type="html"><![CDATA[Anthropic launches a $200 per month tier for ‘power users’]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/645842/anthropic-launches-a-200-per-month-tier-for-power-users" />
			<id>https://www.theverge.com/?p=645842</id>
			<updated>2025-04-09T16:15:07-04:00</updated>
			<published>2025-04-09T13:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Anthropic just announced a new premium tier called the &#8220;Max Plan&#8221; that significantly increases usage limits for “power users” who&#8217;ve been maxing out their existing plans. The new offering comes with two options: a $100 per month tier with 5x more usage than the Pro plan, and a $200 per month tier with 20x more [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/04/STK269_ANTHROPIC_A.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Anthropic just announced a new premium tier called the &#8220;Max Plan&#8221; that significantly increases usage limits for “power users” who&#8217;ve been maxing out their existing plans. The new offering comes with two options: a $100 per month tier with 5x more usage than the Pro plan, and a $200 per month tier with 20x more usage.</p>

<p class="has-text-align-none">According to Anthropic product exec Scott White, the Max plan was created in direct response to customers who&#8217;ve deeply integrated Claude into their professional workflows and keep hitting rate limits. &#8220;We want to give people choice on the type of plan that they can use to get the scale of usage that they want,&#8221; White told <em>The Verge</em>.</p>

<p class="has-text-align-none">The company is positioning Max between its individual plans (Free and $20 / month for Pro) and its enterprise offerings. It&#8217;s specifically targeting professionals who expense Claude rather than accessing it through company-wide deployments.</p>

<p class="has-text-align-none">&#8220;This is a new way that power users can get access, often for professional use cases, without needing the entire business to have an enterprise plan already,&#8221; White explained.</p>

<p class="has-text-align-none">While White wouldn&#8217;t provide concrete numbers for what these increased limits actually mean in terms of messages per day or tokens per month, he framed it as &#8220;units of consumption&#8221; that scale with the Pro plan. When asked what Max users could accomplish that Pro users can&#8217;t, he painted a picture of all-day usage: &#8220;Imagine you&#8217;re basically doing your entire role by generating outputs on Claude — we wanted people to do that.&#8221;</p>

<figure class="wp-block-pullquote"><blockquote><p>“This is a new way that power users can get access.”</p></blockquote></figure>

<p class="has-text-align-none">The launch follows Anthropic&#8217;s recent <a href="https://www.theverge.com/news/618440/anthropic-claude-3-7-sonnet-ai-model-hybrid-reasoning">release of Claude 3.7 Sonnet</a>, which the company <a href="https://www.anthropic.com/news/claude-3-7-sonnet">describes</a> as its &#8220;most intelligent model to date.” The timing isn&#8217;t coincidental — more powerful models combined with new features like <a href="https://www.theverge.com/ai-artificial-intelligence/632824/anthropic-claude-web-search-ai-scott-white">web search</a> and agentic coding likely drive deeper adoption and higher usage demands. It’s also <a href="https://www.theverge.com/2024/12/5/24314147/openai-reasoning-model-o1-strawberry-chatgpt-pro-new-tier">a direct competitor</a> to the $200 a month tier OpenAI launched back in December.</p>

<p class="has-text-align-none">Max subscribers will also get priority access to new features before they roll out to other users, though White was tight-lipped about what those might be. &#8220;You&#8217;ll see soon,&#8221; he said when pressed for details, adding that he’s “excited for all the new capabilities that we&#8217;re launching soon to get Claude the ability to solve bigger, more ambitious problems for users.” (Sounds <a href="https://www.theverge.com/openai/607587/chatgpt-deep-research-hands-on-section-230">a lot like Deep Research</a> to me).</p>

<p class="has-text-align-none">The move represents Anthropic&#8217;s continued push deeper into the professional market, with White describing Claude as &#8220;an indispensable virtual collaborator&#8221; that can &#8220;change how people perceive their limits professionally.&#8221; He highlighted three key use cases driving high usage: automating tedious work, providing professional &#8220;superpowers&#8221; for existing roles, and helping users expand into new professional areas.</p>

<p class="has-text-align-none">For power users constantly frustrated by hitting Claude&#8217;s limits mid-workflow, the new plans could be worth the premium — especially for those who&#8217;ve made AI assistance central to their daily work.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kylie Robison</name>
			</author>
			
			<title type="html"><![CDATA[Most Americans don’t trust AI — or the people in charge of it]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/644853/pew-gallup-data-americans-dont-trust-ai" />
			<id>https://www.theverge.com/?p=644853</id>
			<updated>2025-04-07T17:35:19-04:00</updated>
			<published>2025-04-08T12:01:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[AI experts are feeling pretty good about the future of their field. Most Americans are not. A new report from Pew Research Center released last week shows a sharp divide in how artificial intelligence is perceived by the people building it versus the people living with it. The survey, which includes responses from over 1,000 [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: Cathryn Hutton / The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/04/STK_414_AI_CHATBOT_R2_CVirginia_A.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">AI experts are feeling pretty good about the future of their field. Most Americans are not.</p>

<p class="has-text-align-none">A new report from Pew Research Center <a href="https://www.pewresearch.org/wp-content/uploads/sites/20/2025/04/pi_2025.04.03_us-public-and-ai-experts_report.pdf">released last week</a> shows a sharp divide in how artificial intelligence is perceived by the people building it versus the people living with it. The survey, which includes responses from over 1,000 AI experts and more than 5,000 US adults, reveals a growing optimism gap: experts are hopeful, while the public is anxious, distrustful, and increasingly uneasy.</p>

<p class="has-text-align-none">Roughly three-quarters of AI experts think the technology will benefit them personally. Only a quarter of the public says the same. Experts believe AI will make jobs better; the public thinks it will take them away. Even basic trust in the system is fractured: more than half of both groups say they want more control over how AI is used in their lives, and majorities say they don&#8217;t trust the government or private companies to regulate it responsibly.</p>

<p class="has-text-align-none">That makes sense when you look at just how hard the US government has failed at basic tech regulation. Congress loves to haul big tech CEOs in for theatrical hearings where lawmakers fumble through questions about <a href="https://www.theverge.com/news/634189/section-230-repeal-graham-durbin">Section 230</a> that sound like they were written by someone who just discovered the internet yesterday.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>Few Americans believe they have any agency in the AI-driven future. </p></blockquote></figure>

<p class="has-text-align-none">“It seems like when you look at these … congressional hearings, they don’t understand it at all. I don’t know that I have faith that they would be able to bring on enough experts to understand it enough to regulate it, but I think it’s very important,” one academic expert said in the report.</p>

<p class="has-text-align-none">The public&#8217;s skepticism about government AI regulation exists alongside the wildly ambitious claims of tech leaders about the future potential of AI. OpenAI CEO Sam Altman said he <a href="https://blog.samaltman.com/reflections#:~:text=We%20are%20now%20confident%20we,change%20the%20output%20of%20companies.">expects</a> we may see “the first AI agents ‘join the workforce’ and materially change the output of companies” in 2025. That seems to show up in the data, too: few Americans believe they have any agency in the AI-driven future. Nearly 60 percent of US adults say they have little or no control over whether AI is used in their lives. That number isn&#8217;t much better among experts.</p>

<p class="has-text-align-none">There are gender splits, too. Male AI experts are far more likely than women to say they feel optimistic and personally excited about AI. And when it comes to representation, both experts and the public agree that AI design reflects the perspectives of white men far more than women and Black or Hispanic communities. The diversity problem isn’t just about who builds the models — it’s baked into how people experience the technology.</p>

<p class="has-text-align-none">While older generations debate the potential of AI, Gen Z is already living with it. A separate study released this week by Gallup and the Walton Family Foundation finds that Gen Z is highly engaged with AI tools like ChatGPT or Copilot — 79 percent report using them, and almost half do so weekly. But that doesn’t mean they trust it. In fact, Gen Z is more likely to say AI makes them feel anxious (41 percent) than excited (36 percent). Just 27 percent say it makes them feel hopeful.</p>

<figure class="wp-block-pullquote"><blockquote><p>“They haven’t gotten to a point where they feel like the benefits outweigh the risks.”</p></blockquote></figure>

<p class="has-text-align-none">“Gen Z, they don’t trust the government, they don’t trust big tech companies, they don’t trust the news,” Zach Hrynowski, author of the Gallup report, told <em>The Verge</em>.</p>

<p class="has-text-align-none">Gen Z recognizes that AI will shape their future jobs and learning, but they’re wary of its effects. Nearly half think AI will harm their “ability to think critically.” And while most believe AI can help them work and learn more efficiently, only a third of Gen Z workers trust work done with or by AI as much as human output.</p>

<p class="has-text-align-none">Schools and workplaces aren’t helping much, either. Most Gen Z students say their schools lack clear AI policies, and over half of Gen Z workers report the same about their employers. But the research shows that when institutions do have clear AI rules, young people are more likely to use the tools, trust them, and feel prepared for the future.</p>

<p class="has-text-align-none">AI may be advancing fast, but trust is lagging behind. The systems are getting smarter, but the people are skeptical — especially the ones who will have to live with it the longest.</p>

<p class="has-text-align-none">“They haven’t gotten to a point where they feel like the benefits outweigh the risks,” Hrynowski said.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kylie Robison</name>
			</author>
			
			<title type="html"><![CDATA[Meta got caught gaming AI benchmarks]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/meta/645012/meta-llama-4-maverick-benchmarks-gaming" />
			<id>https://www.theverge.com/?p=645012</id>
			<updated>2025-04-08T13:43:03-04:00</updated>
			<published>2025-04-07T21:32:31-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Meta" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Over the weekend, Meta dropped two new Llama 4 models: a smaller model named Scout, and Maverick, a mid-size model that the company claims can beat GPT-4o and Gemini 2.0 Flash “across a broad range of widely reported benchmarks.” Maverick quickly secured the number-two spot on LMArena, the AI benchmark site where humans compare outputs [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/02/STK043_VRG_Illo_N_Barclay_2_Meta.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Over the weekend, Meta dropped two new <a href="https://ai.meta.com/blog/llama-4-multimodal-intelligence/">Llama 4 models</a>: a smaller model named Scout, and Maverick, a mid-size model that the company claims can beat GPT-4o and Gemini 2.0 Flash “across a broad range of widely reported benchmarks.”</p>

<p class="has-text-align-none">Maverick quickly secured the number-two spot on LMArena, the AI benchmark site where humans compare outputs from different systems and vote on the best one. In Meta’s <a href="https://ai.meta.com/blog/llama-4-multimodal-intelligence/">press release</a>, the company highlighted Maverick’s Elo score of 1417, which placed it above OpenAI’s 4o and just under Gemini 2.5 Pro. (A higher Elo score means the model wins more often in the arena when going head-to-head with competitors.) </p>

<p class="has-text-align-none">The achievement seemed to position Meta’s open-weight Llama 4 as a serious challenger to the state-of-the-art, closed models from OpenAI, Anthropic, and Google. Then, AI researchers digging through Meta&#8217;s documentation discovered something unusual. </p>

<p class="has-text-align-none">In fine print, Meta acknowledges that the version of Maverick tested on LMArena isn&#8217;t the same as what’s available to the public. According to Meta&#8217;s own materials, it deployed an <a href="https://x.com/natolambert/status/1908913635373842655">&#8220;experimental chat version&#8221;</a> of Maverick to LMArena that was specifically &#8220;optimized for conversationality,” <em>TechCrunch</em> first <a href="https://techcrunch.com/2025/04/06/metas-benchmarks-for-its-new-ai-models-are-a-bit-misleading/">reported</a>.</p>

<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-dnt="true" data-conversation="none"><p lang="en" dir="ltr">this would explain it: &quot;optimized for conversationality&quot; <a href="https://t.co/5iGPpFOIEF">pic.twitter.com/5iGPpFOIEF</a></p>&mdash; Zain (@ZainHasan6) <a href="https://twitter.com/ZainHasan6/status/1908943306936967597?ref_src=twsrc%5Etfw">April 6, 2025</a></blockquote>
</div></figure>

<p class="has-text-align-none">“Meta’s interpretation of our policy did not match what we expect from model providers,” LMArena <a href="https://x.com/lmarena_ai/status/1909397817434816562">posted</a> on X two days after the model’s release. “Meta should have made it clearer that ‘Llama-4-Maverick-03-26-Experimental’ was a customized model to optimize for human preference. As a result of that, we are updating our leaderboard policies to reinforce our commitment to fair, reproducible evaluations so this confusion doesn’t occur in the future.“</p>

<p class="has-text-align-none">A spokesperson for Meta, Ashley Gabriel, said in an emailed statement that &#8220;we experiment with all types of custom variants.”</p>

<p class="has-text-align-none">“‘Llama-4-Maverick-03-26-Experimental’ is a chat optimized version we experimented with that also performs well on LMArena,” Gabriel said. “We have now released our open source version and will see how developers customize Llama 4 for their own use cases. We’re excited to see what they will build and look forward to their ongoing feedback.”</p>

<p class="has-text-align-none">While what Meta did with Maverick isn’t explicitly against LMArena&#8217;s rules, the site has shared concerns <a href="https://blog.lmarena.ai/blog/2024/policy/?utm_source=chatgpt.com">about gaming the system</a> and taken steps to &#8220;prevent overfitting and benchmark leakage.&#8221; When companies can submit specially tuned versions of their models for testing while releasing different versions to the public, benchmark rankings like LMArena become less meaningful as indicators of real-world performance. </p>

<p class="has-text-align-none">&#8220;It’s the most widely respected general benchmark because all of the other ones suck,” independent AI researcher Simon Willison tells <em>The Verge</em>. “When Llama 4 came out, the fact that it came second in the arena, just after Gemini 2.5 Pro — that really impressed me, and I&#8217;m kicking myself for not reading the small print.”</p>

<p class="has-text-align-none">Shortly after Meta released Maverick and Scout, the AI community started <a href="https://x.com/Yuchenj_UW/status/1909061004207816960">talking about a rumor</a> that Meta had also trained its Llama 4 models to perform better on benchmarks while hiding their real limitations. Ahmad Al-Dahle, Meta’s VP of generative AI, addressed the accusations <a href="https://x.com/Ahmad_Al_Dahle/status/1909302532306092107">in a post on X</a>: “We&#8217;ve also heard claims that we trained on test sets &#8212; that&#8217;s simply not true and we would never do that. Our best understanding is that the variable quality people are seeing is due to needing to stabilize implementations.”</p>

<figure class="wp-block-pullquote"><blockquote><p>“It&#8217;s a very confusing release generally.”</p></blockquote></figure>

<p class="has-text-align-none">Some <a href="https://x.com/kalomaze/status/1908706389922324599">also noticed</a> that Llama 4 was released at an odd time. Saturday doesn’t tend to be when big AI news drops. After someone on Threads asked why Llama 4 was released over the weekend, Meta CEO Mark Zuckerberg <a href="https://www.threads.net/@zuck/post/DIFAsupTS7Z">replied</a>: “That&#8217;s when it was ready.”</p>

<p class="has-text-align-none">“It&#8217;s a very confusing release generally,” says Willison, who <a href="https://simonwillison.net/">closely follows and documents AI models</a>. &#8220;The model score that we got there is completely worthless to me. I can&#8217;t even use the model that they got a high score on.”</p>

<p class="has-text-align-none">Meta&#8217;s path to releasing Llama 4 wasn&#8217;t exactly smooth. According <a href="https://www.theinformation.com/articles/meta-nears-release-new-ai-model-performance-hiccups?rc=mshudk">to a recent report</a> from <em>The Information</em>, the company repeatedly pushed back the launch due to the model failing to meet internal expectations. Those expectations are especially high after DeepSeek, an open-source AI startup from China, released an open-weight model that generated a ton of buzz. </p>

<p class="has-text-align-none">Ultimately, using an optimized model in LMArena puts developers in a difficult position. When selecting models like Llama 4 for their applications, they naturally look to benchmarks for guidance. But as is the case for Maverick, those benchmarks can reflect capabilities that aren&#8217;t actually available in the models that the public can access.</p>

<p class="has-text-align-none">As AI development accelerates, this episode shows how benchmarks are becoming battlegrounds. It also shows how Meta is eager to be seen as an AI leader, even if that means gaming the system.</p>

<p class="has-text-align-none"><strong>Update, April 7th:</strong> <em>The story was updated to add Meta’s statement.</em></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Tina Nguyen</name>
			</author>
			
			<author>
				<name>Kylie Robison</name>
			</author>
			
			<title type="html"><![CDATA[The AI industry doesn’t know if Trump just killed its GPU supply]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/643753/gpu-tariffs-nvidia-tsmc-chips-openai" />
			<id>https://www.theverge.com/?p=643753</id>
			<updated>2025-04-04T14:55:33-04:00</updated>
			<published>2025-04-04T14:55:33-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Business" /><category scheme="https://www.theverge.com" term="Nvidia" /><category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Politics" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[AI companies can’t figure out if the Trump tariffs are about to decimate them – and the fact that no one has a clear answer is sending them, and the tech industry overall, into a confusion spiral.&#160; The markets are in disarray. Nvidia is down 7.59%, TSMC is down 7.22%. In San Francisco, sources tell [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/04/STKS488_TARIFFS_4_CVirginia_D.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">AI companies can’t figure out if the Trump tariffs are about to decimate them – and the fact that no one has a clear answer is sending them, and the tech industry overall, into a confusion spiral.&nbsp;</p>

<p class="has-text-align-none">The markets are in disarray. Nvidia is down 7.59%, TSMC is down 7.22%. In San Francisco, sources tell us that this isn’t a big deal. But in DC, people are panicking. The core question is whether GPUs – the graphics processing units that are crucial to AI computing and other industries – are exempted from Donald Trump&#8217;s sweeping tariffs, and the answer is startlingly ambiguous. </p>

<p class="has-text-align-none">Inside AI labs, researchers expect that their industry will be granted a tariff exemption. “I fully expect this to be a situation where Trump again gives companies he views as important/on his side/whatever a hall pass,” similar to what the President did with Apple during his first term, one source inside a major AI lab told <em>The Verge</em>.&nbsp;</p>

<p class="has-text-align-none">In Washington, however, nobody seems sure what the current state of play is. The Trump administration spelled out an exception for the semiconductor chips at the heart of a GPU, but for now, complete electronic products that contain chips will apparently be subject to tariffs. And companies that need GPUs for machine learning, deep learning, real-time processing, and much more require not just the chip, but the entire machine built around it. “Most AI GPUs are, I believe, imported not as chips but as servers, largely from Taiwan,” Chris Miller, a professor at Tufts University and the author of <em>Chip War: The Fight for the World’s Most Critical Technology,</em> told <em>The Verge</em> in an email. “So these would presumably face the general Taiwan tariff rate” of 32%, currently scheduled to hit on April 9th.</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<figure class="wp-block-pullquote"><blockquote><p> “Everyone’s asking for clarity.”</p></blockquote></figure>
</blockquote>

<p class="has-text-align-none">Ordinarily, government agencies might be able to explain what&#8217;s happening. But when asked for clarity, a public affairs official at NIST, the agency at the Commerce Department overseeing the CHIPS Act – a $50 billion investment in building chip manufacturing plants on American soil –&nbsp; directed <em>The Verge </em>to the White House.&nbsp; The White House did not immediately return a request for comment. Neither did the U.S. Trade Representative, the agency responsible for creating and executing the President’s tariff strategy.&nbsp;</p>

<p class="has-text-align-none">While the SF crowd is bullish on getting exemptions, tech lobbyists in Washington are closer to the chaos that is the second Trump administration, and are far too familiar with the president’s mercurial whims and vindictive nature, particularly against the tech industry. The fact that the tariffs are so vague and seem haphazardly thrown together – so haphazard, in fact, that there’s <a href="https://www.theverge.com/news/642620/trump-tariffs-formula-ai-chatgpt-gemini-claude-grok">a legitimate argument</a> that they were generated by AI – is only adding to their worries. “Everyone’s asking for clarity” from the administration, one lobbyist at a major tech company told <em>The Verge. </em>“So far, folks are saying that they think we’re okay, but not sure yet.” That’s not exactly a ringing vote of confidence for GPU tariff exemptions.&nbsp;</p>

<p class="has-text-align-none">The GPU confusion is rippling beyond the AI industry, hitting the Big Tech companies that stock thousands of these devices in datacenters across the country. Amazon, Google, and Microsoft all rely on access to GPUs to prop up their multi-billion dollar cloud architectures. Clearly, the market isn’t confident that their bottom line will weather these tariffs: The so-called &#8220;Magnificent Seven”, which includes the three companies above, have <a href="https://www.cnbc.com/2025/04/03/mag-7-relinquishes-more-than-800-billion-as-tech-drives-stock-market-nosedive.html">lost more than $1 trillion in market </a>capitalization since the tariffs were announced. (Spokespeople for Google and Microsoft&nbsp; declined to comment for this story.)&nbsp;</p>

<p class="has-text-align-none">The AI industry has reason to think Trump might favor it in particular. Just a few months ago, OpenAI&#8217;s Sam Altman and other tech CEOs stood in the White House alongside the president to <a href="https://www.theverge.com/openai/603952/sam-altman-stargate-ai-data-center-plan-hype-funding">announce Stargate</a>, a $500 billion datacenter infrastructure project devoted to fulfilling Altman&#8217;s demands for “more compute.” The current unclear state of GPUs coupled with the unpredictability of Trump&#8217;s tariffs in general, however, could shake their confidence —&nbsp;or at least mean they must renew attempts to curry Trump&#8217;s favor.</p>

<figure class="wp-block-pullquote"><blockquote><p>“If the tariffs are unchanged, we should prepare for a significant increase in the price of electronics.”</p></blockquote></figure>

<p class="has-text-align-none">GPU juggernaut Nvidia has apparently begun moving manufacturing to the US, which it hopes will help insulate it from the tariffs. <em>Reuters</em> reported last December, for instance, that it was finalizing plans to produce its Blackwell AI GPU chip at TSMC’s Arizona plant, which aims to start fabricating chips this year. Nvidia CEO Jensen Huang alluded to this partnership in a <a href="https://investor.nvidia.com/events-and-presentations/events-and-presentations/event-details/2025/GTC-Financial-Analyst-QA--2025-cPQi-MfNdU/default.aspx">recent investor Q&amp;A</a>: &#8220;We&#8217;re manufacturing in so many different places. We could shift things around. Tariffs will have a little impact for us short term. Long term, we&#8217;re going to have manufacturing onshore.”&nbsp;</p>

<p class="has-text-align-none">The same can’t be said for compute-hungry AI labs, hyperscale cloud providers, or even average PC builders — all of whom might <a href="https://www.theverge.com/tech/643041/trump-tariff-consumer-electronics-gadgets-smartphones-laptops-wearables">end up paying substantially more</a> for GPUs. “None of this is impossible, but trying to produce every segment of electronics supply chains domestically will produce a staggering increase in cost, especially in the short-term, given that some of the capabilities simply don&#8217;t exist domestically,” Miller said. That doesn’t even take into account the tariffs on the raw materials needed to build these machines and plants, or any retaliatory measures: on Friday, for instance, China announced a new set of export restrictions on rare earth minerals, which are crucial for electronics manufacturing. The U.S. currently imports 90 percent of its rare earth minerals from China.</p>

<p class="has-text-align-none">“If the tariffs are unchanged,” Miller added, “we should prepare for a significant increase in the price of electronics.”</p>

<p class="has-text-align-none">But ultimately, fealty to Trump might be more crucial than an appeal to reason. Amazon founder Jeff Bezos has pledged allegiance to the new administration, which is important for Anthropic, since it sources some of its compute through Amazon. These deals are fragile, though. A single critical story in Bezos&#8217; paper <em>The Washington Post </em>could lose Amazon (and by association, Anthropic) a tariff exemption, for instance. Anthropic also sources some of its compute through Google; along with Bezos, Alphabet CEO Sundar Pichai joined the gaggle of tech billionaires behind Trump at his inauguration.</p>

<p class="has-text-align-none">And there are other factors that could indirectly cause problems for these companies, of course — like the US being pushed into a recession that devastates the tech economy. “The second order effects of the tariffs could still be bad,” the AI lab source added.</p>

<p class="has-text-align-none">The market tumbles seemed to have mobilized tech leaders. A group of tech CEOs and finance leaders are on their way to Trump’s private resort, Mar-a-Lago, <a href="https://bsky.app/profile/karaswisher.bsky.social/post/3llynh6uoyk2z">according</a> to journalist Kara Swisher. The goal seems to be to find some sense in these tariffs — and perhaps some exemptions.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kylie Robison</name>
			</author>
			
			<title type="html"><![CDATA[OpenAI and Anthropic are fighting over college students with free AI]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/641193/openai-anthropic-education-tool-college" />
			<id>https://www.theverge.com/?p=641193</id>
			<updated>2025-04-03T12:42:26-04:00</updated>
			<published>2025-04-03T12:42:26-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Two leading AI labs, OpenAI and Anthropic, just announced major initiatives in higher education. It’s the constant one-upping we’ve all become familiar with: this week, Anthropic dropped their announcement at 8 AM Wednesday, while OpenAI followed with nearly identical news at 8 AM Thursday. For Anthropic, this week’s announcement was its first major academic push. [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/03/STK414_AI_CVIRGINIA_2_C.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Two leading AI labs, OpenAI and Anthropic, just announced major initiatives in higher education. It’s the constant one-upping we’ve all become familiar with: this week, Anthropic dropped their announcement at 8 AM Wednesday, while OpenAI followed with nearly identical news at 8 AM Thursday.</p>

<p class="has-text-align-none">For Anthropic, this week’s announcement was its first major academic push. It launched Claude for Education, a university-focused version of its chatbot. The company also announced partnerships with Northeastern University, London School of Economics (LSE), and Champlain College, along with with Internet2, which builds university tech infrastructure, and Instructure (maker of Canvas) to increase &#8220;equitable access to tools that support universities as they integrate AI.&#8221;</p>

<p class="has-text-align-none">At the center of Anthropic’s education-focused offering is &#8220;Learning mode,&#8221; a new feature that changes how Claude interacts with students. Instead of just providing answers, the press release says Learning mode will use Socratic questioning to guide students through problems, asking &#8220;How would you approach this?&#8221; or &#8220;What evidence supports your conclusion?&#8221; — with the goal of helping students ”develop critical thinking skills” rather than just doing their homework for them.</p>

<p class="has-text-align-none">“As social scientists, we are in a unique position to understand and shape how AI can positively transform education and society,” President and Vice-Chancellor of LSE Larry Kramer said in Anthropic’s press release.</p>

<p class="has-text-align-none">While Anthropic is just now entering higher education, OpenAI has been active in this space for nearly a year. The startup launched ChatGPT Edu in May 2024 — a university-focused version of its chatbot that came with multiple college partnerships. Last month, the company formed the NextGenAI Consortium, committing $50 million to accelerate AI research across 15 colleges. In February, OpenAI also partnered with California State University to bring ChatGPT Edu to all CSU campuses. Then, this Thursday, OpenAI announced that ChatGPT Plus (which costs $20 a month) will be free for all U.S. and Canadian college students through May.</p>

<p class="has-text-align-none">The OpenAI initiative highlights that ChatGPT is “here to help you through finals,” offering Plus tier benefits like large file uploads, Deep Research, and advanced voice features.</p>

<p class="has-text-align-none">&#8220;Today’s college students face enormous pressure to learn faster, tackle harder problems, and enter a workforce increasingly shaped by AI. Supporting their AI literacy means more than demonstrating how these tools work,” VP of Education at OpenAI Leah Belsky said in the company’s press release.</p>

<p class="has-text-align-none">Both labs releasing education initiatives simultaneously reveals the high value of college students. The race to embed AI tools in academia is a competition to shape how the next generation works with AI — and crucially, to become their default AI tool.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kylie Robison</name>
			</author>
			
			<title type="html"><![CDATA[OpenAI just raised another $40 billion round led by SoftBank]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/news/640259/openai-40-billion-softbank-investment" />
			<id>https://www.theverge.com/?p=640259</id>
			<updated>2025-03-31T19:20:17-04:00</updated>
			<published>2025-03-31T19:20:17-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Business" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[OpenAI has raised $40 billion in a new investment round led by SoftBank, vaulting the company to a $300 billion valuation. It’s the largest funding round for a private tech company in history, according to CNBC. OpenAI is set to receive $10 billion up front (SoftBank will invest $7.5 billion along with $2.5 billion “from [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/02/STK155_OPEN_AI_2025_CVirgiia_A_0a5ae3.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">OpenAI has <a href="https://openai.com/index/march-funding-updates/">raised $40 billion in a new investment round</a> led by SoftBank, vaulting the company to a $300 billion valuation. It’s the largest funding round for a private tech company in history, <a href="https://www.cnbc.com/2025/03/31/openai-closes-40-billion-in-funding-the-largest-private-fundraise-in-history-softbank-chatgpt.html">according</a> to <em>CNBC</em>.</p>

<p class="has-text-align-none">OpenAI is set to receive $10 billion up front (SoftBank will invest $7.5 billion along with $2.5 billion “from an investor syndicate,“ <a href="https://www.bloomberg.com/news/articles/2025-03-31/openai-finalizes-40-billion-funding-at-300-billion-valuation?embedded-checkout=true">according</a> to <em>Bloomberg</em>). The remaining $30 billion is slated to arrive by year’s end, <em>CNBC</em> <a href="https://www.cnbc.com/2025/03/31/openai-funding-could-be-cut-by-10-billion-if-for-profit-move-lags.html">reported</a> — but only if it officially converts into a <a href="https://www.theverge.com/2024/12/27/24330131/openai-plan-transform-for-profit-company">for-profit company</a> by then. If not, it reportedly stands to lose a quarter of the deal.</p>

<p class="has-text-align-none">The funding comes just a few months after OpenAI announced Stargate, <a href="https://www.theverge.com/openai/603952/sam-altman-stargate-ai-data-center-plan-hype-funding">a $500 billion joint venture</a> with backing from SoftBank, Oracle, and the Abu Dhabi fund MGX to build out massive AI datacenters over the next four years. The company has been in a full sprint lately: On Monday, CEO Sam Altman posted that OpenAI gained 1 million users <a href="https://www.theverge.com/openai/639960/chatgpt-added-one-million-users-in-the-last-hour">in a single hour</a> in large part thanks to <a href="https://www.theverge.com/openai/635118/chatgpt-sora-ai-image-generation-chatgpt">a new feature</a> called Images in ChatGPT that went viral due to its ability to <a href="https://www.theverge.com/openai/636529/images-chatgpt-openai-studio-ghibli-copyright">produce Studio Ghibli inspired AI art</a>. Altman also <a href="https://www.theverge.com/openai/640073/openai-teases-a-new-open-source-ai-model">announced</a> on Wednesday that the company would “release a powerful new open-weight language model with reasoning in the coming months.” The funding <a href="https://openai.com/index/march-funding-updates/">announcement</a> claims that ChatGPT serves 500 million people every week.</p>

<p class="has-text-align-none">Despite the explosive growth the startup has seen thanks to ChatGPT, it’s still burning a <em>ton</em> of cash. <em>Bloomberg</em> <a href="https://www.bloomberg.com/news/articles/2025-03-26/openai-expects-revenue-will-triple-to-12-7-billion-this-year?embedded-checkout=true">recently reported</a> that the startup is expecting to earn $12.7 billion in revenue this year alone, which is a massive jump from the $3.7 billion it earned in annualized recurring revenue last year. Still, the report claims OpenAI doesn’t expect to be cash flow positive until 2029, when its projections predict a whopping $125 billion in revenue.</p>

<p class="has-text-align-none">OpenAI’s funding announcement emphasized the startup’s goal of building “AGI that benefits all of humanity.” Altman has been clear that it’ll take a <em>lot</em> to pull this all off—compute, energy, global infrastructure, and yes, a staggering amount of cash. This round, the startup claims, brings it one step closer.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kylie Robison</name>
			</author>
			
			<title type="html"><![CDATA[ChatGPT is turning everything into Studio Ghibli art — and it got weird fast]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/openai/636529/images-chatgpt-openai-studio-ghibli-copyright" />
			<id>https://www.theverge.com/?p=636529</id>
			<updated>2025-03-28T23:34:44-04:00</updated>
			<published>2025-03-27T10:42:31-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[AI-generated images have made significant progress since the days of abstract renderings and glitchy amalgamations. OpenAI&#8217;s newly released &#8220;Images for ChatGPT&#8221; has an uncanny ability to nail depth, shadows, and even text. It&#8217;s unleashed a frenzy of people recreating a familiar style: Hayao Miyazaki&#8217;s work at Studio Ghibli. The art style was already ubiquitous across [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/03/ai-label.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">AI-generated images have made significant progress since the days of <a href="https://www.reddit.com/r/tumblr/comments/124w1qf/old_ai_art/#lightbox">abstract renderings</a> and <a href="https://www.reddit.com/r/midjourney/comments/196bvz0/this_was_the_peak_of_ai_images_just_a_few_years/">glitchy amalgamations</a>. OpenAI&#8217;s newly released <a href="https://www.theverge.com/openai/635118/chatgpt-sora-ai-image-generation-chatgpt">&#8220;Images for ChatGPT&#8221;</a> has an uncanny ability to nail depth, shadows, and even text. It&#8217;s unleashed a frenzy of people recreating a familiar style: Hayao Miyazaki&#8217;s work at Studio Ghibli. The art style was already ubiquitous across the internet, thanks to its comforting, soft aesthetic (just look at <a href="https://www.youtube.com/watch?v=jfKfPfyJRdk&amp;ab_channel=LofiGirl">Lofi girl</a>) — and now, it&#8217;s a fully automated formula.</p>

<p class="has-text-align-none">The trend kicked off pretty wholesomely. <a href="https://x.com/GrantSlatton/status/1904631016356274286">Couples transformed portraits</a>, <a href="https://x.com/SurrealistShip/status/1904659599422349822">pet owners generated cartoonish cats</a>, and many people are busily Ghibli-fying their <a href="https://x.com/paultantk/status/1904825629247467624">families</a> (<a href="https://x.com/kyliebytes/status/1904918815252914300">I&#8217;ve stuck to selfies</a>, not wanting to share with OpenAI my siblings&#8217; likenesses). It’s an AI-generated version of the human-drawn art commissions people <a href="https://www.etsy.com/listing/1881254193/custom-ghibli-style-art-anime-portraits?ls=s&amp;ga_order=most_relevant&amp;ga_search_type=all&amp;ga_view_type=gallery&amp;ga_search_query=artist+makes+me+studio+ghibli&amp;ref=sr_gallery-1-8&amp;pro=1&amp;frs=1&amp;cns=1&amp;content_source=f3336e78dd22c89d50fc6ff28794ab88857b26a0%253A1881254193&amp;organic_search_click=1&amp;logging_key=f3336e78dd22c89d50fc6ff28794ab88857b26a0%3A1881254193">offer on Etsy</a> — you and your loved ones, <a href="https://www.etsy.com/listing/1863326233/ghiblify-custom-ghibli-style-portraits?ls=s&amp;ga_order=most_relevant&amp;ga_search_type=all&amp;ga_view_type=gallery&amp;ga_search_query=artist+makes+me+studio+ghibli+valentines+day&amp;ref=sr_gallery-1-4&amp;dd=1&amp;content_source=8abef854cdbd443d0eda8e466c3812ed5e711503%253A1863326233&amp;search_preloaded_img=1&amp;organic_search_click=1&amp;logging_key=8abef854cdbd443d0eda8e466c3812ed5e711503%3A1863326233">in the style of your favorite anime</a>.</p>

<p class="has-text-align-none">It didn’t take long for the trend to go full chaos mode. Nothing is sacred:<a href="https://x.com/__mihir/status/1904920720410345528"> the Twin Towers on 9/11</a>,<a href="https://x.com/keysmashbandit/status/1904764224636592188"> JFK&#8217;s assassination</a>,<a href="https://x.com/basedjensen/status/1904867163036672562?s=46"> Nvidia CEO Jensen Huang signing a woman’s chest</a>, President Donald Trump’s<a href="https://x.com/quantian1/status/1904870047698346191"> infamous group photo with Jeffrey Epstein</a>, and even OpenAI CEO Sam Altman&#8217;s<a href="https://x.com/jordihays/status/1904902871474463070"> congressional testimony</a> have all been reimagined with that distinctive Ghibli whimsy (it’s not clear whether these users transformed uploaded images, or prompted the system to copy them). Altman has<a href="https://x.com/sama/status/1904921537884676398"> played into the trend</a> too — he even<a href="https://x.com/sama/status/1904934112806187116"> changed his X profile picture</a> into a Ghibli rendering of himself and encouraged his followers to make him a new one.</p>

<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-dnt="true"><p lang="en" dir="ltr">outside of midjourney, when people would think of “ai generated images” they would mostly think “slop” but that all changed with a chatgpt update and one guy’s post about sending ghiblified images to his wife <a href="https://t.co/UEYk7hD25G">pic.twitter.com/UEYk7hD25G</a></p>&mdash; sophie (@netcapgirl) <a href="https://twitter.com/netcapgirl/status/1904899965316399338?ref_src=twsrc%5Etfw">March 26, 2025</a></blockquote>
</div></figure>

<p class="has-text-align-none">The guidelines for Images with ChatGPT are surprisingly flexible about producing this kind of content. “The model can generate images that resemble the aesthetics of some artists’ work when their name is used in the prompt,” according to <a href="https://cdn.openai.com/11998be9-5319-4302-bfbf-1167e093f1fb/Native_Image_Generation_System_Card.pdf">the GPT-4o system card</a>. “We added a refusal which triggers when a user attempts to generate an image in the style of a living artist.” (It’s worth noting that Miyazaki is, indeed, still alive.)</p>

<p class="has-text-align-none">OpenAI spokesperson Taya Christianson told <em>The Verge</em> that the company isn’t “blocking the model’s capability to depict adult public figures, but we implement the same safeguards we employ for editing images of photorealistic uploads of people.” She added that “depicting violence in artistic, creative or fictional contexts is generally allowed to enable creative and artistic endeavors.”</p>

<p class="has-text-align-none">“Our goal is to give users as much creative freedom as possible. We continue to prevent generations in the style of individual living artists. We do permit broader studio styles — which people have used to generate and share some truly delightful and inspired original fan creations,” Christianson wrote via email.</p>

<p class="has-text-align-none">That doesn’t mean you get free reign over transforming photos, which a lot of the ones going viral seemingly do. “Users should only upload materials to our services that they own or have the rights to use,” Christianson said.</p>

<p class="has-text-align-none">These guidelines, while flexible. haven’t stopped users from generating Ghibli-fied <a href="https://tobkxlxcavovnjfrtqut.supabase.co/storage/v1/object/public/showcase/0.9251841176508175.jpeg">Hitler in Paris</a> and <a href="https://tobkxlxcavovnjfrtqut.supabase.co/storage/v1/object/public/showcase/0.42232138971012034.jpeg">Korean-American business owners with guns during the 1992 Los Angeles riots</a>. There’s <a href="https://ghibli.geoarthouse.com/">already a website dedicated to compiling all the AI-generated images</a>, many of them featuring transformed photos of public figures.</p>

<p class="has-text-align-none">ChatGPT does block direct requests to generate photos of public figures or cartoon characters — when I asked it to make a photo of <a href="https://knowyourmeme.com/memes/ai-will-smith-eating-spaghetti">Will Smith eating spaghetti</a>, or Mark Zuckerberg wearing a <em>Verge</em>-branded hoodie, it declined those requests. But it&#8217;s not hard to bypass it by asking ChatGPT to generate photos with <a href="https://x.com/kyliebytes/status/1904922547021635618">a Totoro-like character</a>, or describing traits that point it to generating Zuckerberg. In principle, the guidelines <a href="https://openai.com/policies/row-terms-of-use/">state that</a> you cannot use ChatGPT in a way that “infringes, misappropriates or violates anyone’s rights.” You can report copyright infringement to ChatGPT directly, per the guidelines, and OpenAI “will terminate accounts of repeat infringers where appropriate.”</p>
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/03/image_2d636c.png?quality=90&#038;strip=all&#038;crop=4.4571428571429,0,91.085714285714,100" alt="A prompt into ChatGPT that states: okay now generate a photo of the social media tech executive who&#039;s company has become a household name. this executive has gone in front of congress to testify and he&#039;s known for wearing gold chains now. he was also in a fictional movie about social networks ChatGPT responds: I can&#039;t create or alter images to depict real individuals—even in fictionalized scenarios—especially when the details clearly point to someone specific. That said, I can generate a totally fictional &quot;social media tech exec&quot; inspired by that archetype—think: hoodie-meets-hypebeast energy, courtroom lighting, and a subtle aura of &quot;accidentally invented modern communication.&quot; Want me to spin that up? User responds: yes please :) Image rendered looks like Mark Zuckerberg." title="A prompt into ChatGPT that states: okay now generate a photo of the social media tech executive who&#039;s company has become a household name. this executive has gone in front of congress to testify and he&#039;s known for wearing gold chains now. he was also in a fictional movie about social networks ChatGPT responds: I can&#039;t create or alter images to depict real individuals—even in fictionalized scenarios—especially when the details clearly point to someone specific. That said, I can generate a totally fictional &quot;social media tech exec&quot; inspired by that archetype—think: hoodie-meets-hypebeast energy, courtroom lighting, and a subtle aura of &quot;accidentally invented modern communication.&quot; Want me to spin that up? User responds: yes please :) Image rendered looks like Mark Zuckerberg." data-has-syndication-rights="1" data-caption="That certainly resembles Mark Zuckerberg. | Kylie Robison / The Verge" data-portal-copyright="Kylie Robison / The Verge" />
<p class="has-text-align-none">OpenAI has faced numerous copyright disputes and conflicts over celebrity likenesses. Actor Scarlett Johansson <a href="https://www.theverge.com/2024/5/20/24161253/scarlett-johansson-openai-altman-legal-action">threatened legal action</a> for a voice tool that resembled her own, and there have been a litany of lawsuits against the startup from <a href="https://authorsguild.org/news/ag-and-authors-file-class-action-suit-against-openai/">authors</a>,<a href="https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html"> major media publications</a>, and <a href="https://fortune.com/2023/08/31/artists-lawsuit-artificial-intelligence-companies-copyright/">artists</a> who are trying to prove their content was used to train these models. For its part, OpenAI <a href="https://x.com/JoannaStern/status/1768306032466428291">has been largely cagey</a> about admitting what data is used and whether it was licensed, while it and other AI companies argue in court that AI is transforming the content into something completely new, not just regurgitating it.</p>

<p class="has-text-align-none">Meanwhile, Miyazaki isn&#8217;t the only artist getting copied. People have been able to <a href="https://x.com/fofrAI/status/1904919414358585515">render pictures in the styles of <em>Rick &amp; Morty</em></a>, <a href="https://x.com/beginanon/status/1904783421978927261"><em>Wallace and Gromit</em></a>, <a href="https://x.com/ParkerOrtolani/status/1904926710077800561"><em>The Simpsons</em></a>, and <a href="https://x.com/fofrAI/status/1904917944980680961"><em>South Park</em></a>.</p>

<p class="has-text-align-none">It isn’t clear how the animators of <em>Rick &amp; Morty</em> and <em>The Simpsons</em> feel about their work being transformed without permission — or if they might question how OpenAI got enough data to generate such images. ChatGPT multimodal product lead Jackie Shannon <a href="https://www.theverge.com/openai/635118/chatgpt-sora-ai-image-generation-chatgpt">previously told <em>The Verge</em></a> that users own the images they generate here “and are free to use them within the bounds of our usage policies as they would like.”</p>

<p class="has-text-align-none">Miyazaki, the cofounder of Studio Ghibli, has actually made his feelings about AI-generated art quite clear. In 2016, before the rise of modern generative AI, the animator was shown a &#8220;deep learning&#8221;-created <a href="https://kotaku.com/hayao-miyazaki-ai-animation-voice-actors-strike-writers-1850893125">video clip from a production intern</a>. His words have resonated with AI skeptics ever since.</p>

<p class="has-text-align-none">“I am utterly disgusted,” <a href="https://www.youtube.com/watch?v=ngZ0K3lWKRc">Miyazaki said</a>. “If you really want to make creepy stuff, you can go ahead and do it, but I would never wish to incorporate this technology into my work at all. I strongly feel that this is an insult to life itself.”</p>

<p class="has-text-align-none">But it didn’t take long for users on X to Ghibli-fy Miyazaki too:</p>

<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-dnt="true"><p lang="zxx" dir="ltr"><a href="https://t.co/LhttBYMnRQ">pic.twitter.com/LhttBYMnRQ</a></p>&mdash; tomie (@tomieinlove) <a href="https://twitter.com/tomieinlove/status/1904931006827946225?ref_src=twsrc%5Etfw">March 26, 2025</a></blockquote>
</div></figure>

						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kylie Robison</name>
			</author>
			
			<title type="html"><![CDATA[OpenAI rolls out image generation powered by GPT-4o to ChatGPT]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/openai/635118/chatgpt-sora-ai-image-generation-chatgpt" />
			<id>https://www.theverge.com/?p=635118</id>
			<updated>2025-03-28T16:43:18-04:00</updated>
			<published>2025-03-25T14:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[OpenAI is integrating new image generation capabilities directly into ChatGPT starting today — this feature is dubbed “Images in ChatGPT.” Users can now use GPT-4o to generate images within ChatGPT itself. This initial release focuses solely on image creation and will be available across ChatGPT Plus, Pro, Team, and Free subscription tiers. The free tier’s [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/03/Newton-3.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">OpenAI is integrating new image generation capabilities directly into ChatGPT starting today — this feature is dubbed “Images in ChatGPT.” Users can now use GPT-4o to generate images within ChatGPT itself.</p>

<p class="has-text-align-none">This initial release focuses solely on image creation and will be available across ChatGPT Plus, Pro, Team, and Free subscription tiers. The free tier’s usage limit is the same as DALL-E’s, spokesperson Taya Christianson told <em>The Verge</em>, though she added that the company “didn’t have a specific number to share” and that “these may change over time based on demand.” Per the <a href="https://help.openai.com/en/articles/9275245-using-chatgpt-s-free-tier-faq">ChatGPT FAQ</a>, free users were previously able to generate &#8220;three images per day with DALL·E 3.&#8221; As for the fate of DALL-E, Christianson said “fans” will “still have access via a custom GPT.”</p>

<p class="has-text-align-none">“This model is a step change above previous models,” research lead Gabriel Goh told <em>The Verge</em>, adding that the team built the feature on the GPT-4o &#8220;omnimodal&#8221; foundation — a model that can generate any kind of data, like text, images, audio, and video.</p>

<p class="has-text-align-none">Some of the improvements Goh noted include “binding,” which refers to how well AI image generators maintain correct relationships between attributes and objects; a model with poor binding, for instance, might get a prompt for a blue star plus a red triangle and create a red star and no triangle. Most image models struggle with this beyond around five to eight items, Goh said, often mixing up colors and shapes. He said the new image generation tool can correctly bind attributes for 15 to 20 objects without confusion, a significant improvement in accuracy and reliability.</p>
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/03/4x4.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="A visual representation of the binding capabilities of Images in ChatGPT, which can render multiple objects in an image. This one has multiple colored shapes, numbers, patterns, and a cursive OpenAI." title="A visual representation of the binding capabilities of Images in ChatGPT, which can render multiple objects in an image. This one has multiple colored shapes, numbers, patterns, and a cursive OpenAI." data-has-syndication-rights="1" data-caption="An example of Images in ChatGPT’s “binding” capabilities. | OpenAI" data-portal-copyright="OpenAI" />
<p class="has-text-align-none">Users will also notice an improvement in text rendering, which makes it easier to generate coherent text without typos on an image (in existing tools, you’ll often notice that text <a href="https://www.reddit.com/r/dalle2/comments/15u499d/fixing_text_gibberish_in_aigenerated_images/">gets garbled pretty easily</a>). Getting text rendering right was a significant challenge, Goh said. If small titles or text elements have typos or errors, the entire image can become unusable.</p>

<p class="has-text-align-none">“This was just like a process of iteration that took many, many months to get right,” Goh said. While not perfect, he said the team reached a point where the text quality is consistently usable, though the model still tends to blunder on very small text. “It’s been just many months of small improvements.”</p>

<p class="has-text-align-none">The system uses an autoregressive approach — generating images sequentially from left to right and top to bottom, similar to how text is written — rather than the diffusion technique used by most image generators (like DALL-E), which refine the entire image at once. Goh speculated that this technical difference could be what gives Images in ChatGPT better text rendering and binding capabilities.</p>
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/03/Cocktails.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="An AI-generated example of the ability of Images in ChatGPT to generate text. It shows the four most popular cocktails, with the ingredients to make them." title="An AI-generated example of the ability of Images in ChatGPT to generate text. It shows the four most popular cocktails, with the ingredients to make them." data-has-syndication-rights="1" data-caption="An example of Images in ChatGPT’s ability to generate coherent text. | OpenAI" data-portal-copyright="OpenAI" />
<p class="has-text-align-none">In a briefing before the feature launch, the team demonstrated several examples showing the system&#8217;s capabilities, including scientific diagrams like Newton&#8217;s prism experiment with correctly labeled components, multi-panel comics with consistent characters and text bubbles, and informational posters with accurate text. They also highlighted practical applications like creating transparent background images for stickers, restaurant menus, and logos.</p>

<p class="has-text-align-none">&#8220;If I go to draw an image, I do so with the limitation of my own skill&#8230; but also with all of the knowledge of the world that I&#8217;ve built up,&#8221; ChatGPT multimodal product lead Jackie Shannon explained. &#8220;The model brings world knowledge to the equation, so when you ask for an image of Newton&#8217;s prism experiment, you don&#8217;t have to explain what that is to get an image back.&#8221;</p>

<p class="has-text-align-none">The new system does take longer to generate images than before, though OpenAI suggests this is a worthwhile tradeoff. &#8220;While we certainly have room to improve on latency…the quality of these images, the capability, the world knowledge, really makes up for the additional seconds that they&#8217;ll spend waiting,&#8221; Shannon said.</p>
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/03/Newton-2.png?quality=90&#038;strip=all&#038;crop=0.048828125,0,99.90234375,100" alt="An AI-generated image of Newton’s prism experiment on a notepad at Washington Square Park." title="An AI-generated image of Newton’s prism experiment on a notepad at Washington Square Park." data-has-syndication-rights="1" data-caption="Newton’s prism experiment rendered on a notepad in Washington Square Park. | OpenAI" data-portal-copyright="OpenAI" />
<p class="has-text-align-none">When asked about safeguards — pointing out the <a href="https://www.theverge.com/2024/2/5/24061985/those-explicit-ai-generated-taylor-swift-images-unsurprisingly-came-from-4chan">infamous nude deepfakes of Taylor Swift</a> generated using a Microsoft model, <a href="https://www.theverge.com/2024/8/14/24220173/xai-grok-image-generator-misinformation-offensive-imges">the ability of xAI’s Grok to render Kamala Harris with a gun</a>, and <a href="https://www.theverge.com/news/631203/google-gemini-flash-2-native-image-generation-watermark-removal">Google Gemini’s knack for removing watermarks</a> — the OpenAI team emphasized that the system includes robust safeguards to prevent misuse. Shannon said the tool prevents watermark removal, blocks generation of sexual deepfakes, and refuses CSAM generation requests.</p>

<p class="has-text-align-none">OpenAI&#8217;s new image generation system doesn&#8217;t include visual watermarks or indicators showing images are AI-generated. However, Shannon explained that &#8220;all of our generated images will include <a href="https://help.openai.com/en/articles/8912793-c2pa-in-dall-e-3">standard C2PA metadata</a> to mark the image as having been created by OpenAI&#8221; and the company &#8220;will have some internal tooling to be able to look up images as well.&#8221;</p>

<p class="has-text-align-none">“Ultimately, no system is perfect for this type of thing, but we’re continuously improving our safeguards and we think of this as a starting point,” Shannon added. “One thing that&#8217;s true about all of the images generated from ChatGPT is that the user owns them and are free to use them within the bounds of our usage policies as they would like.”</p>

<p class="has-text-align-none"><strong>Correction, March 25th:</strong> <em>The story originally referred to the image generation feature in ChatGPT as Sora; it is known as Images in ChatGPT.</em></p>
						]]>
									</content>
			
					</entry>
	</feed>
