<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Robert Hart | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2026-04-21T10:42:54+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/author/robert-hart" />
	<id>https://www.theverge.com/authors/robert-hart/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/authors/robert-hart/rss" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[Yelp is making its AI chatbot way more useful]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/915626/yelp-ai-assistant-chatbot-major-upgrade" />
			<id>https://www.theverge.com/?p=915626</id>
			<updated>2026-04-21T06:42:54-04:00</updated>
			<published>2026-04-21T07:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Yelp is giving its chatbot assistant a major upgrade, turning the platform into something closer to a digital concierge with a suite of new features designed for “getting things done.” The move, one of several AI-focused updates in recent months, is part of a broader industry push to make AI more relevant and practically useful [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: Yelp" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/Yelp-Assistant_-Making-Beauty-Appointments-via-Vagaro.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Yelp is giving its chatbot assistant a major upgrade, turning the platform into something closer to a digital concierge with a suite of new features designed for “getting things done.” The move, one of <a href="https://www.theverge.com/news/714944/yelp-ai-stitched-videos">several</a> AI-focused <a href="https://www.theverge.com/news/802529/yelp-ai-host-receptionist">updates</a> in recent months, is part of a broader industry push to make AI more relevant and practically useful to consumers while turning huge troves of user-generated data into a competitive edge. </p>

<p class="has-text-align-none">In a press release, Yelp says the Yelp Assistant chatbot will be at “the center of the app experience,” where it can answer questions, make recommendations, and even handle bookings in a single conversation. The bot will be available through a new “Assistant” tab spanning every category on Yelp, a significant expansion from its <a href="https://www.theverge.com/2024/4/30/24144812/yelp-assistant-ai-chatbot-services-search">2024 debut</a> as a tool for hiring service professionals.</p>
<div class="youtube-embed"><iframe title="Yelp&#039;s 2026 Spring Release: The new Yelp Assistant, booking integrations, and enhanced Menu Vision" src="https://www.youtube.com/embed/bP74xqkossw?rel=0" allowfullscreen allow="accelerometer *; clipboard-write *; encrypted-media *; gyroscope *; picture-in-picture *; web-share *;"></iframe></div>
<p class="has-text-align-none">Yelp is also broadening Assistant’s reach with a set of app integrations that let users order takeout or delivery through DoorDash, Grubhub, and other delivery services, request quotes from local professionals for services like auto and pet care, and book appointments with beauty, wellness, fitness, and healthcare providers through Vagaro and Zocdoc. Support for Yelp Waitlist is “coming soon,” as is Calendly integration for scheduling appointments.</p>

<p class="has-text-align-none">Craig Saldanha, Yelp’s chief product officer, described the update as the company’s “most significant AI product evolution yet,” adding that it is “only the beginning of a more conversational, personalized and action-oriented Yelp experience.”</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[OpenAI’s big Codex update is a direct shot at Claude Code]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/913034/openai-codex-updates-use-macos" />
			<id>https://www.theverge.com/?p=913034</id>
			<updated>2026-04-16T19:42:40-04:00</updated>
			<published>2026-04-16T13:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[OpenAI is beefing up its agentic coding and development system, Codex, with a suite of updates that let it use your computer, generate images, and learn from past experiences.&#160;The package of updates comes as OpenAI’s rivalry with Anthropic intensifies, following the stellar successes of Claude Code and OpenAI aggressively shifting resources to catch up. Codex [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="Codex can control apps on your desktop like Tic Tac Toe. | Image: OpenAI" data-portal-copyright="Image: OpenAI" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/Screenshot-2026-04-16-at-13.01.11.png?quality=90&#038;strip=all&#038;crop=0,5.8356559824806,100,89.31166633486" />
	<figcaption>
	Codex can control apps on your desktop like Tic Tac Toe. | Image: OpenAI	</figcaption>
</figure>
<p class="has-text-align-none">OpenAI is beefing up its agentic coding and development system, Codex, with <a href="https://openai.com/index/codex-for-almost-everything/">a suite of updates</a> that let it use your computer, generate images, and learn from past experiences.&nbsp;The package of updates comes as OpenAI’s rivalry with Anthropic intensifies, following the <a href="https://www.theverge.com/report/874308/anthropic-claude-code-opus-hype-moment">stellar successes of Claude Code</a> and OpenAI <a href="https://www.theverge.com/ai-artificial-intelligence/911118/openai-memo-cro-ai-competition-anthropic">aggressively shifting resources</a> to catch up.</p>

<p class="has-text-align-none">Codex will now be able to operate desktop apps on your computer, OpenAI <a href="https://openai.com/index/codex-for-almost-everything/">says in a blog post announcing the update</a>. It can work in the background, meaning it won’t interfere with your own work in other apps, and multiple agents can work in parallel. For developers, OpenAI says, “this is helpful for testing and iterating on frontend changes, testing apps, or working in apps that don’t expose an API.”&nbsp;</p>

<p class="has-text-align-none">The feature will start rolling out to Codex desktop app users signed in with ChatGPT today and will initially be limited to macOS. OpenAI did not give a timeline for expanding support to other operating systems. EU users will also have to wait, it said, adding that the update will roll out to users there “soon.”</p>

<p class="has-text-align-none">Codex is also getting the ability to generate and iterate on images with gpt-image-1.5, new plug-ins for tools like GitLab, Atlassian Rovo, and Microsoft Suite, and native web browsing through an in-app browser, “where you can comment directly on pages to provide precise instructions to the agent.” OpenAI also said it will be easier to automate tasks, with users able to reuse existing conversation threads and Codex now able to schedule future work for itself and wake up automatically to continue on a long-term task.&nbsp;</p>

<p class="has-text-align-none">Codex will also be getting a memory feature, allowing it to remember useful context from past experience, such as personal preferences, corrections, and information that took time to gather. OpenAI said it hopes the opt-in feature, which will be released as a preview, will help Codex complete future tasks faster and to a quality that previously required detailed custom instructions. The personalization features will roll out to Enterprise, Edu, and EU users “soon.”</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[Character.AI’s new Books mode turns reading into roleplay]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/912997/character-ai-books-mode" />
			<id>https://www.theverge.com/?p=912997</id>
			<updated>2026-04-16T10:34:28-04:00</updated>
			<published>2026-04-16T10:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" />
							<summary type="html"><![CDATA[Mired in controversy and legal woes over concerns about its chatbots’ interactions with users, particularly teens, Character.AI seems to be playing it safer with a new “Books” mode. The new format lets users step inside familiar worlds for a more structured roleplaying experience, one the company hopes will broaden perceptions of what AI roleplay can [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: Character.AI" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/c.aiBooks-Header-Image.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Mired in controversy and legal woes over concerns about its chatbots’ interactions with users, particularly teens, Character.AI seems to be playing it safer with a new “Books” mode. The new format lets users step inside familiar worlds for a more structured roleplaying experience, one the company hopes will broaden perceptions of what AI roleplay can be beyond <a href="https://www.theverge.com/2024/12/12/24319050/character-ai-chatbots-teen-model-training-parental-controls">romancing minors</a>, <a href="https://www.theverge.com/ai-artificial-intelligence/892978/ai-chatbots-investigation-help-teens-plan-violence">encouraging violence</a>, and <a href="https://www.theverge.com/2024/12/10/24317839/character-ai-lawsuit-teen-harmful-messages-mental-health">promoting self-harm</a>.</p>

<p class="has-text-align-none">In a blog post, Character.AI said Books is launching with a catalog of more than 20 classic public domain titles sourced from Project Gutenberg, including <em>Alice in Wonderland</em>, <em>Pride and Prejudice</em>, <em>Dracula</em>, <em>Frankenstein</em>, <em>Romeo and Juliet</em>, and <em>The Great Gatsby</em>. “Every book lets you choose who you want to be,” the company said, allowing users to step into the narrative as an existing character or as one of their own Character.AI personas.</p>

<p class="has-text-align-none">There are a few ways to play through each story. The purist “book arc mode” follows the original narrative, plot points, and stakes while weaving the user into the story. There’s also a looser “off-script mode” that lets users interact with the world and characters more freely. Character.AI said a “more guided experience, TapTale, is coming soon,” offering pre-written prompts users can pick to drive the story forward in addition to freeform typing.</p>

<p class="has-text-align-none">For those wanting to push things even further, Books will also let users rework a book’s premise entirely through what Character calls alternative universe remixes. Think <em>Alice in Wonderland</em> as a romcom set in space, or <em>The Wizard of Oz</em> with Toto running the show. Users will be able to share their alternative universes and explore those made by other people.</p>

<div class="image-slider">
	<div class="image-slider">
		
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/c.aiBooks-Community-AUs.png?quality=90&#038;strip=all&#038;crop=7.8125,0,84.375,100" alt="" title="" data-has-syndication-rights="1" data-caption="&lt;em&gt;Alice in ‘Everyone becomes employees building AI’&lt;/em&gt; &lt;em&gt;land&lt;/em&gt;. | Image: Character.AI" data-portal-copyright="Image: Character.AI" />

<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/c.aiBooks-Character-Selection.png?quality=90&#038;strip=all&#038;crop=7.8125,0,84.375,100" alt="" title="" data-has-syndication-rights="1" data-caption="&lt;em&gt;Choose your character.&lt;/em&gt; | Image: Character.AI" data-portal-copyright="Image: Character.AI" />

<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/c.aiBooks-Chat.png?quality=90&#038;strip=all&#038;crop=7.8125,0,84.375,100" alt="" title="" data-has-syndication-rights="1" data-caption="&lt;em&gt;Down the rabbit hole.&lt;/em&gt; | Image: Character.AI" data-portal-copyright="Image: Character.AI" />
	</div>
</div>

<p class="has-text-align-none">The feature is available to everyone through Character’s mobile app or web-based prototype hub, Labs. Even free users can try it out, though the company said they’ll only get a “handful of free turns.”</p>

<p class="has-text-align-none">It’s not clear whether minors will be able to use the more guided features in Books. Character, facing <a href="https://www.theverge.com/2024/10/23/24277962/character-ai-google-wrongful-death-lawsuit">lawsuits</a> <a href="https://www.theverge.com/2024/12/10/24317839/character-ai-lawsuit-teen-harmful-messages-mental-health">accusing</a> it of harming teens’ mental health, <a href="https://www.theverge.com/ai-artificial-intelligence/808081/character-ai-under-18-chat-ban">shut down</a> open-ended chat features for minors last year and <a href="https://www.theverge.com/news/829892/character-ai-stories-launch-teens">introduced more structured experiences called Stories</a>.</p>

						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[Grok’s sexual deepfakes almost got it banned from Apple’s App Store. Almost. ]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/912297/apple-app-store-ban-grok-x-deepfakes" />
			<id>https://www.theverge.com/?p=912297</id>
			<updated>2026-04-15T07:21:43-04:00</updated>
			<published>2026-04-15T06:55:22-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Apple" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="xAI" />
							<summary type="html"><![CDATA[Apple quietly threatened to kick Elon Musk’s AI app, Grok, from its App Store in January over its failure to curb the surge of nonconsensual sexual deepfakes flooding X, according to NBC News. It was a muted show of force from one of tech’s most powerful gatekeepers, made behind closed doors even as the undressing [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/STK468_APPLE_ANTITRUST_CVIRGINIA_G.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Apple quietly threatened to kick Elon Musk’s AI app, Grok, from its App Store in January over its failure to curb the surge of nonconsensual sexual deepfakes flooding X, <a href="https://www.nbcnews.com/tech/tech-news/apple-threat-remove-grok-app-store-deepfake-letter-musk-x-ai-rcna331677">according</a> to <em>NBC News</em>. It was a muted show of force from one of tech’s most powerful gatekeepers, made behind closed doors even as the <a href="https://www.theverge.com/news/859715/x-grok-ai-deepfakes">undressing crisis unfolded in full public view</a> and <a href="https://www.theverge.com/news/862460/apple-google-app-stores-ditch-grok-x-open-letters">criticism over Apple’s cowardice</a> mounted.</p>

<p class="has-text-align-none">In a letter <a href="https://www.nbcnews.com/tech/tech-news/apple-threat-remove-grok-app-store-deepfake-letter-musk-x-ai-rcna331677">obtained</a> by <em>NBC News</em>, Apple told US senators it “contacted the teams behind both X and Grok after it received complaints and saw news coverage of the scandal” and demanded that the developers “create a plan to improve content moderation.” At the time, xAI’s chatbot Grok was freely accessible on X and as a standalone app, with flimsy safeguards that allowed users to easily generate and share sexualized deepfakes and “undress” images of real people, disproportionately women and <a href="https://www.theverge.com/ai-artificial-intelligence/855832/grok-undressing-children-csam-law-x-elon-musk">some of them apparently minors</a>.&nbsp;</p>

<p class="has-text-align-none">As we <a href="https://www.theverge.com/policy/859902/apple-google-run-by-cowards">reported at the time,</a> these were flagrant and unambiguous violations of App Store guidelines that Apple often applies with an iron fist. Apple, which profits from having apps like X and Grok on its digital store, has not spoken publicly about the issue or its behind-the-scenes intervention. Google, through its Google Play app store, profits similarly and has also not commented publicly on the matter.</p>

<p class="has-text-align-none">Apple said it reviewed proposed changes to the X and Grok apps. While the company concluded X had “substantially resolved its violations,” Grok “remained out of compliance.” Apple said it warned the developer that “additional changes to remedy the violation would be required, or the app could be removed from the App Store.” Only after further back-and-forth did Apple determine Grok had “substantially improved” and approved its submission. </p>

<p class="has-text-align-none">Throughout this covert back-and-forth, Grok and X appear to have remained live on the App Store, a drawn-out process that may help explain the confusing, haphazard rollout of moderation changes announced in real time. This included <a href="https://www.theverge.com/news/859309/grok-undressing-limit-access-gaslighting">limiting Grok on X to paying subscribers</a> and <a href="https://www.theverge.com/news/861894/grok-still-undressing-in-uk">attempting to stop Grok from undressing women</a>. Our investigations revealed that neither was particularly effective beyond making the tool a bit harder to access. Later interventions, like X <a href="https://www.theverge.com/tech/891352/x-grok-xai-edit-blocker-photo-toggle">letting users block Grok from editing their photos</a>, are also easily circumvented.</p>

<p class="has-text-align-none">Despite Apple’s approval and xAI’s claims it has tightened safeguards, Grok still appears to be able to generate sexualized deepfakes with relative ease. Cybersecurity sources told me they have been able to create explicit images of celebrities and political figures using the tool, and I have been able to <a href="https://www.theverge.com/report/872062/grok-still-undressing-men">produce similar images of myself</a> and other consenting adults. <em>NBC</em> also <a href="https://www.nbcnews.com/tech/tech-news/musks-ai-chatbot-grok-xai-making-sexual-deepfakes-imagine-rcna265855">reported</a> similar findings yesterday.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[Gen Z’s love-hate relationship with AI]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/909687/gen-z-doesnt-like-ai-gallup" />
			<id>https://www.theverge.com/?p=909687</id>
			<updated>2026-04-10T08:11:17-04:00</updated>
			<published>2026-04-10T07:23:28-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" />
							<summary type="html"><![CDATA[Gen Z is increasingly disillusioned with AI — just not enough to stop using it.&#160; A new Gallup report released this week, based on responses from nearly 1,600 people ages 14 to 29 across the US, suggests the hype is wearing off for the digital-native generation as AI becomes more embedded in school and work. [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/STK414_AI_CVIRGINIA_2_A.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Gen Z is increasingly disillusioned with AI — just not enough to stop using it.&nbsp;</p>

<p class="has-text-align-none">A new Gallup <a href="https://www.gallup.com/analytics/651674/gen-z-research.aspx">report released this week</a>, based on responses from nearly 1,600 people ages 14 to 29 across the US, suggests the hype is wearing off for the digital-native generation as AI becomes more embedded in school and work. Enthusiasm is falling and resentment is growing, even as many young people feel they still need to use the technology.&nbsp;</p>

<p class="has-text-align-none">Gallup’s poll, conducted in February and March this year, found Gen Z’s feelings on AI have cooled significantly since last year. Only 18 percent said they were hopeful about the technology and 22 percent said they were excited, down from 27 percent and 36 percent, respectively. At the same time, anger is growing: 31 percent of respondents said they feel angry about AI, up from 22 percent last year. Anxiety about AI has remained relatively steady at around 40 percent.&nbsp;</p>
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/Screenshot-2026-04-10-at-12.10.41-1.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="" title="" data-has-syndication-rights="1" data-caption="&lt;em&gt;AI has Gen Z feeling…&lt;/em&gt; | Image: &lt;a href=&quot;https://www.gallup.com/analytics/651674/gen-z-research.aspx&quot;&gt;Gallup&lt;/a&gt;" data-portal-copyright="Image: &lt;a href=&quot;https://www.gallup.com/analytics/651674/gen-z-research.aspx&quot;&gt;Gallup&lt;/a&gt;" />
<p class="has-text-align-none">The perceived cost of using AI for work or school has also shifted. Almost half of Gen Z workers said they now think the risks of using AI in the workplace outweigh the benefits — up 11 points from last year — even as a majority of 56 percent acknowledged the tools will help them complete work faster. That comes with a cost, too: Eight in 10 Gen Z-ers said using AI to do work faster will make learning harder in the future.&nbsp;&nbsp;</p>
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/Screenshot-2026-04-10-at-12.08.06.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="" title="" data-has-syndication-rights="1" data-caption="&lt;em&gt;Excitement down, anger up.&lt;/em&gt; | Image: &lt;a href=&quot;https://www.gallup.com/analytics/651674/gen-z-research.aspx&quot;&gt;Gallup&lt;/a&gt;" data-portal-copyright="Image: &lt;a href=&quot;https://www.gallup.com/analytics/651674/gen-z-research.aspx&quot;&gt;Gallup&lt;/a&gt;" />
<p class="has-text-align-none">All this hasn’t stopped Gen Z from using AI, although Gallup said “growth has slowed to a crawl.” Just over half of Gen Z-ers said they use AI at least weekly, up four points from 47 percent last year, and around half said they think they’ll need it for higher education or their future careers.</p>

<p class="has-text-align-none">“Gen Z isn’t rejecting AI outright, but they are reassessing its role in their lives,” said Stephanie Marken, senior partner at Gallup. “What we’re seeing in the data is a generation that recognizes AI’s utility but is increasingly concerned about its long-term impact on learning, trust, and career readiness.&#8221; </p>

<p class="has-text-align-none">The findings come as AI settles in as a more mature technology, with clearer consequences for a generation entering a difficult job market marked by <a href="https://www.theverge.com/tech/885710/jack-dorsey-block-layoffs-job-cuts-ai">mass</a><a href="https://www.theverge.com/tech/900946/meta-layoffs-hundreds-employees"> layoffs</a> while also navigating schools and <a href="https://www.theverge.com/tech/903954/art-schools-generative-ai-education-creative-jobs">colleges</a> <a href="https://www.theverge.com/podcast/815434/ai-education-schools-research-cheating-chatgpt-jobs-grades">struggling to adapt</a> to the rapid proliferation of AI, all while <a href="https://www.theverge.com/ai-artificial-intelligence/644853/pew-gallup-data-americans-dont-trust-ai">public distrust</a> of AI and the companies racing to build it grows.</p>

						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[Google makes it easy to deepfake yourself]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/909104/youtube-shorts-make-ai-avatar" />
			<id>https://www.theverge.com/?p=909104</id>
			<updated>2026-04-09T06:53:49-04:00</updated>
			<published>2026-04-09T06:53:49-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Google" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[YouTube Shorts is rolling out a new AI-powered feature giving creators an easy way to realistically clone themselves on camera. The launch, hinted at earlier this year, reflects the platform’s fraught relationship with AI-generated content, adding more generative features while struggling to contain AI slop, deepfake scams, and impersonations.&#160; YouTube says the new tool will [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/acastro_STK092_03.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">YouTube Shorts is <a href="https://support.google.com/youtube/answer/16985237#zippy=%2Cwill-videos-generated-using-an-avatar-contain-ai-disclosures%2Chow-do-i-submit-feedback-about-avatars-on-youtube%2Chow-is-my-data-collected-used%2Climit-or-remove-remixes-of-videos-using-your-avatar%2Cretake-or-delete-your-avatar-from-the-youtube-create-app%2Cretake-or-delete-your-avatar-from-the-youtube-app">rolling out</a> a new AI-powered feature giving creators an easy way to realistically clone themselves on camera. The launch, <a href="https://www.theverge.com/news/864610/youtube-shorts-ai-likenesses-neal-mohan-2026">hinted at earlier this year</a>, reflects the platform’s fraught relationship with AI-generated content, adding more generative features while struggling to <a href="https://www.theverge.com/news/869684/youtube-top-ai-channels-removed-kapwing">contain AI slop</a>, deepfake scams, and impersonations.&nbsp;</p>

<p class="has-text-align-none">YouTube says the new tool will let users create a digital version of themselves, called an avatar, that can be inserted into existing Shorts videos or used to generate entirely new ones. The company said avatars will “look and sound like you,” framing them as a safer and more secure way to use AI to create new content.&nbsp;</p>

<p class="has-text-align-none">Creating an avatar is a bit more involved than simply pressing a button, but it sounds fairly straightforward. In a <a href="https://support.google.com/youtube/answer/16985237#zippy=%2Cwill-videos-generated-using-an-avatar-contain-ai-disclosures%2Chow-do-i-submit-feedback-about-avatars-on-youtube%2Chow-is-my-data-collected-used%2Climit-or-remove-remixes-of-videos-using-your-avatar%2Cretake-or-delete-your-avatar-from-the-youtube-create-app%2Cretake-or-delete-your-avatar-from-the-youtube-app">blog post</a> outlining the process, YouTube said users must first record a “live selfie” capturing their face and voice while following a series of prompts. For the best results, the company recommends good lighting, a quiet area, a background free of other people or images of faces, and holding the phone at eye level.&nbsp;</p>

<p class="has-text-align-none">Once avatars are made, users can select “make a video with my avatar” while creating a video to generate a clip from prompts, which can be up to eight seconds long, <a href="https://9to5google.com/2026/04/08/youtube-shorts-ai-avatar/">according to <em>9to5Google</em></a>. Users can also add their avatar to “eligible Shorts” in their feed, though YouTube did not specify what makes a Short eligible.</p>

<p class="has-text-align-none">The AI avatar feature comes with fairly tight restrictions. Avatars can only be used in creators’ own original videos, and creators control whether their Shorts can be remixed. Creators can delete their avatar, or videos where it appears, at any time, YouTube says. Avatars that aren’t used to create new content for three years will be automatically deleted.</p>

<p class="has-text-align-none">All avatar videos will also be clearly flagged as AI-generated, YouTube says. This includes visible watermarking and digital labels like <a href="https://www.theverge.com/news/672013/google-synthid-detector-ai-generated-content-watermark-i-o-2025">SynthID</a> and C2PA, the latter a <a href="https://www.theverge.com/ai-artificial-intelligence/882956/ai-deepfake-detection-labels-c2pa-instagram-youtube">broadly supported but questionably useful authentication marker</a> used to identify AI-generated content.&nbsp;</p>

<p class="has-text-align-none">Not everyone will be able to use the feature immediately. YouTube says the tool “will be rolling out gradually,” though it did not give a timeline or indication of where it will be available first. Creators must also be at least 18 and own an existing YouTube channel, the company says.&nbsp;</p>

<p class="has-text-align-none">The avatar feature adds to YouTube’s expanding suite of AI tools for creators, including <a href="https://www.theverge.com/news/612031/youtube-ai-generated-video-shorts-veo-2-dream-screen">AI-generated video clips on Shorts</a>, <a href="https://www.theverge.com/news/654039/youtube-expands-its-auto-dubbing-feature-again">AI auto-dubbing</a>, and a <a href="https://www.theverge.com/news/778469/youtube-creators-ai-analytics-ask-studio-dubbing">channel analytics</a> chatbot. Many of them are powered by Google’s Gemini AI models, which already allow users to transform <a href="https://www.theverge.com/news/868510/google-photos-image-to-video-text-prompt-support">photos into video</a>, <a href="https://www.theverge.com/ai-artificial-intelligence/880584/google-gemini-ai-music-maker-lyria-3-beta">make music</a>, and <a href="https://www.theverge.com/report/826003/googles-nano-banana-pro-generates-excellent-conspiracy-fuel">create realistic images from scratch</a>.&nbsp;</p>

<p class="has-text-align-none">Its arrival comes as one of Google’s main AI rivals, OpenAI, pulls back from video generation. The startup said it was <a href="https://www.theverge.com/ai-artificial-intelligence/899850/openai-sora-ai-chatgpt">sunsetting</a> its Sora video tool last month after a year of <a href="https://www.theverge.com/ai-artificial-intelligence/902368/openai-sora-dead-ai-video-generation-competition">struggling</a> to get the wannabe social platform off the ground. It was costly and faced a parade of copyright challenges, deepfake controversies, and slop that made it an unattractive bet for investors ahead of an anticipated IPO this year.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[Gemini is making it faster for distressed users to reach mental health resources ]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/907842/google-gemini-mental-health-interface-update" />
			<id>https://www.theverge.com/?p=907842</id>
			<updated>2026-04-07T06:09:57-04:00</updated>
			<published>2026-04-07T06:09:57-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Health" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Science" />
							<summary type="html"><![CDATA[Google says it has updated Gemini to better direct users to get mental health resources during moments of crisis. The change comes as the tech giant faces a wrongful death lawsuit alleging its chatbot “coached” a man to die by suicide, the latest in a string of lawsuits alleging tangible harm from AI products. When [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/STK255_Google_Gemini_B_474198.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Google says it has updated Gemini to better direct users to get mental health resources during moments of crisis. The change comes as the tech giant faces a <a href="https://www.theverge.com/tech/889152/google-gemini-ai-wrongful-death-lawsuit">wrongful death lawsuit</a> alleging its chatbot “coached” a man to die by suicide, the latest in a string of lawsuits alleging <a href="https://www.theverge.com/news/858102/characterai-google-teen-suicide-settlement">tangible</a> <a href="https://www.theverge.com/news/831207/openai-chatgpt-lawsuit-parental-controls-tos">harm</a> from AI products.</p>

<p class="has-text-align-none">When a conversation indicates a user is in a potential crisis related to suicide or self-harm, Gemini already launches a “Help is available” module that <a href="https://www.theverge.com/report/841610/ai-chatbot-suicide-safety-failure">directs users to mental health crisis resources</a>, like a suicide hotline or crisis text line. Google says the update — really more of a redesign — will streamline this into a “one-touch” interface that will make it easier for users to get help quickly.&nbsp;&nbsp;</p>

<p class="has-text-align-none">The help module also contains more empathetic responses designed “to encourage people to seek help,” Google says. Once activated, “the option to reach out for professional help will remain clearly available” for the remainder of the conversation.&nbsp;</p>
<img src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/App_Zoom_1-ezgif.com-video-to-gif-converter.gif?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="" title="" data-has-syndication-rights="1" data-caption="&lt;em&gt;Gemini’s new crisis interface.&lt;/em&gt; " data-portal-copyright="" />
<p class="has-text-align-none">Google says it engaged with clinical experts for the redesign and is committed to supporting users in crisis. It also announced $30 million in funding globally over the next three years “to help global hotlines.”&nbsp;</p>

<p class="has-text-align-none">Like other leading chatbot providers, Google stressed that Gemini “is not a substitute for professional clinical care, therapy, or crisis support,” but acknowledged many people are <a href="https://www.theverge.com/report/866683/chatgpt-health-sharing-data">using it for health information</a>, including during moments of crisis.</p>

<p class="has-text-align-none">The update comes amid broader scrutiny over how adequate the industry’s safeguards actually are. Reports and investigations, including our <a href="https://www.theverge.com/report/841610/ai-chatbot-suicide-safety-failure">probe</a> into the provision of crisis resources, frequently flag cases where chatbots fail vulnerable users, by helping them <a href="https://www.theverge.com/news/818508/chatbot-eating-disorder-mental-health">hide eating disorders</a> or <a href="https://www.theverge.com/ai-artificial-intelligence/892978/ai-chatbots-investigation-help-teens-plan-violence">plan shootings</a>. Google often fares better than many rivals in these tests, but is not perfect. Other AI companies, including <a href="https://www.theverge.com/news/718407/openai-chatgpt-mental-health-guardrails-break-reminders">OpenAI</a> and <a href="https://www.theverge.com/news/718407/openai-chatgpt-mental-health-guardrails-break-reminders">Anthropic</a>, have also taken steps to improve their detection and support of vulnerable users.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[Chatbots are now prescribing psychiatric drugs]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/906525/ai-chatbot-prescribe-refill-psychiatric-drugs" />
			<id>https://www.theverge.com/?p=906525</id>
			<updated>2026-04-03T09:09:04-04:00</updated>
			<published>2026-04-03T07:43:21-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Health" /><category scheme="https://www.theverge.com" term="Science" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Utah is allowing an AI system to prescribe psychiatric drugs without a doctor. It’s only the second time the state — and the country — has delegated this kind of clinical authority to AI. State officials say it could bring costs down and ease care shortages, but physicians warn the system is opaque, risky, and [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="An illustration of a robot psychiatrist on an orange background" data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/STKS524_AI_HEALTH_E.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Utah is allowing an AI system to prescribe psychiatric drugs without a doctor. It’s only the second time the state — and the country — has delegated <a href="https://commerce.utah.gov/ai/agreements/doctronic/">this kind</a> of clinical authority to AI. State officials say it could bring costs down and ease care shortages, but physicians warn the system is opaque, risky, and unlikely to expand mental health care to those who need it.</p>

<p class="has-text-align-none">The one-year pilot, <a href="https://commerce.utah.gov/ai/agreements/ai-legion-health/">announced last week</a>, will allow Legion Health’s AI chatbot to renew certain psychiatric medication prescriptions for eligible patients. The San Francisco startup promises Utah-based patients “fast, simple refills” through a $19-a-month subscription. The program starts at some point in April, though the company is only operating a waitlist at the moment.</p>

<figure class="wp-block-pullquote"><blockquote><p>The AI chatbot will renew certain psychiatric medication prescriptions for eligible patients.</p></blockquote></figure>

<p class="has-text-align-none">The program is deliberately narrow in scope, limited both in terms of the medications it covers and the conditions patients must meet to qualify. According to Legion’s <a href="https://commerce.utah.gov/wp-content/uploads/2026/03/Legion-Agreement.pdf">agreement</a> with Utah’s Office of Artificial Intelligence Policy, the chatbot can renew only 15 lower-risk maintenance medications that have already been prescribed by a clinician. That includes fluoxetine (Prozac), sertraline (Zoloft), bupropion (Wellbutrin), mirtazapine, and hydroxyzine, commonly used to treat anxiety and depression. Patients must also be considered stable: Anyone with a recent dose or medication change or a psychiatric hospitalization in the last year is excluded, and patients must check in with a healthcare provider every 10 refills or after six months, whichever comes first.&nbsp;</p>

<p class="has-text-align-none">The system cannot issue new prescriptions or handle medications that require closer clinical oversight, including drugs that need blood-test monitoring. Controlled substances are also barred, ruling out many ADHD medications. The exclusion of benzodiazepines, used for anxiety; antipsychotics, used for conditions like schizophrenia and bipolar disorder; and lithium — widely considered the gold-standard treatment for bipolar disorder — leaves many more complex psychiatric cases outside the pilot’s scope.&nbsp;&nbsp;&nbsp;</p>

<p class="has-text-align-none">To use the system, patients must opt in, verify their identity, and prove they already have a prescription, such as by providing a photo of the label or pill bottle. They are then asked about their symptoms, as well as side effects and efficacy of the medication. They’re asked questions about suicidal thoughts, self-harm, severe reactions, and pregnancy in order to log red flags. If any answers fall outside of the pilot’s low-risk criteria, the cases are supposed to be escalated to a clinician before any refill is issued. Patients and pharmacists can also request human review.&nbsp;</p>

<p class="has-text-align-none">“By safely automating the renewal process for maintenance medications, we are allowing patients to get the care they need much more quickly and affordably,” state officials <a href="https://commerce.utah.gov/ai/agreements/ai-legion-health/">said</a> when announcing the pilot. Over time, they said, the program could free healthcare providers to “focus their time on more complex, higher-risk patient needs” and help address shortages that have left 500,000 Utah residents without access to mental health care. Legion cofounder and CEO Yash Patel has cast the program in even grander terms, <a href="https://www.linkedin.com/feed/update/urn:li:activity:7443373221556465665/?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAA_c7FUBY7bz7pBz3WXlrYwfv6FWQOewhxc">describing</a> it as a global first that will dramatically expand access to healthcare and mark “the beginning of something much bigger than refills.”&nbsp;</p>

<p class="has-text-align-none">Psychiatrists are less convinced. Brent Kious, a psychiatrist and professor at the University of Utah School of Medicine, told <em>The Verge</em> he thinks the “advantages of an AI-based refill system may be overstated.” He suspects the tool “will not increase access for those who are most in need of care.” The target patient would already have to be on a treatment plan with their psychiatrist to use the service.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>“It would be better if there were greater transparency, more science, and more rigorous testing before people are asked to use this.”</p></blockquote></figure>

<p class="has-text-align-none">Kious suggests the automation could contribute to what he called an “epidemic of over-treatment” in psychiatry, with some patients staying on medication longer than they need to. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center and professor of psychiatry at Harvard Medical School, raised a related concern, noting that some people benefit from staying on psychiatric medications long-term, while others may benefit from reducing or stopping them. “They require more active management, changes, and careful consideration,” he said. That’s harder to do if you’re outsourcing refill check-ins to a chatbot.</p>

<p class="has-text-align-none">A bigger worry is whether a chatbot can safely automate even the most routine parts of psychiatric care. Torous said prescribing involves more than just checking for drug interactions, and questioned whether any AI system today “can understand the unique context and factors that go into a person&#8217;s medication plan.” Kious made a similar point: “This is something that could be safe in principle, but it all depends on the details.” Those concerns are compounded by how new these systems are — and how opaque they remain to outsiders. “It feels a bit like alchemy right now,” he said. “It would be better if there were greater transparency, more science, and more rigorous testing before people are asked to use this.”</p>

<p class="has-text-align-none">There are more immediate safety concerns, too. Kious said the chatbot could miss something during screening: It may not ask the right questions, a patient may not recognize a side effect, or they may answer inaccurately. Some may simply tell the system what it wants to hear in order to speed up care. He stressed that this is not unique to chatbots; much of psychiatry relies on self-report. But human clinicians usually have access to other information as well, he said, adding that when he sees patients, he pays attention not just to what they say, but also to what they do not say and how they present themselves. And while patients can also mislead human providers, Kious said a chatbot system may make it easier for patients to adjust their answers until they produce the desired outcome.&nbsp;</p>

<p class="has-text-align-none">Torous said there are more overt safety risks as well, which will be familiar to anyone following how chatbots fare in the real world. Legion’s chatbot is Utah’s second experiment with AI prescribing, joining an ongoing, broader pilot focused on primary care with Doctronic that <a href="https://www.politico.com/news/2026/01/06/artificial-intelligence-prescribing-medications-utah-00709122">launched</a> last December. Within weeks of going live, <a href="https://mindgard.ai/blog/doctronic-is-now-accepting-new-patients-and-unsafe-instructions">security researchers</a> had managed to push Doctronic’s system into spreading vaccine conspiracy theories, generating instructions for cooking meth, and tripling a patient’s opioid dosage. State officials say the more focused program with Legion is designed specifically to target “the state’s mental health shortage.”</p>

<p class="has-text-align-none">Legion says the pilot is operating under tight guardrails. In addition to what it calls “conservative eligibility gates,” its agreement with Utah requires it to provide detailed monthly reports and have the first 1,250 requests closely reviewed by human physicians, with periodic sampling of around 5 to 10 percent of requests thereafter.&nbsp;</p>

<p class="has-text-align-none">Legion cofounder and president Arthur MacWaters told <em>The Verge</em> that “risks exist in any remote care model, whether AI-assisted or fully human-led” and stressed the company’s “workflow does not rely on a single self-reported answer to unlock treatment.” He said key safeguards include the pilot’s narrow limits on medications and patient eligibility, built-in AI safety screens, pharmacist involvement, and the ability to escalate to a clinician. “We see this as critical to expand access to hundreds of thousands of people in Utah who live in mental health shortage areas, as well as an important proving ground for AI in medicine.”&nbsp;</p>

<p class="has-text-align-none">MacWaters would not comment on additional use cases, medications, or expansions to other states, but said the firm is “excited for what the future holds.” He would not offer a timeline on Legion’s expansion plans either, though both MacWaters and Legion have publicly signaled broader ambitions beyond Utah: Legion’s refill site says the service will be available “nationwide 2026” and MacWaters has <a href="https://x.com/ArthurMacwaters/status/2037479464012259492?s=20">suggested</a> it “will be in every state very very quickly.”&nbsp;</p>

<p class="has-text-align-none">For the psychiatrists I spoke to, it all seems to raise a rather basic question: What problem is Legion really solving? Established patients often don’t even need an appointment to get a refill, Kious said, explaining that most psychiatrists are probably “happy to refill prescriptions for free and without an appointment” unless they are worried about the patient or the medication carries a meaningful risk. Those are the very cases Legion’s AI is barred from handling.&nbsp;</p>

<p class="has-text-align-none">“I would personally avoid it for now,” Torous said, adding that if you’ve found a good treatment plan that works for you, it’s probably best to stick with that clinician.&nbsp;</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[It’s not easy to get depression-detecting AI through the FDA]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/905864/depression-detecting-ai-kintsugi-clinical-ai-startup-shut-down" />
			<id>https://www.theverge.com/?p=905864</id>
			<updated>2026-04-02T11:33:23-04:00</updated>
			<published>2026-04-02T11:33:23-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Health" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Science" />
							<summary type="html"><![CDATA[For the past seven years, the California-based startup Kintsugi has been developing AI designed to detect signs of depression and anxiety from a person’s speech. But after failing to secure FDA clearance in time, the company is shutting down and releasing most of its technology as open-source. Some elements may even find a second life [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A vintage computer on a background of 1s and 0s with a brain on the screen representing AI" data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/09/STK_414_AI_CHATBOT_R2_CVirginia_D.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">For the past seven years, the California-based startup Kintsugi has been developing AI designed to detect signs of depression and anxiety from a person’s speech. But after failing to secure FDA clearance in time, the company is shutting down and releasing most of its technology as open-source. Some elements may even find a second life beyond healthcare, like detecting deepfake audio.&nbsp;</p>

<p class="has-text-align-none">Mental health assessments still largely rely on patient questionnaires and clinical interviews, rather than the lab tests or scans common in physical medicine. Instead of focusing on what someone is saying, Kintsugi’s software analyzes how it is being said. The idea isn’t new — speech patterns like pauses, sentence structure, or speed are known indicators of various mental health issues — but Kintsugi says its AI can pick up subtle shifts that may be less obvious to human observers, though it has not publicly detailed exactly which features drive its models&#8217; predictions. In <a href="https://www.annfammed.org/content/early/2025/01/07/afm.240091">peer-reviewed research</a>, the company reported results broadly in line with established self-report screening tools for depression using short speech samples.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>The company pitched the technology as a complement — or potential alternative — to self-reported screening tools.</p></blockquote></figure>

<p class="has-text-align-none">The company pitched the technology as a complement — or potential alternative — to self-reported screening tools like the Patient Health Questionnaire-9, or PHQ-9, a staple of primary care and psychiatry. These tools are supposed to be used alongside formal clinical assessment, and although they are widely validated, <a href="https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2795293#:~:text=Results%20There%20were%2052%20944,CI%2C%200.47%2D0.64%5D).">screening rates can be low</a>, they depend on patients accurately describing symptoms, and they <a href="https://www.statnews.com/2023/02/21/depression-test-phq9-zoloft-pfizer-mental-health/">may not capture the full set of symptoms</a> associated with mental health disorders. Kintsugi argued its voice-based model could provide a more objective signal, expand screening to more patients, and be deployed at scale across health systems, insurers, and employer programs. Doing so, however, would require FDA clearance.</p>

<p class="has-text-align-none">Kintsugi had been seeking FDA clearance through the agency’s “<a href="https://www.fda.gov/medical-devices/premarket-submissions-selecting-and-preparing-correct-submission/de-novo-classification-request">De Novo</a>” pathway, a route meant for novel, low-risk medical devices without an existing equivalent on the market. While intended to streamline approval for new kinds of products, it is still a process that can require years of data collection and regulatory review. Kintsugi’s founder and CEO Grace Chang told <em>The Verge</em> a lot of time was spent teaching the regulator about AI. The framework also fits AI poorly; much of it is designed with more traditional devices in mind — think hip implants, surgical tools, pacemakers — whose design remains largely fixed once approved. For AI systems, that can mean locking a model that would otherwise continue to be optimized and updated over time.</p>

<figure class="wp-block-pullquote"><blockquote><p>The framework fits AI poorly; much of it is designed with more traditional devices in mind.</p></blockquote></figure>

<p class="has-text-align-none">Despite the Trump administration’s hard push to cut red tape and get AI products into the real world as soon as possible, Chang said regulatory experts tell her that “there’s nothing that helps them do that except loud yelling from the top.” The approval process was further slowed by <a href="https://www.theverge.com/report/807850/government-shutdown-fcc-clearance-delaying-tech-products">federal government shutdowns</a>. The startup ran out of funding waiting for its final submission.</p>

<p class="has-text-align-none">Efforts to raise additional funds faltered as the company’s runway shortened. Rather than accept “predatory” short-term offers to meet payroll — Chang said one proposal offered around $50,000 a week in exchange for $1 million in equity — the team decided to open-source most of its technology so others might continue the work. Investors were not happy.&nbsp;</p>

<p class="has-text-align-none">Open-sourcing a mental health screening model also raises concerns about misuse. Tools designed to flag signs of depression or anxiety could, in theory, be deployed outside clinical settings, such as by employers or insurers, without the safeguards typically required in healthcare. Obviously that shouldn’t happen, but once released publicly there is little to prevent the technology from being used in ways its creators did not intend.&nbsp;</p>

<p class="has-text-align-none">There are other complications, too. Nicholas Cummins, a senior lecturer in speech analysis and responsible AI in health at King’s College London, told <em>The Verge</em> that open-source releases often lack the detailed “paper trail” regulators expect, including a clear record of how a model was trained, validated, and tested for safety. Without that, he said, bringing a product built on the technology through FDA approval could prove difficult.</p>

<figure class="wp-block-pullquote"><blockquote><p>Open-sourcing a mental health screening model also raises concerns about misuse.</p></blockquote></figure>

<p class="has-text-align-none">More likely, Cummins suggested, companies would treat the model as a starting point and layer their own data and validation processes on top. Even then, he cautioned, voice-based systems remain imperfect and carry a “reasonable” risk of errors, especially for conditions like depression, which manifest differently across individuals, languages, and cultural contexts and depend heavily on the diversity and structure of speech data used in training.</p>

<p class="has-text-align-none">Chang did not dismiss concerns about potential misuse, but said “it’s less of a concern in practice than it might appear in theory.” The organizations with the greatest incentives to abuse the technology, she argued, are also those that “face the highest barriers to actually deploying it.” In Chang’s view, “the more realistic risk is underuse, not misuse.”</p>

<p class="has-text-align-none">While Kintsugi’s mental health screening technology has been open-sourced, Chang said not all of the company’s technology has been released publicly. In part, that is due to security concerns, she said: chief among the withheld tools is <a href="https://www.kintsugihealth.com/api/voice-api-signal">technology</a> that can detect synthetic or manipulated voices.&nbsp;</p>

<p class="has-text-align-none">Chang said the capability emerged when the team experimented with AI-generated speech to strengthen its mental health models. The synthetic audio lacked the vocal signals the model was trained to recognize, revealing that it could be used to distinguish between human and AI-generated voices. Telling the two apart is a growing challenge given the proliferation of <a href="https://www.theverge.com/ai-artificial-intelligence/882956/ai-deepfake-detection-labels-c2pa-instagram-youtube">AI slop</a> and <a href="https://www.theverge.com/tech/663252/ai-bots-are-scamming-community-colleges">fraudulent deepfakes</a>, and one that has yet to be reliably solved. It is also a potentially lucrative opportunity, and, thankfully for Kintsugi, an area that is not subject to FDA oversight.&nbsp;</p>

<p class="has-text-align-none">Chang declined to speculate on her next move or whether Kintsugi’s security-focused technology might resurface, but she said she wishes someone else would build on the company’s work and carry it through the final stages of the FDA process. But without broader changes, Kintsugi’s shutdown is unlikely to be the last example of startup timelines clashing with medical regulation, and Chang said she hopes that reality doesn’t deter other founders from trying.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Robert Hart</name>
			</author>
			
			<title type="html"><![CDATA[Baidu’s robotaxis froze in traffic, creating chaos]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/905012/baidu-apollo-robotaxi-freeze-china" />
			<id>https://www.theverge.com/?p=905012</id>
			<updated>2026-04-01T07:29:26-04:00</updated>
			<published>2026-04-01T06:39:52-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Autonomous Cars" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Transportation" />
							<summary type="html"><![CDATA[Numerous robotaxis operated by Chinese tech giant Baidu froze in a major city on Tuesday, reportedly trapping passengers inside, stranding them on highways, and causing at least one accident in snarled traffic.  Police in Wuhan confirmed receiving multiple reports of Baidu’s Apollo Go robotaxis stopping in the middle of streets and being unable to move. [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="A Baidu Apollo robotaxi in Wuhan, China. | Image: Bloomberg via Getty Images" data-portal-copyright="Image: Bloomberg via Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/gettyimages-2152484525.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	A Baidu Apollo robotaxi in Wuhan, China. | Image: Bloomberg via Getty Images	</figcaption>
</figure>
<p class="has-text-align-none">Numerous robotaxis operated by Chinese tech giant Baidu froze in a major city on Tuesday, <a href="https://www.wired.com/story/robotaxi-outage-in-china-leaves-passengers-stuck-in-cars-on-highways/">reportedly</a> trapping passengers inside, stranding them on highways, and causing at least one accident in snarled traffic. </p>

<p class="has-text-align-none">Police in Wuhan <a href="https://www.wsj.com/business/autos/baidus-apollo-go-robotaxis-stall-in-chinas-wuhan-a9b143ad?gaa_at=eafs&amp;gaa_n=AWEtsqdXqKfhF249d_igjLuqLB_jaDXOeS-bCd2J2YpiqfM32h2GexPQC-gvHS-S1T0%3D&amp;gaa_ts=69cce8c4&amp;gaa_sig=zzFxk7kh7UvuBHX6zaCAuT1WfVKyb9b5r3tHHTtsXl_SGmComAeS5mCdWQrqstZCDQFPhWznjBJxrto19n1C5A%3D%3D">confirmed</a> receiving multiple reports of Baidu’s Apollo Go robotaxis stopping in the middle of streets and being unable to move. Police said no injuries have been reported and that preliminary investigations suggest an unspecified “system failure” is responsible for the outage.&nbsp;</p>

<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-dnt="true"><p lang="en" dir="ltr">NEW: Dozens of robotaxis by Baidu stopped on the road in Wuhan, causing crashes on highways and trapping passengers in the cars—some for more than an hour. One passenger told me it took her 30 minutes to even connect to a customer representative. <br><br>Here’s a video of a crash. <a href="https://t.co/fTitNMv8kj">pic.twitter.com/fTitNMv8kj</a></p>&mdash; Zeyi Yang 杨泽毅 (@ZeyiYang) <a href="https://twitter.com/ZeyiYang/status/2039153730533405102?ref_src=twsrc%5Etfw">April 1, 2026</a></blockquote>
</div></figure>

<p class="has-text-align-none">Wuhan is a major robotaxi hub for Baidu, which has <a href="https://www.france24.com/en/live-news/20260401-chinese-robotaxis-stall-in-apparent-malfunction-police">reportedly</a> deployed more than 500 driverless cars on its roads. It’s unclear how many vehicles malfunctioned. Local news reports <a href="https://www.reuters.com/world/asia-pacific/baidu-robotaxi-outage-wuhan-caused-by-system-failure-police-say-2026-04-01/">cited</a> by <em>Reuters</em> suggest at least 100 robotaxis were affected. Baidu did not immediately respond to <em>The Verge</em>’s request for comment.</p>

<p class="has-text-align-none">The incident has reignited debate over the safety of self-driving cars in China, one of the world’s most enthusiastic adopters of the technology, amid an <a href="https://www.theverge.com/2024/11/22/24303299/baidu-apollo-go-rt6-robotaxi-unit-economics-waymo">aggressive global expansion</a>. Baidu is a major operator, <a href="https://ir.baidu.com/static-files/7004dcea-6dcf-4a4d-be0d-f6fa1eb2695f">deploying</a> robotaxis in 26 cities worldwide, including partnering with Uber in <a href="https://www.theverge.com/news/707347/uber-baidu-robotaxi-deal-autonomous-ridehail">London</a> and <a href="https://www.theverge.com/transportation/876288/uber-to-do-baidu-robotaxis-in-dubai">Dubai</a>.</p>
						]]>
									</content>
			
					</entry>
	</feed>
