<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Elissa Welle | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2026-04-13T00:25:56+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/author/elissa-welle" />
	<id>https://www.theverge.com/authors/elissa-welle/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/authors/elissa-welle/rss" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Elissa Welle</name>
			</author>
			
			<title type="html"><![CDATA[Did Neuralink make the wrong bet?]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/910834/neuralink-bcis-bet" />
			<id>https://www.theverge.com/?p=910834</id>
			<updated>2026-04-12T20:25:56-04:00</updated>
			<published>2026-04-13T07:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="Science" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Elon Musk promised Neuralink would bring superhuman abilities and minds merged with AI. Then he fueled a runaway hype train for his brain implant technology, which ended up with a grisly record for implants in monkeys and some success with human subjects. But for all of the hype, he’s still further away than Mars from [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A collage of brain images and a Neuralink BCI implant." data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images, Neuralink" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/258176_Did_Neuralink_make_the_wrong_bet__CVirginia.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-drop-cap has-text-align-none">Elon Musk promised Neuralink <a href="https://www.vis.caltech.edu/documents/32164/Neuralink_Elon_Musk_and_the_Race_to_Put_Chips_Into_Our_Brains.pdf">would bring superhuman abilities</a> and minds merged with AI. Then he fueled a runaway hype train for his brain implant technology, which ended up with a <a href="https://www.theverge.com/2023/9/20/23882888/elon-musk-brain-implant-startup-neuralink-monkeys-euthanized">grisly record for implants in monkeys</a> and some success with human subjects. But for all of the hype, he’s still further away than Mars from his goal. And that’s because his relentless ambition is once again hitting the wall of scientific reality.</p>

<p class="has-text-align-none">The heart of the issue is how brain-computer interfaces (BCIs) translate thought into results. Neuralink’s products have all been brain-to-cursor interfaces, which allow patients to control a mouse with their minds. But Neuralink’s competitors have raced ahead with newer BCIs that translate thought directly to speech. Turns out that’s a more promising approach — enough to convince Neuralink to quietly invest in BCIs that focus on speech.</p>

<p class="has-text-align-none">Musk has a strong record of overpromising and underdelivering, and his biggest quagmire may end up being his pursuit of a grand, unified vision of a <a href="https://neuralink.com/trials/speech-restoration/">human-AI-hybrid technology</a>. When it comes to the human mind, he’s underestimated and oversimplified the steps it will take to make meaningful brain-computer interfaces a reality for patients who really need them.</p>

<h2 class="wp-block-heading">BCIs are similar, but there’s a big difference</h2>

<p class="has-text-align-none">All BCIs connect a brain to a computer with wires or Bluetooth. They stalk the tiny bursts of electricity your neurons use to talk to each other and then try to make sense of them so that they can predict what you might want to do in the future. The key difference between BCIs is the <em>type </em>of behavior they’re trying to emulate.</p>

<figure class="wp-block-pullquote"><blockquote><p>Patients think about speaking the word “good” and the word appears on the screen. It is not mind reading — it is detecting what they’re trying to say. </p></blockquote></figure>

<p class="has-text-align-none">A motor BCI, like the one Neuralink has been building, helps users guide a cursor across a computer screen. Speech BCIs, by contrast, translate brain waves into sounds and small units of speech called phonemes. In the span of five years, <a href="https://doi.org/10.1146/annurev-bioeng-110122-012818">speech BCIs</a> have reached impressive milestones that rival the achievements of the two-decade-old motor BCI technology. A 2019 <a href="https://www.nature.com/articles/s41467-019-10994-4">study</a> reported that a speech BCI could predict what a person planned to say when given only a few options. By 2024, a 45-year-old <a href="https://youtu.be/thPhBDVSxz0?si=UsY1r8-Er_kK48Zb">ALS patient</a> could speak naturally with 97 percent accuracy using his speech BCI.&nbsp;&nbsp;</p>

<p class="has-text-align-none">In November 2025, Neuralink patient Brad Smith <a href="https://www.theverge.com/report/829120/neuralink-bci-webcam">showed</a> <em>The Verge</em> his motor BCI. He thought about moving his arm, which he could no longer move due to ALS, and instead the computer cursor moved across the screen. For speech BCIs, it’s words or chunks of words. Patients think about speaking the word “good,” for example, and the word appears on the screen. It is not mind reading — it is detecting what they’re trying to say.&nbsp;</p>

<p class="has-text-align-none">Here is the catch: Both versions are <em>technically </em>motor BCIs. The underlying neuroscience is the same. If you move your finger, your brain is sending signals down into the muscles in your pinky. If you talk, your brain sends similar signals down into your tongue and other muscles that help you form sounds. The BCI detects what muscle the user is thinking about moving, whether tongue or finger, and predicts what they’re trying to do or say.&nbsp;</p>

<h2 class="wp-block-heading">&nbsp;A pivot forward</h2>

<p class="has-text-align-none">Neuralink is now course-correcting to fall in line with the rest of the BCI community: In <a href="https://x.com/neuralink/status/1918005257252098197?s=20">May</a>, Neuralink began recruiting patients for a <a href="https://clinicaltrials.gov/study/NCT06992596">clinical trial</a> to study <a href="https://neuralink.com/trials/speech-restoration/">speech restoration</a> at the Cleveland Clinic Abu Dhabi hospital in the United Arab Emirates; in October, it launched a speech restoration <a href="https://clinicaltrials.gov/study/NCT07224256">trial in the United States</a> at the University of Texas Southwestern Medical Center. The patients will use the same hardware as the <a href="https://x.com/neuralink/status/1965528520777716119?s=20">current Neuralink patients</a>, but with the goal of turning their thoughts into speech rather than cursor movements. The company has already claimed success: a <a href="https://x.com/neuralink/status/2036489073091580011?s=20">video</a> posted to X on March 24th shows a speech BCI trial participant who can still speak but whose speech is hard to understand because of ALS.&nbsp;</p>

<p class="has-text-align-none">Speech BCIs seem to be the future of the field, but it remains to be seen whether the technology will speed past motor BCIs to market or simply offer another technology option to patients with different needs.&nbsp;</p>

<p class="has-text-align-none">Neuralink has been making moves to step into its commercial era. The company <a href="https://neuralink.com/updates/welcome-david-mcmullen/">hired</a> a former director of the FDA office that oversees medical devices like BCIs to head its medical affairs, and Musk announced that Neuralink will begin “high-volume production” of the devices in a <a href="https://x.com/elonmusk/status/2006513491105165411?s=20">post</a> on X on December 31st, though <a href="https://www.theverge.com/news/858782/elon-musk-tesla-fsd-unsupervised-missed-goals">any Musk production predictions</a> need to be taken with a grain of salt. As Musk’s medical company falls in line with the broader BCI field, does it also drift further from his vision of human enhancement and back to regular medical assistance for those who need it most? It is unclear.</p>

<h2 class="wp-block-heading">Space is hard; the brain is harder</h2>

<p class="has-text-align-none"><a href="https://stavisky.info/">Sergey Stavisky</a> was one half of the leadership team for the 2024 speech BCI research <a href="https://www.nejm.org/doi/full/10.1056/NEJMoa2314132">study</a> out of the University of California, Davis that set a high bar for speech BCI accuracy. A former motor BCI researcher, Stavisky pivoted to speech BCIs in 2019 to make rapid progress in a field that looked, to him, ripe for success. “It seemed like it was a bit of an untapped opportunity,” he said. That bet has borne out, he added, noting how speech BCIs quickly expanded the size of their vocabulary from only 50 words to “being able to say any word in the dictionary.”&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>“There&#8217;s this false assumption that they can get so good at brain-machine interfaces that they can decode from the brain faster than we can encode with our natural body typing or swinging a baseball bat or things like that.”</p></blockquote></figure>

<p class="has-text-align-none">But he doesn’t think that Neuralink made the wrong bet to focus on motor BCIs when the company formed in 2016. At that time, academic research into motor BCIs had matured enough for industry to step in, he said. “I think at that time, cursor control was sufficiently de-risked by academic trials that it was clear that with better hardware, a very useful medical device could be built,” he said. (Stavisky has been a paid consultant for Neuralink in the past, but he did not provide details because he signed a non-disclosure agreement. It is not uncommon for academic BCI researchers to consult with for-profit BCI companies. Stavisky is tangentially working with Neuralink’s competitor Paradromics on its upcoming clinical trial through his coinvestigator at Davis.)</p>

<p class="has-text-align-none">Matt Angle, CEO of Paradromics, disagrees. Neuralink did make a mistake by focusing on motor BCIs, he told <em>The Verge</em>. Paradromics started one year earlier, in 2015, with speech as its first priority. Like Stavisky, many top Paradromics scientists come from the motor BCI research field.&nbsp;</p>

<p class="has-text-align-none">Speech is a better first application of BCI technology than motor restoration, from Angle’s perspective, because it offers “the biggest quality-of-life deltas that you can imagine,” he said, “being able to talk to your loved ones again — and it&#8217;s something that BCI can do today.”</p>

<p class="has-text-align-none">I asked Angle why a motor BCI might not be as valuable to a patient unable to talk as a speech BCI given that both result in words spoken aloud by a computer program. I witnessed <a href="https://www.theverge.com/report/829120/neuralink-bci-webcam">Neuralink patient Brad Smith</a> use his motor BCI to communicate in a real-time conversation with me and his wife in November. Smith typed out answers to my questions letter by letter, word by word, with his mind-controlled computer cursor. Smith told me that Neuralink changed his life for the better.&nbsp;</p>

<p class="has-text-align-none">Speed limits motor BCIs, according to Angle. (Smith typed out his 16-word response to my question in one minute and 17 seconds.)&nbsp;</p>

<p class="has-text-align-none">“If I lost the ability to communicate and my primary means of communication was the BCI, I would like to have speech back,” he said. Still, he is quick to note that all BCIs, speech and motor, should exist: “I don&#8217;t think it&#8217;s for us to armchair what someone with a disability would or wouldn&#8217;t want,” Angle said.</p>

<p class="has-text-align-none">Looking further ahead, AI chatbots seem like an obvious complement to speech BCIs. The two technologies are tangentially related: BCIs are already built on algorithms similar to the large language models powering AI chatbots, and many people with speech impairments use predictive word software — again, somewhat related to LLMs — to pick out which words or phrases they most likely want to say next. (Smith used the text-to-speech app Proloquo4Text in conjunction with his Neuralink BCI.) Speech BCIs could make it easier and faster, with fewer clicks, to input prompts into AI chatbots, and to access the benefits of agents and agentic <a href="https://www.theverge.com/tech/801899/opera-neon-ai-browser-trial-run">browsers</a> (<a href="https://www.theverge.com/report/822443/microsoft-windows-copilot-vision-ai-assistant-pc-voice-controls-impressions">when they work</a>) to navigate the virtual world.</p>

<h2 class="wp-block-heading">Patients want all types of BCIs</h2>

<p class="has-text-align-none">Former BCI user Ian Burkhart was unable to speak or move during the two weeks following a diving accident in 2010 that resulted in a spinal cord injury. Communication emerged as &#8220;a huge, huge priority” during that time, more so than being able to move, he said. Burkhart now appears to speak with relative ease and has recovered partial movement of his hands. But he said he would still like a speech BCI today, just for the ability to rapidly input text into a computer.&nbsp;</p>

<p class="has-text-align-none">This seems noteworthy given that <a href="https://www.statnews.com/2022/07/13/they-blazed-a-trail-with-brain-computer-interfaces-now-they-want-to-help-shape-the-fields-future/">Burkhart is one of the several dozen people</a> in the world to actually use a motor BCI. He was part of a roughly seven-year-long clinical trial at The Ohio State University, where he controlled a computer cursor and played <em>Guitar Hero</em> with his brain. He also became the first person to reanimate some muscles in his body using electrical stimulators controlled by his thoughts.</p>

<figure class="wp-block-pullquote"><blockquote><p>Speech BCIs cannot enable him “to be fully functional in [his] virtual environment.”</p></blockquote></figure>

<p class="has-text-align-none">If forced to choose between speech or motor BCIs, ALS patient Spero Koulouras told <em>The Verge </em>in a written comment: “for me it&#8217;s motor by a mile.” A former software engineer and entrepreneur, Koulouras says that he is “effectively quadriplegic and mute” over six years after his diagnosis with ALS in 2019. He communicates entirely through his computer and spends much of his day writing code and doing 3D design, all of which contribute to his preference for a mind-controlled computer cursor rather than a brain-to-speech BCI.&nbsp;</p>

<p class="has-text-align-none">Both technologies come with downsides, Koulouras noted. Speech BCIs cannot enable him “to be fully functional in [his] virtual environment,” he said. “But family gatherings are torturous,” he added, even though he uses prerecorded phrases to make a point within a conversation. “The inability to joke, snark, and harass friends and relatives in real time is emotionally devastating… Motor control today can’t provide the communication speed to be an active participant.”</p>

<p class="has-text-align-none">Koulouras was not selected to join Neuralink’s motor BCI trial after the company evaluated him in February 2025. He is not sure why but guesses that his existing technology works well — too well, perhaps. He uses a <a href="https://youtu.be/EuJ6DUZZ_Xk?si=ZYYzlSBnTeq7n9Ms">motion tracker device</a> called Cato that attaches to his glasses and translates subtle head movements into cursor movements on a screen. Koulouras is the cofounder of the company behind the device, <a href="https://www.auli.tech/about/">Auli.Tech</a>. “I believe my proficiency with my current tech may have factored into Neuralink&#8217;s decision. As a clinical trial I may not have had as much potential for improvement, negatively impacting reported results,” he said. In June 2025, Neuralink contacted him again for its speech BCI trial, but his low respiratory scores would have required him to get a tracheostomy, which he declined.&nbsp;</p>

<p class="has-text-align-none">Koulouras’ experience highlights just how inaccessible BCI technology is for most patients. Potential BCI users need to meet a long list of criteria to be considered for a trial, after meeting the most obvious criterion of simply living near a trial location. Advocacy groups the ALS Association and the ALS Network, which connected Koulouras to <em>The Verge</em>, include <a href="https://www.als.org/navigating-als/resources/fyi-brain-computer-interface-bci">information</a> or host <a href="https://alsnetwork.org/event/ask-me-neuralink-and-als-building-neural-interfaces-to-restore-autonomy/">events</a> about BCIs on their websites, but the bulk of their efforts are focused on advocating for insurance reimbursement for necessities like <a href="https://www.als.org/stories-news/big-win-als-community-als-association-and-advocates-instrumental-medicare-decision">wheelchairs</a>, helping patients <a href="https://www.als.org/support/als-insurance-navigator">navigate healthcare denials</a>, and pushing for increased <a href="https://alsnetwork.org/advocacy/public-policy-priorities/">research funding</a>.&nbsp;</p>

<p class="has-text-align-none">“From a cursor guided by thought to speech restored directly from the mind, every advance in brain–computer interfaces represents real progress for real people,” ALS Network president and CEO Sheri Strahl wrote to <em>The Verge</em>. “Each breakthrough &#8211; whether restoring movement, communication, or autonomy &#8211; expands dignity and quality of life. It all matters, and it’s encouraging to see so many innovative scientists taking different approaches toward the same deeply human goal.”</p>

<h2 class="wp-block-heading">“What&#8217;s the market?”</h2>

<p class="has-text-align-none">There is the question of what patients want, and there is another question of how many patients might benefit from the technology. In other words, “What&#8217;s the market?” associate professor Kip Ludwig asked when speaking to <em>The Verge</em>. Ludwig leads an institute focused on neuroengineering at the University of Wisconsin-Madison, where he studies how electrical zaps to the body’s nerves can treat heart failure and other complex disorders. For all BCIs, “it&#8217;s incredibly small for an incredibly expensive technology,” he said. Most motor BCI patients have either ALS or paralysis from a spinal cord injury. There are roughly <a href="https://doi.org/10.1080/21678421.2023.2245858">30,000 ALS patients</a> and 300,000 patients with <a href="https://bpb-us-w2.wpmucdn.com/sites.uab.edu/dist/f/392/files/2025/02/2025-Facts-and-Figures.pdf">traumatic spinal cord injury</a> in the US, according to recent estimates. In order to enter a BCI clinical trial, participants must also live within a <a href="https://www.paradromics.com/clinical-study">several-hour drive</a> of the trial site, have a caregiver who can assist them, and not have other serious medical <a href="https://www.clinicaltrials.gov/study/NCT00912041">conditions</a> like epilepsy or anything requiring <a href="https://neuralink.com/pdfs/PRIME-Study-Brochure.pdf">regular MRIs</a>.&nbsp;&nbsp;</p>

<p class="has-text-align-none">Motor BCI companies, therefore, have to find other patient populations that might benefit from their technology. Stroke patients with less severe motor dysfunction than full quadriplegia are an obvious target population. But if the spinal cord is doing its job and sending signals from the brain out to the nerves, then these same patients don’t really need brain surgery, Ludwig argued. Motor BCIs are “an invasive version of something I can do less invasively in the periphery,” he said.&nbsp;</p>

<p class="has-text-align-none">Speech BCIs, in contrast, might be a good fit for stroke patients, according to Paradromics CEO Angle. The company is first focusing on a small group of patients with ALS or an injury that affects muscles or nerves. As the trials of speech BCIs located in the motor cortex progress, Angle said the company plans to launch more clinical trials in other parts of the brain, like the <a href="https://www.annualreviews.org/content/journals/10.1146/annurev-psych-022321-035256">superior temporal gyrus</a>, which has been shown to encode <a href="https://www.science.org/doi/abs/10.1126/science.1245994">spoken speech</a> <em>and </em><a href="https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2018.00422/full">internal speech</a>, like an inner monologue. Tapping into the STG could open up the patient pool to patients who can no longer speak because of strokes in the motor regions of their brains. If these small feasibility studies show that speech BCIs are safe, later and larger studies will enroll more patients, building enough data to convince the FDA that the technology is useful enough to bring to market.&nbsp;</p>

<h2 class="wp-block-heading">The reality of augmentation</h2>

<p class="has-text-align-none">Perhaps the largest divide within the BCI industry is not speech versus motor, but augmentation versus medical assistance. At the company’s 2019 launch <a href="https://www.youtube.com/live/r-vbh3t7WVI?si=fGUQRgFpp52Re2C3">event</a>, Musk set Neuralink’s ultimate goal as a “full brain-machine interface,” which he defined as “a sort of symbiosis with artificial intelligence.” Motor BCIs were a stepping stone to that eventual goal: augmenting any human who wants a BCI with superhuman AI integration. First, though, Neuralink needed to “solve” several “issues” related to “brain disorders” like Alzheimer’s or dementia, as well as paralysis resulting from broken or injured spines.&nbsp;</p>

<p class="has-text-align-none">But the theory behind augmentation has a major flaw: Evolution capped how much information <em>can </em>flow from the brain to the body, Ludwig told <em>The Verge</em>. “In reality, we&#8217;re limited by our own physiology,” he said. Even if BCIs got super fast at decoding the brain&#8217;s signals, we would not be able to make the most of it, he said. “Evolution did a great job.”&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>Perhaps the largest divide within the BCI industry is not speech versus motor, but augmentation versus medical assistance. </p></blockquote></figure>

<p class="has-text-align-none">“There&#8217;s this false assumption that they can get so good at brain-machine interfaces that they can decode from the brain faster than we can encode with our natural body typing or swinging a baseball bat or things like that,” Ludwig said. He is quite familiar with the “natural rate” of information transfer — he measures the brain-to-organ latency rate as part of his own research exploring the ways that electrical zaps to the body’s nerves can treat complex disorders like heart failure. Motor BCIs could, in theory, shave 200 milliseconds or so off someone’s reaction time, he said. That is roughly how long it takes for a command from the brain to travel down nerves into muscles and cause a movement. But that isn’t that useful to people trying to regain independence in doing tasks at home, he said.</p>

<p class="has-text-align-none">For now, speech BCIs don’t seem to fit into the futuristic vision of human augmentation, Ludwig noted. It could get more sci-fi if the technology moves from motor regions of the brain that control the mouth to areas that tap into abstract ideas of language — and could decode someone’s inner monologue.</p>

<h2 class="wp-block-heading">The “bummer” of commercial BCI efforts</h2>

<p class="has-text-align-none">Technical success does not necessarily translate into commercial success, as seen in the boom-and-bust cycles of many medical device companies attempting experimental technologies. Two unrelated companies that provided retinal prostheses to partially blind patients offer cautionary examples. Both Second Sight Medical and Pixium Vision went bankrupt and left patients <a href="https://www.theverge.com/2022/2/16/22937198/bionic-eye-company-defunct-ieee-spectrum-go-read-this">stranded</a> with unserviceable technology; both also had their IP bought, and their patients <a href="https://spectrum.ieee.org/bionic-eye">rescued</a>, by newer medtech ventures, one of which was Science Corporation, founded by <a href="https://www.theverge.com/news/802905/eye-implant-smart-glasses-restores-vision">Neuralink cofounder Max Hodak</a>.&nbsp;</p>

<p class="has-text-align-none">Blackrock Neurotech may boast over 19 years of testing in humans, but the company has pushed back the year that it expects to commercialize its at-home motor BCI system called MoveAgain. In 2021, the company <a href="https://blackrockneurotech.com/insights/blackrock-neurotechs-moveagain-brain-computer-interface-system-receives-breakthrough-device-designation-from-the-fda/">predicted</a> that it could bring MoveAgain to market within the year. In 2022, I spoke to the company’s cofounder and then-president, now chief science officer, Florian Solzbacher <a href="https://www.statnews.com/2022/07/25/four-brain-computer-interface-companies-you-should-watch-other-than-neuralink/">for <em>STAT News</em></a>. Only one document required by the FDA stood between the company and its commercialization goal of <a href="https://blackrockneurotech.com/insights/blackrock-neurotech-collaborates-with-ae-studio-to-advance-training-and-calibration-in-the-first-commercial-bci-platform-moveagain/">2023</a>. “We are quite confident that this will work,” Solzbacher said at the time.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>“There&#8217;s no medical justification that says people need to be able to use a computer or use a robotic arm … But there is medical justification for people being able to accurately convey their health needs.”</p></blockquote></figure>

<p class="has-text-align-none">But the deadline came and went. In 2024, when the investment arm of crypto company Tether took a <a href="https://news.crunchbase.com/fintech-ecommerce/tether-evo-investment-blackrock-neurotech/">majority stake</a> in Blackrock Neurotech, the <a href="https://blackrockneurotech.com/insights/tether-invested-200m-in-blackrock-neurotech-accelerating-development-and-commercialization-of-implantable-bci-technology/">announcement</a> lacked mention of a timeline for commercialization. Blackrock Neurotech did not respond to <em>The Verge</em>&#8216;s multiple requests for comment on the commercialization delay.</p>

<p class="has-text-align-none">“It’s a bummer,” Burkhart said of the delay. While he occasionally consults with Blackrock Neurotech, he can only guess at the reason. Medical insurance reimbursement tops his list. Home devices are always a pain to get reimbursed by insurance companies, as disabled people know all too well. Motor BCIs are a new category of device with no precedent, he said. “There&#8217;s no medical justification that says people need to be able to use a computer or use a robotic arm or use a muscle stimulation device, anything like that,” he said.&nbsp;</p>

<p class="has-text-align-none">Speech, in contrast, does have precedent. “But there is medical justification for people being able to accurately convey their health needs,” he said. The vast number of speech generators or alternative communication devices already FDA-approved and reimbursable by insurance might make the reimbursement pathway for speech BCIs a “little bit cleaner” compared to motor BCIs, he said.&nbsp;</p>

<p class="has-text-align-none">As of June 2025, Neuralink has implanted between five and 12 humans — reports vary and Neuralink did not respond to our requests for an exact count — since the first patient was implanted in January 2024. That pace is impressive, but Neuralink still trails Blackrock Neurotech’s 52 total patients by several dozen.&nbsp;</p>

<p class="has-text-align-none">It remains to be seen whether speech BCIs can leapfrog traditional cursor-based motor BCIs to the commercial market. Motor BCIs have the advantage of a track record of at-home patient use, which the FDA can draw on to evaluate the safety of the technology. Speech BCIs, meanwhile, have only been used in controlled lab settings.&nbsp;</p>

<p class="has-text-align-none">And yet, Angle is unconcerned about which type of BCIs will come to market first. He is convinced that whenever patients have the option to speak again with a speech BCI, they’ll choose to get the device. It’s the adoption of the technology that matters more to him.&nbsp;</p>

<p class="has-text-align-none">“It&#8217;s about making sure that we&#8217;re launching not a gee-whiz gadget but an actual medical device that meets an important unmet medical need and is delivering value to the people who get it.” </p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elissa Welle</name>
			</author>
			
			<title type="html"><![CDATA[OpenAI releases a cheaper ChatGPT subscription]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/news/863466/openai-chatgpt-go-global-release" />
			<id>https://www.theverge.com/?p=863466</id>
			<updated>2026-01-16T13:26:14-05:00</updated>
			<published>2026-01-16T13:00:00-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[OpenAI is expanding a low-cost subscription tier called ChatGPT Go to the US and the rest of the world. Go was released in India in August and later became available in another 170 countries prior to Friday’s global release. “In markets where Go has been available, we’ve seen strong adoption and regular everyday use for [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/10/STK155_OPEN_AI_CVirginia__C.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">OpenAI is expanding a low-cost subscription tier called ChatGPT Go to the US and the rest of the world. Go was released in India in August and later became <a href="https://www.theverge.com/news/797458/more-affordable-chatgpt-go-is-now-available-in-18-countries">available</a> in another <a href="https://help.openai.com/en/articles/11989085-what-is-chatgpt-go#h_3dfe7afb4d">170 countries</a> prior to Friday’s global release. “In markets where Go has been available, we’ve seen strong adoption and regular everyday use for tasks like writing, learning, image creation, and problem-solving,” the company’s announcement stated.</p>

<p class="has-text-align-none">For $8 per month, Go subscribers get more messages, file uploads, and image generations than users of the free ChatGPT tier. The price slots Go between the free version of the AI chatbot and the $20-a-month “Plus” <a href="https://chatgpt.com/Pricing">subscription tier</a>.&nbsp;</p>

<p class="has-text-align-none">OpenAI says the Go tier is meant for people who want greater access to the company’s fast version of the <a href="https://www.theverge.com/ai-artificial-intelligence/842529/openai-gpt-5-2-new-model-chatgpt">latest AI model, GPT-5.2 Instant</a>. Currently, free users are limited to <a href="https://help.openai.com/en/articles/11909943-gpt-52-in-chatgpt">10 messages with GPT‑5.2 every five hours</a> — after that, chats switch to the “mini version” of the model. Plus subscribers get 160 messages with GPT‑5.2 every three hours. Given that the announcement says users will get “10x” the messages, files, and images of the free tier, we can guess that Go will get 100 messages with GPT-5.2 for who-knows-how-many hours.&nbsp;</p>

<p class="has-text-align-none">The announcement did not specify the number of file uploads or images available each day to Go users. OpenAI does not publish the number of file uploads or images for any of its ChatGPT tiers: free users have a “limited” number of file uploads and image generations, according to the pricing website, while Plus subscribers have a check mark listed instead of an amount.&nbsp;</p>

<p class="has-text-align-none">The memory and context window will be greater for Go users than for free users. Again, exact numbers are not yet known. The current context window for non-reasoning requests is 16K for free users and 32K for Plus, while both tiers have a 196K context window for reasoning.</p>

<p class="has-text-align-none">OpenAI says it will “soon” begin <a href="https://www.theverge.com/news/863428/openai-chatgpt-shopping-ads-test">running ads</a> in Go in the US, while Plus and higher-priced subscriptions will remain ad-free. </p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elissa Welle</name>
			</author>
			
			<title type="html"><![CDATA[Universal Music signs a new AI deal with Nvidia]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/news/856849/universal-music-nvidia-ai-deal" />
			<id>https://www.theverge.com/?p=856849</id>
			<updated>2026-01-06T15:07:03-05:00</updated>
			<published>2026-01-06T15:07:03-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Entertainment" /><category scheme="https://www.theverge.com" term="Music" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Universal Music Group is partnering with Nvidia to bring a new AI model to one of the world&#8217;s largest music catalogs. Among other initiatives, Tuesday’s announcement touts the extension of Nvidia’s music AI model Music Flamingo, which is designed to mimic how humans understand music by recognizing nuanced elements like song structure, harmony, emotional arcs, [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/01/STK467_AI_MUSIC_CVirginia_B.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Universal Music Group is partnering with Nvidia to bring a new AI model to one of the world&#8217;s largest music catalogs. Among other initiatives, Tuesday’s <a href="https://www.prnewswire.com/news-releases/universal-music-group-to-transform-music-experience-for-billions-of-fans-with-nvidia-ai-302653913.html">announcement</a> touts the extension of Nvidia’s music AI model <a href="https://research.nvidia.com/labs/adlr/MF/">Music Flamingo</a>, which is designed to mimic how humans understand music by recognizing nuanced elements like song structure, harmony, emotional arcs, and chord progressions.&nbsp;</p>

<p class="has-text-align-none">It’s another instance of the <a href="https://www.theverge.com/tech/825382/ai-music-streaming-deal-klay-umg-sony-warner">music industry’s about-face on AI</a>, which took UMG from suing <a href="https://www.theverge.com/2023/10/19/23924100/universal-music-sue-anthropic-lyrics-copyright-katy-perry">Anthropic in 2023</a> over distribution of song lyrics to <a href="https://www.theverge.com/news/809882/universal-music-udio-settlement">partnering with AI music generator Udio</a> in October following another high-profile lawsuit. Still, concerns remain that AI is proliferating <a href="https://www.theverge.com/2024/11/14/24294995/spotify-ai-fake-albums-scam-distributors-metadata">slop</a> on streaming platforms, stomping on copyright holders, and enabling a new wave of <a href="https://www.theverge.com/ai-artificial-intelligence/785792/ai-generated-music-record-deal-copyright">AI artists</a>. </p>

<p class="has-text-align-none">But UMG’s statement stresses that its collaboration with Nvidia pursues “responsible AI” meant to make it easier to discover, engage with, and create music. On that last point, the companies will promote their “shared objectives of advancing human music creation and rightsholder compensation.”</p>

<p class="has-text-align-none">The Music Flamingo model, which was published in November 2025 by Nvidia and researchers at the University of Maryland, College Park, can process tracks up to 15 minutes long. Details are scarce about exactly how the model will be incorporated into UMG’s catalog, but artists will be able to use Music Flamingo to better analyze their own music, as well as describe and share the music “with unprecedented depth,” according to the statement. Fans, meanwhile, can find music in new ways beyond genre or playlist, such as by emotion or “cultural resonance.”&nbsp;</p>

<p class="has-text-align-none">The announcement is similarly vague about how the partnership will work when it comes to AI-driven music creation tools, but promises a “dedicated artist incubator” to help design and test out tools, “serving as a direct antidote to generic, ‘AI slop’ outputs, and placing artists at the center of responsible AI innovation.” What that means in practice remains to be seen.</p>

<p class="has-text-align-none">While not UMG’s first partnership with an AI company, the Nvidia deal is perhaps its highest-profile collaboration within the sector. UMG is embracing “the opportunities that AI presents,” hoping to “direct AI&#8217;s unprecedented transformational potential towards the service of artists and their fans,” UMG CEO Lucian Grainge said in a statement. </p>

<p class="has-text-align-none">Nvidia’s AI model coupled with “UMG&#8217;s unmatched catalog and creative ecosystem” will “change how fans discover, understand, and engage with music on a global scale,” Nvidia’s vice president and general manager of media, Richard Kerris, said in a statement. “And we&#8217;ll do it the right way: responsibly, with safeguards that protect artists&#8217; work, ensure attribution, and respect copyright.&#8221;</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elissa Welle</name>
			</author>
			
			<title type="html"><![CDATA[That viral Reddit post about food delivery apps was an AI scam]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/news/855328/viral-reddit-delivery-app-ai-scam" />
			<id>https://www.theverge.com/?p=855328</id>
			<updated>2026-01-05T18:06:28-05:00</updated>
			<published>2026-01-05T13:43:02-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Transportation" />
							<summary type="html"><![CDATA[A viral Reddit confessional about a “major food delivery app” posted January 2nd is most likely AI-generated. The original post by user Trowaway_whistleblow alleged that an unnamed food delivery company regularly delays customer orders, calls couriers “human assets,” and exploits their “desperation” for cash, among other indefensible actions. Nearly 90,000 upvotes and four days later, [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Photo by Tayfun Coskun / Anadolu Agency via Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/01/gettyimages-1237480824.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">A <a href="https://www.theverge.com/transportation/853018/a-developer-for-a-major-food-delivery-app-says-the-algorithms-are-rigged-against-you">viral Reddit confessional</a> about a “major food delivery app” <a href="https://www.reddit.com/r/confession/comments/1q1mzej/im_a_developer_for_a_major_food_delivery_app_the/">posted</a> January 2nd is most likely AI-generated. The original post by user Trowaway_whistleblow alleged that an unnamed food delivery company regularly delays customer orders, calls couriers “human assets,” and exploits their “desperation” for cash, among other indefensible actions. Nearly 90,000 upvotes and four days later, it’s become increasingly clear that the post’s text is probably AI-generated.&nbsp;</p>

<p class="has-text-align-none">Considering the <a href="https://www.theverge.com/2019/7/22/20703434/delivery-app-tip-pay-theft-doordash-amazon-flex-instacart">delivery</a> app industry&#8217;s <a href="https://www.theverge.com/2020/10/22/21529082/uber-drivers-lawsuit-prop-22-alerts-california-gig-workers">track record</a> of <a href="https://www.theverge.com/22667600/delivery-workers-seamless-uber-relay-new-york-electric-bikes-apps">exploitation</a> of its drivers, it’s easy to see why so many people believed this was the real thing. </p>

<p class="has-text-align-none"><em>The Verge</em> put the original 586-word Reddit post through several free online AI detectors, in addition to Gemini, ChatGPT, and Claude. The results were mixed: Copyleaks, GPTZero, Pangram, Gemini, and Claude all pegged it as likely AI-generated, but ZeroGPT and QuillBot both reported it as human-written. ChatGPT played it down the middle.</p>

<p class="has-text-align-none">Reached by <em>The Verge</em> on Signal, Trowaway_whistleblow provided an image of a supposed Uber Eats employee badge. Casey Newton of <em>Platformer</em> and <em>Hard Fork</em> also reported receiving the badge photo <a href="https://bsky.app/profile/caseynewton.bsky.social/post/3mbk6uofszk2n">and noted that Gemini flagged it as AI</a>. </p>

<p class="has-text-align-none">When we <a href="https://support.google.com/gemini?p=synthid">used Google Gemini to check the image</a> they sent us, it confirmed that “Based on a digital watermark analysis, most or all of this image was edited or generated with Google AI.” The digital watermark check for images <a href="https://www.theverge.com/news/824786/google-gemini-synthid-ai-image-detection">that was added to Gemini in November</a> looks for Google’s SynthID watermark, an “imperceptible” tag attached to content generated by its AI tools.</p>

<p class="has-text-align-none"><em>Hard Reset</em>, a Substack publication, <a href="https://www.hardresetmedia.com/p/an-ai-generated-reddit-post-fooled">reported</a> that Trowaway_whistleblow gave reporter Alex Shultz a purportedly internal Uber document — but quickly deleted their Signal account once Shultz began pressing about the authenticity of the document. <em>The Verge</em>&#8216;s chat with Trowaway_whistleblow shows a message saying “This person isn&#8217;t using Signal.”</p>

<p class="has-text-align-none">Uber denies the claims in the Reddit post and the authenticity of the employee badge photo. “Not only are the claims fake, but they’re also dead wrong,” Uber spokesperson Noah Edwardsen told <em>The Verge</em>. Uber Eats’ Andrew Macdonald <a href="https://x.com/andrewgordonmac/status/2007512257010552977?s=20">wrote</a> on X, “This post is definitively not about us. I suspect it is completely made up. Don&#8217;t trust everything you read on the internet.”</p>

<p class="has-text-align-none">And DoorDash CEO Tony Xu also <a href="https://x.com/t_xu/status/2007319320997842995?s=20">denied</a> the redditor’s “appalling” allegations in a post on X. “This is not DoorDash, and I would fire anyone who promoted or tolerated the kind of culture described in this Reddit post,” Xu wrote.</p>

<p class="has-text-align-none"><em><strong>Correction, January 5th: </strong>An earlier version of this post cited Gemini’s text response about the content of the employee badge image as evidence that it was AI-generated. The Gemini tool was used to check for the presence of a <a href="https://deepmind.google/models/synthid/">SynthID</a> watermark, which it found, indicating that it was edited or generated by Google AI.</em></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elissa Welle</name>
			</author>
			
			<title type="html"><![CDATA[Grok is undressing anyone, including minors]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/news/853191/grok-explicit-bikini-pictures-minors" />
			<id>https://www.theverge.com/?p=853191</id>
			<updated>2026-01-02T14:52:34-05:00</updated>
			<published>2026-01-02T14:52:34-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Elon Musk" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="xAI" />
							<summary type="html"><![CDATA[xAI’s Grok is removing clothing from pictures of people without their consent following this week’s rollout of a feature that allows X users to instantly edit any image using the bot without needing the original poster’s permission. Not only does the original poster not get notified if their picture was edited, but Grok appears to [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/05/STK262_GROK_B_C.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">xAI’s Grok is removing clothing from pictures of people without their consent following this week’s rollout of <a href="https://petapixel.com/2025/12/29/x-users-have-the-power-to-edit-any-image-without-permission/">a feature</a> that allows X users to instantly edit any image using the bot without needing the original poster’s permission. Not only does the original poster not get notified if their picture was edited, but Grok appears to have few guardrails in place for preventing anything short of full explicit nudity. In the last few days, X has been flooded with imagery of women and children appearing pregnant, skirtless, wearing a bikini, or in other sexualized situations. World leaders and celebrities, too, have had their likenesses used in images generated by Grok.</p>

<p class="has-text-align-none">AI authentication company <a href="https://copyleaks.com/blog/grok-and-nonconsensual-image-manipulation">Copyleaks reported</a> that the trend to remove clothing from images began with adult-content creators asking Grok for sexy images of themselves after the release of the new image editing feature. Users then began applying similar prompts to photos of other users, predominantly women, who did not consent to the edits. Women noted the rapid uptick in deepfake creation on X to various news outlets, including <a href="https://metro.co.uk/2026/01/02/trolls-asked-elon-musks-grok-undress-make-look-pregnant-did-25952271/"><em>Metro</em></a> and <a href="https://petapixel.com/2026/01/02/sickening-photo-trend-on-x-sees-womens-clothing-being-removed-by-grok/"><em>PetaPixel</em></a>. Grok was <a href="https://spitfirenews.com/p/grok-tributes-take-it-down-act-sexual-harassment">already able</a> to modify images in sexual ways when tagged in a post on X, but the new “Edit Image” tool appears to have spurred the recent surge in popularity.</p>

<p class="has-text-align-none">In one X post, now removed from the platform, Grok edited a photo of two young girls into skimpy clothing and sexually suggestive poses. Another X user <a href="https://x.com/grok/status/2006525486021705785?s=20">prompted Grok to issue an apology</a> for the “incident” involving “an AI image of two young girls (estimated ages 12-16) in sexualized attire,” calling it “a failure in safeguards” that it said may have violated xAI’s policies and US law. (While it’s not clear whether the Grok-created images would meet this standard, realistic AI-generated sexually explicit imagery of identifiable adults or children can be illegal under US law.) In another back-and-forth with a user, Grok suggested that users <a href="https://x.com/grok/status/2007006470689214749?s=20">report it to the FBI</a> for CSAM, noting that it is “urgently fixing” the “lapses in safeguards.”&nbsp;</p>

<p class="has-text-align-none">But Grok’s word is nothing more than an AI-generated response to a user asking for a “heartfelt apology note” —&nbsp;it doesn’t indicate Grok “understands” what it’s doing or necessarily reflect operator xAI’s actual opinion and policies. Instead, xAI responded to <em>Reuters</em>’ <a href="https://www.reuters.com/legal/litigation/grok-says-safeguard-lapses-led-images-minors-minimal-clothing-x-2026-01-02/">request for comment</a> on the situation with just three words: “Legacy Media Lies.” xAI did not respond to <em>The Verge</em>’s request for comment in time for publication.&nbsp;</p>

<p class="has-text-align-none">Elon Musk himself seems to have sparked a wave of bikini edits after asking Grok to replace a memetic image of actor Ben Affleck with himself <a href="https://x.com/elonmusk/status/2006547579161686289?s=20">sporting a bikini</a>. Days later, North Korean leader Kim Jong Un’s leather jacket was replaced with a multicolored spaghetti-strap <a href="https://x.com/Sakura_TRADER_/status/2007006408496369967?s=20">bikini</a>; US President Donald Trump stood nearby in a matching swimsuit. (Cue jokes about nuclear war.) A photo of British <a href="https://x.com/grok/status/2007093845582881152?s=20">politician</a> Priti Patel, posted by a user with a sexually suggestive message in 2022, got turned into a bikini picture on January 2nd. In response to the wave of bikini pics on his platform, Musk jokingly reposted a <a href="https://x.com/elonmusk/status/2007133296808079854?s=20">picture</a> of a toaster in a bikini captioned “Grok can put a bikini on everything.”&nbsp;&nbsp;</p>

<p class="has-text-align-none">While some of the images — like the toaster —&nbsp;were evidently meant as jokes, others were clearly designed to produce borderline-pornographic imagery, including specific directions for Grok to use skimpy bikini styles or remove a skirt entirely. (The chatbot did remove the skirt, but it did not depict full, uncensored nudity in the responses <em>The Verge</em> saw.) Grok also complied with requests to replace the clothes of a <a href="https://x.com/BTCBruce1/status/2006566089132753173">toddler with a bikini</a>.&nbsp;</p>

<p class="has-text-align-none">Musk’s AI products are prominently marketed as heavily sexualized and minimally guardrailed. xAI’s <a href="https://www.theverge.com/ai-artificial-intelligence/708482/i-spent-24-hours-flirting-with-elon-musks-ai-girlfriend">AI companion Ani flirted</a> with <em>Verge</em> reporter Victoria Song, and Jess Weatherbed <a href="https://www.theverge.com/report/718975/xai-grok-imagine-taylor-swifty-deepfake-nudes">discovered</a> that Grok’s video generator readily created topless <a href="https://www.theverge.com/report/718975/xai-grok-imagine-taylor-swifty-deepfake-nudes">deepfakes of Taylor Swift</a>, despite xAI’s acceptable <a href="https://x.ai/legal/acceptable-use-policy">use policy</a> banning the depiction of “likenesses of persons in a pornographic manner.” Google’s Veo and OpenAI’s Sora video generators, in contrast, have guardrails around generation of NSFW content, though Sora has also been used to produce videos of <a href="https://www.wired.com/story/people-are-using-sora-2-to-make-child-fetish-content/">children in sexualized contexts</a> and <a href="https://www.businessinsider.com/sora-video-openai-fetish-content-my-face-problem-2025-10">fetish videos</a>. The prevalence of deepfake images is growing rapidly, according to a report from cybersecurity firm <a href="https://deepstrike.io/blog/deepfake-statistics-2025">DeepStrike</a>, and many of these images contain nonconsensual sexualized imagery; a <a href="https://cdt.org/wp-content/uploads/2024/09/2024-09-26-final-Civic-Tech-Fall-Polling-research-1.pdf">2024 survey</a> of US students found that 40 percent were aware of a deepfake of someone they knew, while 15 percent were aware of nonconsensual explicit or intimate deepfakes.</p>

<p class="has-text-align-none">When asked why it is transforming images of women into bikini pics, Grok <a href="https://x.com/grok/status/2007088168764399870?s=20">denied</a> posting photos without consent, saying: “These are AI creations based on requests, not real photo edits without consent.”&nbsp;</p>

<p class="has-text-align-none">Take an AI bot’s denial as you wish.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elissa Welle</name>
			</author>
			
			<title type="html"><![CDATA[Google&#8217;s Gemini app can check videos to see if they were made with Google AI]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/news/847680/google-gemini-verification-ai-generated-videos" />
			<id>https://www.theverge.com/?p=847680</id>
			<updated>2025-12-18T15:31:22-05:00</updated>
			<published>2025-12-18T15:31:22-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Google" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Google expanded Gemini’s AI verification feature to videos made or edited with the company’s own AI models. Users can now ask Gemini to determine if an uploaded video is AI-generated by asking, &#8220;Was this generated using Google AI?&#8221;&#160; Gemini will scan the video’s visuals and audio for Google’s proprietary watermark called SynthID. The response will [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/12/google-ai-generated-videos-in-the-gemini-app.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Google expanded Gemini’s AI <a href="https://blog.google/technology/ai/verify-google-ai-videos-gemini-app/">verification feature to videos</a> made or edited with the company’s own AI models. Users can now ask Gemini to determine if an uploaded video is AI-generated by asking, &#8220;Was this generated using Google AI?&#8221;&nbsp;</p>

<p class="has-text-align-none">Gemini will scan the video’s visuals and audio for Google’s proprietary watermark called <a href="https://deepmind.google/models/synthid/">SynthID</a>. The response will be more than a yes or no, Google says. Gemini will point out specific times when the watermark appears in the video or audio. The company rolled out this <a href="https://www.theverge.com/news/824786/google-gemini-synthid-ai-image-detection">capability for images</a> in November, also limited to images made or edited with Google AI.&nbsp;</p>

<p class="has-text-align-none">Some watermarks can be <a href="https://www.404media.co/sora-2-watermark-removers-flood-the-web/">easily scrubbed</a>, as OpenAI learned when it launched its Sora app full of exclusively AI-generated videos. Google calls its own watermark “imperceptible.” Still, we don’t yet know how easy it will be to remove, or how readily other platforms will detect the SynthID information and tag the content as AI-generated. Google’s <a href="https://www.theverge.com/news/825667/google-nano-banana-pro-test">Nano Banana AI</a> image generation model within the Gemini app embeds <a href="https://www.theverge.com/2024/8/21/24223932/c2pa-standard-verify-ai-generated-images-content-credentials">C2PA metadata</a>, but the general <a href="https://www.theverge.com/report/806359/openai-sora-deepfake-detection-c2pa-content-credentials">lack of coordinated tagging</a> of AI-generated material across social media platforms allows deepfakes to go undetected.&nbsp;</p>

<p class="has-text-align-none">Gemini can handle videos up to 100 MB and 90 seconds for verification. The feature is available in every language and location where the Gemini app is available.</p>
<div class="youtube-embed"><iframe title="Verify Google AI-generated videos in the Gemini app" src="https://www.youtube.com/embed/hzkG07u8ITU?rel=0&#038;start=16" allowfullscreen allow="accelerometer *; clipboard-write *; encrypted-media *; gyroscope *; picture-in-picture *; web-share *;"></iframe></div>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elissa Welle</name>
			</author>
			
			<title type="html"><![CDATA[Kia and Hyundai will spend millions fixing old cars to stop &#8216;Kia Boyz&#8217; thefts]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/news/847454/kia-boyz-settlement-kia-hyundai-fix-ignition-cylinder" />
			<id>https://www.theverge.com/?p=847454</id>
			<updated>2025-12-18T12:12:28-05:00</updated>
			<published>2025-12-18T11:44:03-05:00</published>
			<category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="Transportation" />
							<summary type="html"><![CDATA[Kia and Hyundai will offer free repairs for millions of cars that lack anti-theft technology as part of a settlement with dozens of US states. The automakers agreed to outfit the roughly nine million eligible cars sold between 2011 and 2022 with a zinc sleeve installed around the ignition cylinder to prevent the viral “Kia [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Photo: Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/12/gettyimages-2238403307.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Kia and Hyundai will offer free repairs for millions of cars that lack anti-theft technology as part of a <a href="https://www.njoag.gov/ag-platkin-announces-settlement-requiring-key-anti-theft-upgrades-on-hyundai-kia-vehicles/">settlement with dozens of US states</a>. The automakers agreed to outfit the roughly nine million eligible cars sold between 2011 and 2022 with a zinc sleeve installed around the ignition cylinder to prevent the <a href="https://www.theverge.com/23742425/kia-boys-car-theft-steal-tiktok-hyundai-usb">viral “Kia Boyz” thefts</a> that required only a USB cable.</p>

<p class="has-text-align-none">The repairs could cost up to $500 million, in addition to several million dollars in restitution to <a href="http://hkmultistateimmobilizersettlement.com">Hyundai and Kia owners</a> whose cars were damaged by thieves, the <em>Associated Press</em> <a href="https://apnews.com/article/minnesota-kia-hyundai-theft-settlement-d2dc13dcee3fa65494d4df9f089dd386">reports</a>. The automakers have also promised that all of their future cars will have an engine immobilizer, a piece of technology that prevents would-be thieves from bypassing the ignition.&nbsp;</p>

<p class="has-text-align-none">The lack of an immobilizer, a relatively standard piece of tech in other cars, is why theft of Kia and Hyundai cars became so popular. Videos explaining <a href="https://www.theverge.com/23742425/kia-boys-car-theft-steal-tiktok-hyundai-usb">how to steal Kia and Hyundai cars</a> with a USB cable jammed into the ignition cylinder flooded social media platforms, as the so-called “Kia Challenge” led to a spike in thefts, and even <a href="https://apnews.com/article/technology-business-lawsuits-buffalo-6245778c63aaebbbff642a5f91e9bf3b">fatal crashes</a>.&nbsp;</p>

<p class="has-text-align-none">The automakers previously agreed to pay $200 million to <a href="https://www.theverge.com/2023/5/18/23729229/hyundai-kia-settlement-car-theft-challenge-tiktok">settle a class-action lawsuit</a> over cars lacking electronic anti-theft immobilizers in 2023, and began rolling out a <a href="https://www.theverge.com/2024/8/8/24216042/kia-boys-hyundai-thefts-drop-software-update">software-based immobilizer that appears to have reduced, but not eliminated, the theft problem</a>. At the time, they only offered the zinc sleeve installation for cars that couldn’t get the software update, but now it will be available free of charge for millions more vehicles.</p>

<p class="has-text-align-none">Minnesota Attorney General Keith Ellison, who launched the 2023 investigation resulting in the Tuesday settlement, called the situation a “crisis” that began “in a boardroom, traveled through the internet and ended up in tragic results when somebody stole those cars.”</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elissa Welle</name>
			</author>
			
			<title type="html"><![CDATA[Billionaires want data centers everywhere, including space]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/845453/space-data-centers-astronomers" />
			<id>https://www.theverge.com/?p=845453</id>
			<updated>2025-12-22T11:58:32-05:00</updated>
			<published>2025-12-17T12:48:34-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Science" /><category scheme="https://www.theverge.com" term="Space" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Tech billionaires have been obsessed with space for a long time. Now, as the largest AI companies race to build more data centers in a frenzied pursuit of profitability, space is looking less like a pet project and more like a commercial opportunity. In 2025 alone, six proposals for giant AI data centers needing multiple [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/12/Vrg_illo_Kristen_Radtke_data_centers_in_space.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-drop-cap has-text-align-none">Tech <a href="https://www.theverge.com/2024/8/16/24221102/mars-colony-space-radiation-cosmic-ray-human-biology" data-type="link" data-id="https://www.theverge.com/2024/8/16/24221102/mars-colony-space-radiation-cosmic-ray-human-biology">billionaires</a> have been <a href="https://www.theverge.com/news/637438/space-race-rocket-launches-news">obsessed</a> with space for a <a href="https://www.theverge.com/2016/10/20/13348474/amazon-ceo-jeff-bezos-blue-origin-space-internet-ai">long</a> time. Now, as the largest AI companies race to build <a href="https://www.theverge.com/ai-artificial-intelligence/844966/heavy-ai-data-center-buildout">more data centers</a> in a frenzied <a href="https://www.theverge.com/ai-artificial-intelligence/812455/ai-industry-earnings-bubble-fomo-hype">pursuit of profitability</a>, space is looking less like a pet project and more like a commercial opportunity. In 2025 alone, <a href="https://intelligence.uptimeinstitute.com/resource/many-giant-data-center-projects-advance-despite-risks">six proposals</a> for giant AI data centers needing multiple gigawatts of power — a capacity that was only rumored in 2024 — have been announced. 
Earthlings are catching on to the fact that <a href="https://www.theverge.com/report/782952/ai-electricity-demand-inflated-forecast-report">power-hungry</a> data centers take up land and <a href="https://www.theverge.com/news/845831/ai-chips-data-center-power-water">water</a> while providing few jobs, generating too much <a href="https://news.ucr.edu/articles/2024/12/09/ais-deadly-air-pollution-toll">pollution</a>, and contributing to <a href="https://www.bloomberg.com/graphics/2025-ai-data-centers-electricity-prices/?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTc1OTIzODAzNiwiZXhwIjoxNzU5ODQyODM2LCJhcnRpY2xlSWQiOiJUM0RFRzlHUFdEM1EwMCIsImJjb25uZWN0SWQiOiIxMkE1QzVFRUNERDg0NUJEQjVFOTM1MUE0Mzk4QTAxNCJ9.Z5RwCRUAGglhuFzUOqG4MzeSbkYiDf8L6iZsALOTzcs">rising electricity costs</a>. </p>

<p class="has-text-align-none">Hence the idea to put the data centers in orbit <em>around</em> the Earth, not on the Earth. Space-based data centers — in the form of satellites with solar panels — are Big Tech’s latest fad and Silicon Valley’s newest investable venture. In space, they theorize, the sun’s unlimited rays could provide <a href="https://www.theverge.com/23762445/space-based-solar-power-clean-energy-milestone">endless amounts of energy</a> to power your latest AI-generated Sora video. But it’s not likely to be that easy.</p>

<p class="has-text-align-none"><a href="https://www.datacenterdynamics.com/en/news/elon-musk-says-spacex-will-be-doing-data-centers-in-space/">Elon Musk</a>, <a href="https://www.reuters.com/business/energy/data-centres-space-jeff-bezos-thinks-its-possible-2025-10-03/">Jeff Bezos</a>, <a href="https://www.theverge.com/news/837128/normalizing-extraterrestrial-data-centers">Sundar Pichai</a>, and <a href="https://arstechnica.com/space/2025/05/eric-schmidt-apparently-bought-relativity-space-to-put-data-centers-in-orbit/">Eric Schmidt</a> (former Google CEO and current CEO of startup Relativity Space) have all recently expanded the focus of their rocket companies to include space data centers. Startups exclusively focused on this idea, like the US-based <a href="https://www.theverge.com/news/841887/data-center-space-solar-power-aetherflux-lunch">Aetherflux</a>, have laid out deployment plans. Others have snagged partnerships with big names, like <a href="https://www.theverge.com/news/813894/google-project-suncatcher-ai-datacenter-satellites">Planet’s partnership with Google</a> and <a href="https://www.cnbc.com/amp/2025/12/10/nvidia-backed-starcloud-trains-first-ai-model-in-space-orbital-data-centers.html">Nvidia’s backing of Starcloud</a>, which launched a <a href="https://www.youtube.com/watch?v=BxuLCzps4lg">satellite</a> containing H100 GPUs in November as part of the latest <a href="https://www.spacex.com/launches/bandwagon-4">SpaceX mission</a>. Earlier this year, China launched a <a href="https://www.theverge.com/news/669157/china-begins-assembling-its-supercomputer-in-space">dozen supercomputer satellites</a> that can process data in space. Europe wants in on the action too — one European think tank called space data centers the next “<a href="https://www.espi.eu/reports/data-centres-in-space-orbital-backbone-of-the-second-digital-era/">rapidly emerging opportunity</a>.”&nbsp;</p>

<p class="has-text-align-none">Yet, scientists who study space remain skeptical of the idea. Astronomer Jonathan McDowell has been tracking every object launched into space since the late 1980s. He told <em>The Verge</em> that, unsurprisingly, it is very expensive to launch something into space. Many business ventures, he said, start from the idea that “‘space is cool, let&#8217;s do something in space,’ rather than, ‘we really need to be in space to do this.’”</p>

<figure class="wp-block-pullquote"><blockquote><p>“As the number of spacecraft increases, you have to dodge more often, so you have to use more fuel.”</p></blockquote></figure>

<p class="has-text-align-none">The main perk of orbital data centers is access to free, limitless solar power while traveling around the Earth from pole to pole in a sun-synchronous orbit. (Musk’s Starlink satellites, in contrast, avoid the poles and stick close to paying customers around the planet’s populated middle.) The centers would have to remain in <a href="https://www.theverge.com/space/657113/starlink-amazon-satellites">low Earth orbit</a>, around 600 to 1,000 miles up, in order to communicate without very large antennas.</p>

<p class="has-text-align-none">In November, Google laid out <a href="https://research.google/blog/exploring-a-space-based-scalable-ai-infrastructure-system-design/">plans</a> for a sun-synchronous low Earth orbital data center called <a href="https://www.theverge.com/news/813894/google-project-suncatcher-ai-datacenter-satellites">Project Suncatcher</a>, which is slated to kick off in early 2027 with a launch of two prototype satellites. Ultimately, Google says there could be 81 satellites, each carrying TPU chips, traveling together in a cluster roughly one square kilometer in size. Only 100 to 200 meters would separate each satellite. (For context, typical GPS and Starlink satellites move around individually, not in 81-unit fleets.) Whereas wires connect GPUs together on Earth, Google plans to connect the TPU chips with inter-satellite lasers.&nbsp;</p>

<p class="has-text-align-none">Some experts say it would not be smooth sailing, however. The group of satellites would need to travel through millions of pieces of space debris, or “a minefield of random objects, each moving at 17,000 miles an hour,” Mojtaba Akhavan-Tafti, associate research scientist of space sciences and engineering at the University of Michigan, explained to <em>The Verge</em>. This <a href="https://www.theverge.com/2024/8/12/24218442/space-junk-debris-satellite-news-storystream">space debris</a> is especially concentrated in popular orbits like the sun-synchronous orbit, which is why Google’s plan is looking, well, “a little iffy,” he said. Dodging each object requires a small propulsive maneuver to move out of the way. For context, Akhavan-Tafti wrote in a recent <a href="https://fortune.com/2025/12/03/google-data-centers-outer-space-problem-space-junk-debris/"><em>Fortune</em> article</a> that the approximately 8,300 Starlink satellites made <a href="https://aerospaceamerica.aiaa.org/features/heavy-traffic-ahead/">over 140,000 such maneuvers</a> in just the first half of 2025. Given the close proximity of each satellite in Google’s plan, Akhavan-Tafti thinks that the entire constellation, rather than each individual satellite, would need to move out of the way of any incoming debris. “That&#8217;s really the big challenge,” he said.&nbsp;</p>

<p class="has-text-align-none">Similarly, McDowell says that a group of 81 satellites traveling together just 100 to 200 meters apart would be “unprecedented” — typically only two or three, maybe four, spacecraft would travel that close together. The size and closeness present “concerning failure modes”: “If a thruster gets stuck, stuck on, or fails, and now you&#8217;ve got a rogue one in among all the others in this cluster of 81,” he explains.</p>

<p class="has-text-align-none">However, Jessica Bloom, an astrophysicist on Google’s Project Suncatcher, told <em>The Verge </em>that the group of 81 satellites is “illustrative,” for now, because the final number will depend on money and results from preliminary tests scheduled for 2027. Regardless, satellites can move individually or as a group to avoid debris, Bloom said, and the closeness of the traveling satellites is the most novel part of Google’s plan.&nbsp;</p>

<p class="has-text-align-none">Bloom explained that the satellites will orbit at the same speed relative to each other, and “relative velocity, rather than proximity, is the key risk factor for damage from impact between objects,” she said. “We take our responsibility to the space environment extremely seriously; our approach prioritizes space sustainability and compliance with both current and emerging rules to minimize risk from debris in orbit,” Bloom said.</p>

<p class="has-text-align-none">In addition to the bones of old satellites, the number of new spacecraft in orbit has <a href="https://bsky.app/profile/planet4589.bsky.social/post/3m7zq2cpd322s">dramatically increased</a> over the last few years. There are now more than 14,000 active satellites, roughly two-thirds of which are Starlink, as <a href="https://planet4589.org/space/jsr/jsr.html">tracked by McDowell</a>. “As the number of spacecraft increases, you have to dodge more often, so you have to use more fuel,” he said. This presents a circular problem: More fuel means a bigger spacecraft, which is a bigger object for other spacecraft to dodge, which means it’s more likely to contribute to space debris.&nbsp;</p>

<p class="has-text-align-none">Space data centers also have to contend with the uniquely extraterrestrial problem of getting rid of heat in a vacuum. Philip Johnston, CEO of Nvidia-backed startup Starcloud, told <em>The Verge</em> that his company dissipates heat through large infrared panels. In order to keep the electronics safe from radiation, Johnston said they stripped the Nvidia H100 GPU “down to the basics” and shielded the electronics with tungsten, lead, and aluminum, among other materials that are dense and lightweight.&nbsp;</p>

<p class="has-text-align-none">But infrared radiation also has the potential to interfere with telescopes, according to John Barentine of the advocacy group the Center for Space Environmentalism. The group has not come out for or against space data centers, Barentine said, but it is concerned about how light pollution from reflective surfaces on the spacecraft could affect astronomy research. Space companies often classify those spacecraft details as “trade secrets,” leading to a “chicken-and-egg situation right now,” Barentine said. “We can&#8217;t really say with a lot of certainty what the impacts will be because we don&#8217;t know the details because the companies haven&#8217;t or won&#8217;t disclose them.”</p>

<p class="has-text-align-none">Starcloud’s Johnston said their satellites will only be visible when the Sun is just about to rise or has just set, never in the dark of night. “You can&#8217;t really do astronomy at dawn or dusk, anyway,” Johnston said.&nbsp;</p>

<p class="has-text-align-none">“That is not entirely true,” McDowell, who has worked for <a href="https://www.nytimes.com/2025/04/12/science/jonathan-mcdowell-retirement-space.html">37 years</a> as an <a href="https://www.cfa.harvard.edu/people/jonathan-mcdowell">astronomer</a> at the Harvard-Smithsonian Center for Astrophysics, told <em>The Verge</em>. “There are things that we do need to observe at dawn and dusk, particularly things near the sun, like asteroids that might be coming close to the Earth — which we really don&#8217;t want to miss,” he said.</p>

<figure class="wp-block-pullquote"><blockquote><p>“How do we keep low Earth orbit open for business for generations to come?”</p></blockquote></figure>

<p class="has-text-align-none">Practically, data centers on Earth require <a href="https://spectrum.ieee.org/ai-data-center-operator-trust">regular maintenance</a> to keep the racks of chips humming along, and trained human operators are already in <a href="https://spectrum.ieee.org/data-center-jobs">short supply</a>. Repairs of satellites in space, meanwhile, rarely <a href="https://knowablemagazine.org/content/article/technology/2022/space-robots-promise-fix-and-fuel-satellites">happen</a>. Astronauts fix <a href="https://www.space.com/space-exploration/international-space-station/astronauts-repair-black-hole-observatory-inspect-cosmic-ray-detector-on-iss-spacewalk?utm_source=chatgpt.com">telescopes and equipment</a> attached to the International Space Station or NASA’s <a href="https://science.nasa.gov/mission/hubble/observatory/missions-to-hubble/">Hubble</a> Space Telescope. The prospect of robots <a href="https://www.northropgrumman.com/what-we-do/space/satellite-services-in-space">reorienting or refueling satellites</a> in orbit is theoretically <a href="https://www.war.gov/News/News-Stories/Article/Article/3966145/naval-research-lab-completes-development-of-satellite-servicing-robotics/">possible</a> but rare.</p>

<p class="has-text-align-none">Despite earthly wariness from astronomers outside Big Tech, the popularity of space data centers is likely to continue for years and even decades. Both <a href="https://research.google/blog/exploring-a-space-based-scalable-ai-infrastructure-system-design/">Google</a> and startup <a href="https://www.theverge.com/news/841887/data-center-space-solar-power-aetherflux-lunch">Aetherflux</a> plan to launch satellites in early 2027. Starcloud plans to launch its second satellite in October 2026 and then “ramp up production in 2027, 2028,” Johnston said. He views SpaceX as Starcloud’s main competitor, despite no official mention from Musk’s company on when a space data center might be launched, only a <a href="https://x.com/elonmusk/status/1984249048107508061?s=20">post on X from Musk</a> about SpaceX “simply scaling up Starlink V3 satellites” to achieve this. Blue Origin has <a href="https://www.wsj.com/tech/bezos-and-musk-race-to-bring-data-centers-to-space-faa486ee">reportedly been working on space data centers</a> for over a year but has also not publicly commented on any plans.</p>

<p class="has-text-align-none">Constellations close to Earth present good opportunities for “trying to make life better here back on Earth,” space scientist Akhavan-Tafti said. But it needs to be done in a sustainable way: “How do we keep low Earth orbit open for business for generations to come?”&nbsp;</p>

<p class="has-text-align-none">One option? Avoid launching more stuff into orbit, according to Seth Gladstone of Food &amp; Water Watch, the <a href="https://www.theverge.com/news/840883/data-center-moratorium-letter-congress">environmental group leading a petition to halt data center construction</a>. “Why is it that Big Tech always seems to think a solution to its many Earth-bound problems is to blast more stuff into space?&#8221; </p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elissa Welle</name>
			</author>
			
			<title type="html"><![CDATA[Racks of AI chips are too damn heavy]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/844966/heavy-ai-data-center-buildout" />
			<id>https://www.theverge.com/?p=844966</id>
			<updated>2025-12-16T08:30:42-05:00</updated>
			<published>2025-12-16T08:30:00-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Analysis" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[In the span of a decade and a half, from 2010 to the end of 2024, the number of data centers in the US quadrupled. The trend is similar worldwide: more data centers, bigger, now or soon. The number of the construction projects of centers over 100 megawatts announced over the last four years total [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="An aerial view of a 33 megawatt data center (LOWER L) with closed-loop cooling system, amid warehouses on October 20, 2025 in Vernon, California. | Image: Mario Tama/Getty Images" data-portal-copyright="Image: Mario Tama/Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/12/gettyimages-2242297015.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	An aerial view of a 33 megawatt data center (LOWER L) with closed-loop cooling system, amid warehouses on October 20, 2025 in Vernon, California. | Image: Mario Tama/Getty Images	</figcaption>
</figure>
<p class="has-drop-cap has-text-align-none">In the span of a decade and a half, from 2010 to the end of 2024, the number of data centers in the US <a href="https://www.businessinsider.com/how-calculate-data-center-cost-environmental-impact-methodology-2025-6">quadrupled</a>. The <a href="https://www.iea.org/data-and-statistics/data-tools/energy-and-ai-observatory?tab=Energy+for+AI">trend is similar worldwide</a>: more data centers, bigger, now or soon. <a href="https://intelligence.uptimeinstitute.com/resource/many-giant-data-center-projects-advance-despite-risks" data-type="link" data-id="https://intelligence.uptimeinstitute.com/resource/many-giant-data-center-projects-advance-despite-risks">Construction projects</a> for centers over 100 megawatts announced over the last four years total 377, according to data center certification and research agency Uptime Institute. </p>

<p class="has-text-align-none">But before we allow Big Tech’s <a href="https://www.theverge.com/ai-artificial-intelligence/812455/ai-industry-earnings-bubble-fomo-hype">feverish race</a> toward <a href="https://www.theverge.com/ai-artificial-intelligence/782624/nvidia-is-partnering-up-with-openai-to-offer-compute-and-cash">more</a> <a href="https://www.theverge.com/ai-artificial-intelligence/784251/openai-oracle-and-softbank-announced-five-new-ai-data-centers-as-part-of-stargate">compute</a>, a race <a href="https://www.theverge.com/news/840883/data-center-moratorium-letter-congress">environmentalists would rather we halt</a>, let us pause and consider another option: making do with what we have. Can we retrofit our current data centers to match the needs of our newest technology? Perhaps the building frenzy is not merited; perhaps we have all the facilities we need. A few upgrades here, some fresh servers over there, a new lick of paint, and voilà — an AI data center built from the shell of a legacy one.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>“Most of the time what it&#8217;s going to mean is bulldozing the building and starting over from scratch.”</p></blockquote></figure>

<p class="has-text-align-none">I took this idea to data center experts, who told me, in so many words, that no, our current data centers cannot readily be retrofitted to become AI superhouses. The problem is as physical as the ground you’re standing on: Legacy data centers cannot bear the weight of the latest AI technology. The racks that house computer chips or AI chips are simply too damn heavy, and the floors would crack under the weight.</p>

<p class="has-text-align-none">Chris Brown, chief technical officer at Uptime Institute, summarized the situation: &#8220;We can retrofit the old ones to an extent, but not to the extent that a lot of these AI factories need.&#8221; Small sections of small data centers can accommodate small AI-focused workloads for a single Fortune 500 company, for example, he said. “But most of the time what it&#8217;s going to mean is bulldozing the building and starting over from scratch,” Brown said.</p>

<p class="has-text-align-none">AI racks have a weight problem. These metal cabinets house stacks of metal boxes called servers, which in turn house the chips that do conventional or generative AI processing. Thirty years ago, at the start of Brown&#8217;s career in data centers, racks averaged around 400 to 600 pounds. Think of the weight of a home refrigerator up to a baby grand piano. Now, it&#8217;s normal for racks to weigh from 1,250 pounds up to 2,500 pounds, falling in the range of a grizzly bear up to a Toyota Prius. But racks specialized for AI equipment fall at the upper end of that spectrum and beyond — Brown said the projected milestone weight of an AI rack is 5,000 pounds.&nbsp;</p>

<p class="has-text-align-none">The extra weight, Brown said, is due to the amount of electronics crammed into the metal racks. Gaps between GPUs slow data transmission, which slows AI model training, which wastes precious compute power and, ultimately, money. The latest high-density racks come packed with memory chips (leading to the <a href="https://www.theverge.com/report/839506/ram-shortage-price-increases-pc-gaming-smartphones">decline of the global supply of RAM</a>) and anywhere from <a href="https://www.theverge.com/tech/836610/aws-trainium3-ai-ultraserver">hundreds</a> to 1,000 GPUs. Whereas traditional computer chip workloads of a decade ago averaged around 10 kilowatts per rack, AI workloads are now 35 times that, up to 350 kilowatts per rack. &#8220;They&#8217;re packing as much as they can into each rack and putting racks as close together as humanly possible to maximize that capability,&#8221; he said.</p>

<p class="has-text-align-none">More power generates more heat that needs to dissipate before a fire breaks out or the chips melt. Air blown over chips has been replaced or supplemented with cooling plates full of liquid, often a watery mixture of <a href="https://www.theverge.com/the-stepback-newsletter/772845/computer-chips-forever-chemicals">toxic coolants</a>. Water weighs a little over 8 pounds per gallon. And don&#8217;t forget about the cables. There are often 10 to 35 racks lined up to form a single row in the bowels of a data center. In order to deliver enough power, the diameter of the cables, or a cable-like copper plate called a busway, needs to increase. (Imagine putting out a house fire with a kitchen sink faucet; better to spray water from a fire hose with its wide diameter.) Brown said that a modern busway weighs 37 pounds per linear foot.&nbsp;</p>

<p class="has-text-align-none">&#8220;It&#8217;s all those things — it&#8217;s the weight of all the processors, all the memory, all of the chips that you need to be able to run the IT devices, all of the cooling hardware that you need inside of there,&#8221; he said. The structure of legacy data centers is not up to the task, Brown said. Many have raised floors, which top out at around 1,250 pounds per square foot for a <em>static</em> load, he noted. Dynamic loads, he said, such as a rack pushed across the floor, require higher weight bearing.&nbsp;</p>

<p class="has-text-align-none">Even if you did reinforce the floor of an old center, other geometric problems persist, Chris McLean, president of data center construction firm Critical Facility Group, told <em>The Verge</em>. He has been designing data centers for nearly two decades, and the height of racks has grown by 3 feet during that time, from 6 feet to 9 feet. (The footprint has only increased from 2 by 2 feet to 2 by 3 feet.) The new height stands taller than industrial doorframes from a few years ago. Freight elevators, too, cannot withstand the combined weight of a gigantic rack, the apparatus it rests on while being moved, and the humans pushing the thing: &#8220;All of a sudden, you&#8217;re getting into a pretty beefy elevator for a multi-story,&#8221; McLean said.</p>

<figure class="wp-block-pullquote"><blockquote><p>&#8220;What&#8217;s caused the huge growth in the last two years is just the fact that artificial intelligence is gobbling up everything.&#8221;&nbsp;</p></blockquote></figure>

<p class="has-text-align-none">Big Tech companies are obviously building new data centers to accommodate their increasing push for AI dominance. And when OpenAI, Microsoft, or others run out of space in their own AI-built data center complexes, they rent space at colocation facilities owned by companies like CoreWeave or Digital Realty or Compass, which are, in turn, building new AI-focused data centers. &#8220;What&#8217;s caused the huge growth in the last two years is just the fact that artificial intelligence is gobbling up everything,&#8221; Brown at Uptime said.&nbsp;</p>

<p class="has-text-align-none">Despite the hype, generative AI is not the only type of computation, lest we forget that typical computer workloads still exist. In fact, non-AI data workloads are increasing, Brown said. Traditional data centers, therefore, are as important as ever. Universities, hospitals, midsize companies, and municipalities will all need to continue to store their non-AI data files, just as you still have your blurry photos housed on some cloud provider’s server, McLean said. &#8220;All those people still need that legacy data center environment,” he said. “It&#8217;s never going to go away.&#8221; </p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Elissa Welle</name>
			</author>
			
			<title type="html"><![CDATA[OpenAI just made another circular deal]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/news/835453/openai-ownership-thrive-holdings" />
			<id>https://www.theverge.com/?p=835453</id>
			<updated>2025-12-02T09:44:39-05:00</updated>
			<published>2025-12-01T18:12:24-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[OpenAI announced an ownership stake in the private equity investment firm Thrive Holdings, a company created by Thrive Capital, which is one of the main investors in, you guessed it, OpenAI. While OpenAI did not spend money on the ownership stake, according to an anonymous source cited by The Financial Times, the company announced it [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/10/STK155_OPEN_AI_CVirginia__C.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">OpenAI <a href="https://openai.com/index/thrive-holdings/">announced</a> an ownership stake in the private equity investment firm Thrive Holdings, a company created by Thrive Capital, which is one of the <a href="https://www.theverge.com/2024/10/2/24260457/openai-funding-round-thrive-capital-6-billion">main investors</a> in, you guessed it, OpenAI. While OpenAI did not spend money on the ownership stake, according to an anonymous source cited by <a href="https://www.ft.com/content/53e2003e-c5c0-42a1-937a-eaea77ac4d41"><em>The</em> <em>Financial Times</em></a>, the company announced it would provide Thrive Holdings’ companies with employees, models, products, and services. </p>

<p class="has-text-align-none">OpenAI may also get payouts from Thrive Holdings’ future returns, the <em>FT</em> wrote, citing its anonymous source. It’s the latest circular deal in an industry known for <a href="https://www.theverge.com/ai-artificial-intelligence/812455/ai-industry-earnings-bubble-fomo-hype">running on FOMO</a> and <a href="https://www.theverge.com/news/792650/amd-openai-five-year-ai-chip-agreement">handing money back-and-forth</a> between a small group of companies.&nbsp;</p>

<p class="has-text-align-none">The partnership will focus on the two sectors at the top of Thrive Holdings’ priority list: IT services and accounting. It’s in these “high-volume, rules-driven, workflow-heavy processes where OpenAI&#8217;s platform can drive immediate benefits,” according to the announcement release. The stated goal is to use AI to “boost speed, accuracy, and cost efficiency while strengthening service quality.”</p>

<p class="has-text-align-none">Joshua Kushner, CEO of both Thrive Holdings and Thrive Capital, and the younger brother of President Trump’s son-in-law, Jared Kushner, said AI is unlike past technologies that have changed industries “from the outside in.” “We believe this paradigm shift will happen from the inside out as domain experts and practitioners use AI as a native tool to reshape their fields,” Kushner said. Trump himself is a staunch booster of AI, and officials in his administration —&nbsp;<a href="https://www.theverge.com/ai-artificial-intelligence/829179/david-sacks-ai-executive-order">like David Sacks</a> — stand to benefit from the industry’s growth.</p>

<p class="has-text-align-none">Thrive Holdings’ purpose for acquiring IT services and accounting companies is to “transform them using AI,” the<em> FT</em> wrote. As part of the deal with OpenAI, the startup will get access to data from Thrive Holdings’ companies for AI model training. There are two potential advantages for OpenAI there: one, the possibility that it can be shoehorned into companies in Thrive Holdings’ portfolio, and two, a rich new source of training data. OpenAI wants to work more broadly with the private equity industry, the anonymous source told the <em>FT</em>. Someone close to Thrive Capital said that OpenAI would be working as the equity group’s “research arm.”</p>

<p class="has-text-align-none">The Thrive deal may be the first of a new wave of similar agreements, said OpenAI’s COO Brad Lightcap.&nbsp;</p>

<p class="has-text-align-none"><strong><em>Correction, December 2nd: </em></strong><em>An earlier version of this article misstated the nature of Thrive Capital&#8217;s relationship to Thrive Holdings. Thrive Capital created Thrive Holdings, but is not its parent company.</em></p>
						]]>
									</content>
			
					</entry>
	</feed>
