Adi Robertson

Senior Editor, Tech & Policy


The Nation interviewed Graham Granger, the student arrested for consuming part of an exhibit of AI-generated art — although apparently he didn’t eat everything he chewed up.
Do you have any qualms about the fact that AI art is made by scraping other artists?
Yeah, I mean, that’s part of why I spat it out, because AI chews up and spits out art made by other people.



Coming into force this year: AI regulations galore, a teen social media lockdown, and “Taylor Swift” laws.
Hell Gate investigates a new NYC phenomenon: PureGym’s tubelike automated “entry pods”.
“What the fuck? I literally don’t want to do this,” one man remarked, before putting his whole body into the tube. “It’s like a bad sci-fi show, it’s ridiculous!” another man exclaimed, while—you guessed it—he succumbed to the tube.
Cool update to last week’s story on why language doesn’t equal intelligence: a Michigan judge cited it to justify imposing sanctions over a ChatGPT-assisted filing that mentioned real cases but misstated their facts. Congrats to author Benjamin Riley, and thanks to folks who pointed it out on X and Bluesky!




ProPublica writes that Paul “Nazi Streak” Ingrassia, the White House’s Department of Homeland Security liaison, told customs officials to return devices they’d seized from Andrew Tate, the influencer and alleged rapist and sex trafficker, possibly hindering an investigation and alarming DHS officials, who described the act as “handing out favors” to Tate.
![LLMs are tools that “emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning.” Benjamin Riley, Large language mistake, The Verge https://thevergetoday.pages.dev/aiartificial-intelligence/827820/large-language-models-ai-intelligenceneuroscience-problems [https://perma.cc/7EHD-PLLZ]. When an LLM overstates a holding of a case, it is not because it made a mistake when logically working through how that case might represent a “nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law;” it is just piecing together a plausible-looking sentence – one whose content may or may not be true](https://platform.theverge.com/wp-content/uploads/sites/2/2025/12/Screenshot-2025-12-04-at-8.35.16%E2%80%AFAM.png?quality=90&strip=all&crop=0%2C0%2C100%2C100&w=2400)