A new survey suggests that AI will only help the rich get richer. “The rhetoric out there is that the tools are going to be democratizing. But the reality is that . . . you require a certain degree of education, abstract and quantitative skills, familiarity with computers and coding in order to be using the models,” said MIT professor Daron Acemoglu, who’s also a Nobel laureate in economics. “AI is going to increase inequality between labour and capital. That is almost for sure. I would say it is setting us up for a . . . shitshow.”
AI
Artificial intelligence is more a part of our lives than ever before. While some might call it hype and compare it to NFTs or 3D TVs, generative AI is causing a sea change in nearly every part of the technology industry. OpenAI’s ChatGPT is still the best-known AI chatbot around, but with Google pushing Gemini, Microsoft building Copilot, and Apple adding its Intelligence to Siri, AI is probably going to be in the spotlight for a very long time. At The Verge, we’re exploring what might be possible with AI — and a lot of the bad stuff AI does, too.

“The first big stumble will have everyone running for the exits.”

The new workspace agents can autonomously perform tasks in the cloud, like reporting on product feedback.
Latest In AI
That’s “up from 50% last fall,” according to a blog post from Google CEO Sundar Pichai. Google recently created a “strike team” to improve its AI models’ coding capabilities and catch up to Anthropic, which as of February writes 70 to 90 percent of its code with Claude Code.
After announcing in September that it was working with industry group DDEX on a standard for disclosing when AI is used in a song, AI credits are launching with DistroKid as the first partner. Unfortunately, even if the rest of the industry gets on board, voluntary labels likely won’t be enough as AI uploads threaten to outnumber human ones.
Per 9to5Mac, Google Cloud CEO Thomas Kurian was excited to boast about Gemini’s big new customer. The upgraded Siri is still coming “later this year.”
Kennedy’s remarks come from congressional hearings today. He claims AI, while “very dangerous,” has the opportunity to “develop new drugs and personalized medicine for every citizen.” Please, a moment of silence for my sanity.
Sullivan &amp; Cromwell, the law firm representing President Trump in many of his cases and which handled the SpaceX and xAI merger, was just forced to apologize to a federal judge for filing documents full of fake case citations hallucinated by AI. The list of errors ran three pages long, the NYT reports. It’s just the latest instance of the legal profession forgetting that language is not actually intelligence.
[The New York Times]


A new security feature in Chrome Enterprise can help businesses detect and combat “anomalous” activity by AI-powered agents within compromised extensions or online services. Google is rolling out its AI auto browse feature to enterprise customers as well, which can perform multi-step tasks in Chrome on your behalf.


The feature, which first arrived for AI Pro and Ultra subscribers in January, lets you use Gmail’s search bar to ask questions about what’s in your inbox. Gmail will then provide an AI-generated summary that draws from the information in your emails.
New AI tools can unlock insights from aerial and satellite images or anchor “imaginative scenes in the real world,” Google says. Pretty niche, but probably useful for urban planners, or putting spaceships in front of New York landmarks.
Anthropic’s cybersecurity-focused AI model found 271 bugs in Firefox 150, Mozilla CTO Bobby Holley said, calling Claude Mythos Preview “every bit as capable” as top security researchers. Reassuringly, Mozilla hasn’t “seen any bugs that couldn’t have been found by an elite human researcher,” either.


I wrote about that — and other Catholic concerns — at my friend Rusty’s newsletter while he took the day off.
[Today in Tabs]


Canva CEO Melanie Perkins dodged this question with a lot of charm and verve, but I wonder if the answer is quickly going from Adobe to Anthropic. More on this week’s Decoder!
When Google launched Gemini for Home, it put one key feature behind a paywall. Continued Conversation became available only on Gemini Live, which required Google Home Premium.
Starting today, users in Early Access can once again ask follow-up questions to Google’s voice assistant on their Google Home devices without saying “Hey Google” every time, and without paying. Another bonus is that the feature now works with all supported languages and in all regions.
[Google Nest Community Blog]

Does Tim Cook’s newly announced successor have what it takes to regain the company’s lost ground in the AI race?
During a television interview with CNBC, he said Anthropic, which has been enmeshed in a dramatic lawsuit with the Department of Defense, had a positive meeting at the White House. Anthropic had come to discuss Mythos, its buzzy private model. “We had some very good talks with them, and I think they’re shaping up,” he said. “They’re very smart, and I think they can be of great use.”

Inventing the future requires a future people want.
According to The Information, the Google co-founder said in a memo to DeepMind employees that “every Gemini engineer must be forced to use internal agents for complex, multistep tasks.”
Anthropic’s tools have been leading the AI coding race, and Brin apparently sees catching them as a step toward building AI that can improve itself.
[The Information]
Project Luna — a round screen with a swiveling head that reminds me of Samsung’s “AI OLED Turntable” — offers a glimpse of what’s to come from Samsung’s design, according to chief design officer Mauro Porcini. Samsung teased the bot in a YouTube Short, and now Fast Company has some exclusive details.
Sources told Axios that the agency was among the roughly 40 organizations granted access. This, despite the Pentagon arguing that Anthropic is a threat to national security. The NSA has reportedly been using it primarily to identify vulnerabilities in its own network, but considering its track record, it’s understandable if you’re wary.




The New York Times has found hundreds of fake accounts on Instagram, TikTok, and Facebook that appear to be part of a pre-midterm push to get conservative voters to the polls in support of Trump’s agenda. The accounts often use the same captions and awkward phrasing.
It’s not clear who created the A.I. accounts, and determining whether they are the product of a hired content farm, a foreign influence operation, an experiment or something else is difficult, experts said. They all agree, however, that creating such avatars is becoming easier, especially for contractors and marketing companies that now specialize in developing and dispatching A.I. avatars in bulk for increasingly low prices.
[New York Times]





Panic detailed the changes in a policy that went into effect this month. For now, however, Panic will allow Catalog titles that “have used AI assistance in the coding process,” but those games will be flagged as such.
For its own games, Panic cofounder Cabel Sasser recently told The Verge that the company does not “have any interest in generative AI-created products.”
[Playdate]