The “New Home” commercial features nostalgic piano music and a heartfelt voiceover of a mother and son envisioning their new house with some help from Gemini. Notably, it steers clear of fact-focused prompts like the Gouda cheese stat Gemini got wrong in one of last year’s Super Bowl ads.
AI
Artificial intelligence is more a part of our lives than ever before. While some might call it hype and compare it to NFTs or 3D TVs, generative AI is causing a sea change in nearly every part of the technology industry. OpenAI’s ChatGPT is still the best-known AI chatbot around, but with Google pushing Gemini, Microsoft building Copilot, and Apple adding its Intelligence to Siri, AI is probably going to be in the spotlight for a very long time. At The Verge, we’re exploring what might be possible with AI — and a lot of the bad stuff AI does, too.


Crypto.com CEO Kris Marszalek is getting in on the Super Bowl AI commercial break craze by launching his new AI.com website during the game. A press release describes the site as a way for users to “generate a private, personal AI agent that doesn’t just answer questions, but actually operates on the user’s behalf.”


Hemsworth’s latest action scene isn’t in an Avengers movie, but a stand-off with Amazon’s AI assistant, which he fears is planning elaborate ways to kill him. Maybe Ultron is still fresh in the Thor actor’s mind. It’s far from the only ad for AI in this year’s Super Bowl.

The emails show the “anti-woke” crusaders are afraid of accountability.
GPT‑5.3‑Codex, OpenAI’s new coding and development model, is apparently the first “that was instrumental in creating itself.” No, that probably doesn’t mean ChatGPT is ready to build its own Skynet, but it can help with debugging and testing:
“The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations—our team was blown away by how much Codex was able to accelerate its own development.”
ChatGPT users who create designs via the Canva app can now connect to their Canva Brand Kits, allowing designs to draw from on-brand colors and assets. Anthropic’s good week continues, though: Claude got the same Canva Brand Kit feature first.
After rolling out account verification for brands and individual users, Reddit CEO Steve Huffman writes in a letter to shareholders that the platform is trying to make it easier to identify bots, too.
In the age of AI, if you can’t easily distinguish a real person’s thoughts or recommendations from a bot, that trust erodes. That’s why we’re actively working on ways to preserve our authenticity and conversation quality.

“Now you’re just like, ‘Here’s the magic castle. Build it.’ And it gets done.”



Why you can’t label your way into consensus reality amid the AI deepfake apocalypse.
Maybe combining Musk’s companies is really about space AI data centers. But reports from Bloomberg and the Wall Street Journal indicate that SpaceX’s IPO pursuit includes a push to have major index providers find a way around the usual waiting periods before they’ll add newly listed companies.




“Ads are coming to AI. But not to Claude,” is a clear shot at OpenAI’s decision to bring ads to ChatGPT, though the campaign never mentions OpenAI by name. There are four commercials in the campaign, with one trimmed to thirty seconds to air during the Super Bowl at a cost of around $8 million.
Karl Slatoff, responding to a question about Google’s Project Genie AI tool on an earnings call today:
Genie is not a game engine. It’s very exciting technology. I think the question is: how can it benefit our creators? I think that there will be a moment in time where that will become more defined. It certainly doesn’t replace the creative process.
Shares of Take-Two dipped the day after Google announced Project Genie last week.
OpenAI’s new “head of preparedness,” Dylan Scandinaro, came from an AGI safety role at the company’s chief competitor. “AI is advancing rapidly,” he wrote in a post on X. “The potential benefits are great—and so are the risks of extreme and even irrecoverable harm. There’s a lot of work to do, and not much time to do it!”

SpaceX is profitable, while xAI is burning about $1 billion a month. Is this another case of Musk bailing himself out?
In the middle of a Forbes profile of Altman’s journey through the AI world — which is just astoundingly chaotic when you see it all laid out in a row — Altman says that “we basically have built AGI, or very close to it.” Which, uh, okay! But then he changes his mind, sort of. From the story:
A few days later, Altman dials things back. “I meant that as a spiritual statement, not a literal one,” he says. Achieving AGI, he concedes, will require “a lot of medium-sized breakthroughs. I don’t think we need a big one.”
Grasshopper Manufacture’s Suda51 discussing generative AI with Eurogamer:
For me personally, a lot of the AI stuff I see pops up on social media. As far as it’s come, there’s something about the images and videos you see that feels off. Most people have that same kind of sense, something psychological lets you know something isn’t right here. Something’s kinda funky.



Things got even weirder on Moltbook, the viral Reddit-style platform, over the weekend.
I used to compare Elon Musk to an old boss of mine who would spin up a company division every time he found a new hobby, but this might be just as apt:
ElectricOrchestra613:
Elon Musk’s constant new ventures and subsequent mergers just feels like the corporate equivalent of creating a new email every time you want to sign up for a free trial.
Get the day’s best comment and more in my free newsletter, The Verge Daily.
DOJ documents released last week contain a $3 million funding request from roboticist David Hanson to build an “attractive female android,” complete with a “working gorgeous robot face and body.” The proposal includes a rough sketch of the “gynoid,” noting “the final design will be done collaboratively with you.”


404 Media reports that security researcher Jamieson O’Reilly found a vulnerability that allows humans to control OpenClaw’s AI agents on Moltbook — the network that recently went viral for hosting “discussions” between supposed AI bots.
Wiz dug into the misconfiguration as well, uncovering 1.5 million exposed API keys and 35,000 email addresses. Moltbook has since secured the database.

Docusign’s Allan Thygesen says ‘not providing an AI service isn’t really an option.’