AI – Breaking News & Latest Updates 2026

AI

Artificial intelligence is more a part of our lives than ever before. While some might call it hype and compare it to NFTs or 3D TVs, generative AI is causing a sea change in nearly every part of the technology industry. OpenAI’s ChatGPT is still the best-known AI chatbot around, but with Google pushing Gemini, Microsoft building Copilot, and Apple adding its Intelligence to Siri, AI is probably going to be in the spotlight for a very long time. At The Verge, we’re exploring what might be possible with AI — and a lot of the bad stuff AI does, too.

Terrence O'Brien
The US used Anthropic AI for strikes in Iran despite ban.

On Friday, Donald Trump announced a ban on the federal government’s use of Claude, though he walked back his demand that agencies “IMMEDIATELY CEASE” using it, saying instead that there would be a six-month phaseout. Part of that might be because planning for Saturday’s strikes against Iran was already underway and relied on Claude for intelligence assessments and target identification. According to the Wall Street Journal:

Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools.

Terrence O'Brien
A former Trump advisor calls the fight with Anthropic “attempted corporate murder.”

Dean Ball, who worked as a senior AI policy advisor, said on X that designating Anthropic as a “supply chain risk” or threatening to invoke the Defense Production Act could have a chilling effect on the entire industry. Alan Rozenshtein, a former DOJ official specializing in technology law, told Politico this could be the first step toward partial nationalization of the AI industry.

Hayden Field
OpenAI reached a new agreement with the Pentagon.

CEO Sam Altman wrote on X that the agreement allowed the US military to “deploy our models in their classified network.” He said the agreement reflects OpenAI’s desire for prohibitions on domestic mass surveillance and “human responsibility for the use of force, including for autonomous weapon systems.” Altman also wrote that OpenAI is “asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept.” This follows a rollercoaster week of negotiations between Anthropic and the Pentagon.

Sam Altman’s post

[X (formerly Twitter)]

Trump orders federal agencies to drop Anthropic’s AI
Hayden Field and Richard Lawler
Hayden Field
Even Ilya Sutskever weighed in on the Anthropic-Pentagon situation.

The OpenAI co-founder, who left the company after CEO Sam Altman’s ouster and reinstatement to start his own AI venture, Safe Superintelligence, posted on X:

It’s extremely good that Anthropic has not backed down, and it’s significant that OpenAI has taken a similar stance.

In the future, there will be much more challenging situations of this nature, and it will be critical for the relevant leaders to rise up to the occasion, for fierce competitors to put their differences aside. Good to see that happen today.

We don’t have to have unsupervised killer robots

AI companies could stand together to draw red lines on military AI — why aren’t they?

Hayden Field
Thomas Ricker
OpenAI would have alerted police to Canadian shooter if account was discovered today.

That’s the takeaway from Altman and Co.’s new safety protocols governing when to involve law enforcement. The changes come after a fatal school shooting in Tumbler Ridge, British Columbia, this month that killed eight and left dozens injured.

The suspect’s interactions with ChatGPT suggested the possibility of real-world violence and led OpenAI to shut down the account, but not notify police.

Stevie Bonifield
Google Translate is using Gemini AI to offer alternative translations based on context.

In December, Google Translate started using Gemini AI to improve translations of colloquial phrases and idioms that might not make sense in other languages, and now the Translate app also offers alternative phrasings on iOS and Android.

Users can also ask the AI for more context with new “Understand” and “Ask” buttons.

A screenshot of Google Translate with a new “show alternatives” button.
Image: Google
Terrence O'Brien
Qobuz is automatically detecting and labeling AI music now, too.

Deezer started labeling AI content last year, and now Qobuz is doing the same. It’s also adopting an AI charter promising that “The heart of Qobuz is and will remain human,” saying its curation and editorial work won’t be AI-driven. It stopped short of banning AI content outright, which some customers have been asking for.

Tina Nguyen
The Pentagon is making moves.

In what appears to be preparation to fully blacklist Anthropic for not budging on its acceptable use policy, the Defense Department has begun reaching out to contractors to assess their exposure to the AI company’s products. Boeing and Lockheed Martin, two of the biggest companies in the defense space, have reportedly been contacted.

Jess Weatherbed
YouTube’s algorithm loves feeding AI slop to kids.

After watching popular children’s channels like CoComelon, Bluey, or Ms. Rachel, The New York Times found that more than 40 percent of Shorts recommended by the platform “appeared to contain AI-generated visuals.” YouTube doesn’t require animated AI videos for children to be labeled, placing all moderation burdens on parents instead.

Dominic Preston
The new new.

David Luan, head of Amazon’s San Francisco AGI lab, is leaving to work on something new. Newer than AGI, that is, which is saying something considering that’s still nonexistent.

poliwhirl08:

Wait, the guy in charge of trying to reach AGI is leaving to “cook up something new”??? AGI doesn’t exist, he was literally already trying to cook up something new.

Get the day’s best comment and more in my free newsletter, The Verge Daily.

Stevie Bonifield
Perplexity’s “Computer” is full of AI agents.

Perplexity says the new platform “reasons, delegates, searches, builds, remembers, codes, and delivers” across AI sub-agents, creating what it calls a “general-purpose digital worker” that exists somewhere between OpenClaw and Claude Cowork.

Jay Peters
Google is expanding what its Flow AI tool can do.

Flow launched as a way to make AI-generated videos, but the company is bringing its experimental Whisk and ImageFX AI image generator tools into the app to let users “generate, edit and animate everything in one unified workspace.”

Does Anthropic think Claude is alive? Define ‘alive’

Anthropic calls its chatbot ‘a new kind of entity’ that might be conscious — and it’s opening a huge can of worms.

Hayden Field
Justine Calma
Trump says he told tech companies “they have the obligation to provide for their own power needs.”

During his State of the Union speech, Trump claimed to have negotiated a “ratepayer protection pledge” to keep data centers from raising utility bills for other customers. He didn’t say which companies are involved or what commitments they’ve made.

A draft pledge obtained by Politico earlier this month describes a voluntary pact to cover the costs of new energy infrastructure.

Lauren Feiner
Trump plugs FLOTUS’ work on AI at State of the Union.

The president shouted out the work of his “movie star” wife Melania, whose support of the Take It Down Act last year — a bill requiring social platforms to remove content reported as nonconsensual intimate imagery (including AI deepfakes) — helped usher it into law.

Stevie Bonifield
Google is ‘deeply sorry’ for push notification that included a racial slur.

The notification was related to an incident at the BAFTAs on Sunday, as Variety reports. “We’ve removed the offensive notification,” Google spokesperson Brianna Duff said in a statement to The Verge, adding that the error didn’t involve AI, but rather that push notification systems intended to “accurately characterize” content from the web “accidentally” included the slur.

Hayden Field
OpenAI has a new chief people officer.

Arvind KC, who was formerly Roblox’s chief people and systems officer, has also held senior roles at Google, Palantir, and Meta, according to OpenAI. He replaces the company’s former chief people officer, Julia Villagra, who departed in August 2025 after less than six months in the role. The post had since been vacant.