Elon Musk confirms xAI used OpenAI’s models to train Grok

He said it was “partly” true that the company had used model distillation to improve xAI’s models.

Image: Cath Virginia / The Verge, Getty Images
Hayden Field is The Verge’s senior AI reporter. She has covered the AI beat for more than five years, and her work has also appeared in CNBC, MIT Technology Review, Wired UK, and other outlets.

In a federal courtroom in California on Thursday, Elon Musk testified that his own AI startup, xAI, has used OpenAI’s models to improve its own.

The matter in question is model distillation, a common industry practice in which a larger AI model acts as a “teacher” of sorts, passing its knowledge on to a smaller “student” model. Companies often use it legitimately, training one of their own models with another, but smaller AI labs have also used it to make their models mimic the performance of a larger competitor’s model.
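For the curious, here’s a minimal sketch of what teacher-student distillation can look like in code, assuming a PyTorch-style training loop. The toy models, temperature, and loss weighting below are illustrative stand-ins, not a description of any lab’s actual pipeline:

```python
# Minimal teacher-student distillation sketch (illustrative, not any lab's pipeline).
import torch
import torch.nn as nn
import torch.nn.functional as F

TEMPERATURE = 2.0  # softens the teacher's output distribution (assumed value)
ALPHA = 0.5        # balance between soft (teacher) and hard (label) loss (assumed)

# Stand-in "teacher" (large) and "student" (small) models.
teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(inputs, labels):
    with torch.no_grad():              # the teacher is frozen; only the student learns
        teacher_logits = teacher(inputs)
    student_logits = student(inputs)

    # Soft targets: the student mimics the teacher's full output distribution.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / TEMPERATURE, dim=-1),
        F.softmax(teacher_logits / TEMPERATURE, dim=-1),
        reduction="batchmean",
    ) * (TEMPERATURE ** 2)

    # Hard targets: the student still learns from ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    loss = ALPHA * soft_loss + (1 - ALPHA) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One step on random stand-in data.
x = torch.randn(32, 128)
y = torch.randint(0, 10, (32,))
print(distillation_step(x, y))
```

When the “teacher” is a competitor’s model behind an API, the student typically trains on the teacher’s text outputs rather than its raw logits, which is part of what makes the practice hard to detect and contentious under providers’ terms of service.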

Asked on the stand whether he knew what model distillation was, Musk said it means using one AI model to train another. When asked whether xAI has distilled OpenAI’s technology, Musk seemed to avoid the question, saying that “generally all the AI companies” do such a thing. And when asked if that was a yes, he said, “Partly.”

When pressed, Musk said, “It is standard practice to use other AIs to validate your AI.”

Model distillation has been on the rise in recent years and has incited growing controversy among AI labs, since the line between what’s legal and what violates a company’s terms or policies often falls within a gray area. Companies like OpenAI and Anthropic have accused Chinese firms of distilling their models: OpenAI has publicly stated its concerns about DeepSeek, and Anthropic has specifically named DeepSeek, Moonshot, and MiniMax. Google has also taken steps to prevent what it calls “distillation attacks,” which it describes as “a method of intellectual property theft that violates Google’s terms of service.”

In Anthropic’s own blog post on the matter, the company wrote, “Distillation is a widely used and legitimate training method. For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers. But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”
