While testifying on Thursday in federal court, Elon Musk seemed to indicate that his AI lab may have used OpenAI’s models to train xAI’s own. He touched on the topic from the witness stand while answering cross-examination questions from an OpenAI attorney, part of his ongoing legal battle against the ChatGPT maker.
This is the exchange, as best as WIRED could capture it:
OpenAI Lawyer William Savitt: Do you know what distillation is?
Musk: It means to use one AI model to train another AI model.
Savitt: Has xAI done that with OpenAI?
Musk: Generally all the AI companies [do that].
Savitt: So that’s a yes.
Musk: Partly.
Distillation is a technique where a smaller AI model is trained to mimic the behavior of a larger, more capable model, making it cheaper and faster to run while preserving much of its performance.
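To make the idea concrete, here is a minimal sketch of how distillation works in principle. It is purely illustrative and not connected to any company's actual training pipeline: a fixed "teacher" produces a softened probability distribution, and a "student" is nudged toward it by gradient descent on the KL divergence. All names and numbers are invented for the example.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens them."""
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

# Hypothetical "teacher" model: fixed logits over 3 classes.
teacher_logits = np.array([4.0, 1.0, 0.5])

T = 2.0  # distillation temperature
soft_targets = softmax(teacher_logits, T)

# "Student": starts uninformed, then learns to mimic the teacher's
# soft targets via gradient descent on KL(soft_targets || student).
student_logits = np.zeros(3)
for _ in range(500):
    p = softmax(student_logits, T)
    grad = (p - soft_targets) / T  # gradient of the KL loss w.r.t. logits
    student_logits -= 1.0 * grad

# After training, the student's softened distribution matches the teacher's.
print(np.allclose(softmax(student_logits, T), soft_targets, atol=1e-3))
```

Real distillation applies this same matching objective across millions of examples and model parameters, but the core mechanism (training on a teacher's soft outputs rather than hard labels) is the same.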
Savitt then pressed further:
Savitt: Has OpenAI technology been used in any way to develop xAI?
Musk: It is standard practice to use other AIs to validate your AI.
OpenAI and xAI did not immediately respond to WIRED’s request for comment.
OpenAI has been trying to prevent its competitors, in particular the Chinese AI lab DeepSeek, from distilling its AI models. In a February 2026 memo to a House committee, OpenAI wrote that it has “taken steps to protect and harden our models against distillation.” In that memo, OpenAI said it was focused on ensuring a level playing field in which “China can’t advance autocratic AI by appropriating and repackaging American innovation.”
The Trump administration has also taken steps to prevent Chinese companies from distilling American AI models. Michael Kratsios, director of the White House Office of Science and Technology Policy, said in an April 2026 memo that the administration would share information with US AI companies about foreign distillation. Kratsios said in a post on X that the “US government is committed to the free and fair development of AI technologies across a competitive ecosystem.”
American AI labs have used each other’s AI models in other ways, to test progress and assess safety. But in today’s competitive landscape, some AI companies have completely cut off rival labs. In August 2025, Anthropic blocked OpenAI’s access to its Claude coding models after alleging that OpenAI had violated its terms of service. More recently, Anthropic cut off xAI from using its AI models for coding as well.
In his multiday cross-examination of Musk, Savitt has questioned him about his attempts to assume control of OpenAI and, subsequently, his quest to beat the ChatGPT maker. On Wednesday, Savitt presented emails and texts from 2017 to support a line of questioning about whether Musk squeezed OpenAI by withholding funding and hiring away key researchers.