Elon Musk Testifies xAI Used OpenAI Models in Training Grok

The Verge

Key Points

  • Elon Musk testified in a California federal court that xAI used OpenAI models to train its Grok chatbot.
  • Musk described model distillation as a common industry practice, answering “partially” when asked if xAI directly distilled OpenAI technology.
  • Model distillation involves a larger AI model teaching a smaller one, improving performance while reducing resource needs.
  • OpenAI, Anthropic and Google have warned that distillation can be abused to steal intellectual property.
  • Anthropic cited Chinese firms DeepSeek, Moonshot and MiniMax as examples of alleged illicit distillation.
  • Google is implementing measures to block “distillation attacks,” labeling them as IP theft.
  • The testimony highlights the lack of clear legal standards governing AI model sharing.

In a California federal courtroom on Thursday, Elon Musk told a judge that his AI startup xAI employed OpenAI’s models to develop its own system, Grok, through a practice known as model distillation. Musk said the technique is common across the industry, answering “partially” when asked if xAI directly distilled OpenAI technology. The testimony highlights a growing debate over the legality and ethics of AI model sharing, with companies like OpenAI, Anthropic and Google warning of potential intellectual‑property violations.

Elon Musk took the stand Thursday in a federal courtroom in California and confirmed that his artificial‑intelligence venture, xAI, has used OpenAI’s models as part of the training process for its own chatbot, Grok. The discussion centered on model distillation, a method in which a larger, more capable AI model serves as a “teacher” to a smaller “student” model, transferring its knowledge so the student performs better while requiring fewer resources.
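To make the teacher/student idea concrete: in its textbook form, distillation trains the student to match the teacher's *softened* output distribution rather than hard labels. The sketch below is purely illustrative (it is not xAI's or OpenAI's actual pipeline, and the function names are invented for this example); it shows the standard temperature-scaled softmax and the KL-divergence term at the heart of the distillation objective.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher T produces a 'softer' distribution,
    exposing more of the teacher's relative preferences between classes."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the
    student's: zero when the student matches the teacher exactly, and
    positive otherwise. Minimizing this transfers the teacher's behavior."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

When the student's logits equal the teacher's, the loss is zero; any divergence yields a positive penalty, which a training loop would backpropagate through the student. Real systems apply this per token over large query batches, which is why labs worry that repeated querying of a frontier model can reproduce much of its capability.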

When the judge asked Musk if he understood the practice, the billionaire replied that it involves using one AI to train another. Pressed about whether xAI had specifically distilled OpenAI’s technology, Musk sidestepped a direct yes or no, noting that “generally all the AI companies” engage in such activity and adding, “Partly.” He later clarified, “It is standard practice to use other AIs to validate your AI.”

The admission comes amid increasing scrutiny of model distillation. Industry observers say the technique walks a fine line between legitimate efficiency‑boosting and potential intellectual‑property theft. OpenAI and Anthropic have publicly accused Chinese firms of distilling their models, naming companies such as DeepSeek, Moonshot and MiniMax. Google has also taken steps to block what it calls “distillation attacks,” describing them as a form of IP theft that breaches its terms of service.

Anthropic’s blog acknowledges the dual nature of the practice. The company wrote that while “frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers,” competitors could also use distillation to acquire powerful capabilities at a fraction of the development cost and time. The controversy has sparked calls for clearer legal guidelines and industry standards.

Musk’s testimony does not resolve whether xAI’s use of OpenAI models violates any agreements, but it underscores how pervasive the practice has become. As AI applications proliferate, the debate over what constitutes fair use versus theft is likely to intensify, prompting regulators and tech leaders to grapple with the evolving landscape of artificial‑intelligence development.
