Overview
- Testifying Thursday in a California federal court, Elon Musk said xAI "partly" used OpenAI models through distillation to help train its Grok chatbot.
- Distillation uses a larger "teacher" model’s answers to train a smaller "student" model, which can cut costs and speed up training while copying much of the teacher’s behavior.
- OpenAI’s terms bar using its outputs to train competing models, raising the prospect that such use could violate contracts or lead to account bans and loss of access.
- AI labs and U.S. officials have moved to curb large‑scale extraction of model outputs, with OpenAI telling Congress in February that it had hardened its models against distillation and the White House pledging in April to share information on foreign distillation efforts.
- The admission surfaced during Musk’s ongoing lawsuit accusing OpenAI of abandoning its nonprofit mission, placing cross‑lab distillation at the center of an industry and policy fight that has already drawn accusations against firms like DeepSeek.
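To make the "teacher/student" mechanics concrete, here is a minimal sketch of the core distillation training signal: a student model is pushed to match the teacher's temperature‑softened output distribution via a KL‑divergence loss. All numbers, function names, and the three‑class setup are illustrative assumptions, not details from the testimony or from any lab's actual pipeline.

```python
import math

def softmax(logits, temperature):
    # Soften logits with a temperature > 1 so the teacher's relative
    # preferences among wrong answers (its "dark knowledge") are
    # exposed, not just its single top answer.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions;
    # minimizing this drives the student toward the teacher's behavior.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits for one example with three answer classes.
teacher = [4.0, 1.0, 0.5]
student = [2.5, 1.5, 1.0]
print(distillation_loss(teacher, student))
```

In a real training loop this loss (often mixed with a standard cross‑entropy term on ground‑truth labels) is backpropagated through the student only; the teacher's outputs are fixed targets, which is why access to a rival lab's model outputs is enough to distill from it.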