Overview
- OpenAI, Anthropic, and Google are sharing information through the Frontier Model Forum to detect adversarial distillation attempts, according to reporting based on people familiar with the effort.
- Adversarial distillation uses one model's outputs to train a cheaper copy that mimics its capabilities; the companies say the practice violates their terms of service.
- In 2025, DeepSeek’s R1 release triggered Microsoft and OpenAI investigations into whether data from U.S. models had been improperly extracted to help build it.
- Anthropic previously blocked some China‑linked access and accused DeepSeek, Moonshot, and MiniMax of trying to extract its models' capabilities through distillation.
- Times Now, citing Bloomberg, reports that U.S. officials estimate unauthorized distillation costs the labs billions in annual profit, while antitrust uncertainty still limits how much the firms can share.
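The distillation mechanism described above (training a cheap "student" copy purely from another model's outputs, without access to its weights) can be illustrated with a toy sketch. This is a minimal, hypothetical example, not any lab's actual pipeline: the "teacher" here is a one-parameter logistic model whose weights stand in for a proprietary model, and all names and numbers are illustrative.

```python
import math
import random

# Assumed hidden teacher parameters -- the student never reads these
# directly; it only observes input/output pairs, mimicking how a
# distillation attack queries a deployed model's API.
TEACHER_W, TEACHER_B = 2.0, -1.0

def teacher(x):
    """Black-box 'teacher': returns a probability for input x."""
    return 1.0 / (1.0 + math.exp(-(TEACHER_W * x + TEACHER_B)))

def distill(num_queries=200, epochs=1000, lr=0.5, seed=0):
    rng = random.Random(seed)
    # Step 1: query the teacher to collect soft labels (the outputs
    # that a distillation attempt would harvest).
    data = [(x, teacher(x))
            for x in (rng.uniform(-3, 3) for _ in range(num_queries))]
    # Step 2: fit a student of the same form to the teacher's outputs,
    # minimizing cross-entropy against the soft targets by gradient descent.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, t in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - t) * x
            gb += (p - t)
        w -= lr * gw / len(data)
        b -= lr * gb / len(data)
    return w, b

w, b = distill()
print(w, b)  # the student's weights converge toward the teacher's
```

The point of the sketch is that nothing in `distill` touches `TEACHER_W` or `TEACHER_B`: enough query/response pairs alone let the student recover the teacher's behavior, which is why the labs monitor output harvesting rather than weight theft.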