
OpenAI, Anthropic and Google Join Forces to Stop AI Model Copying by Chinese Firms

The move targets large-scale copying that the companies warn endangers both profits and national security.

Overview

  • OpenAI, Anthropic and Google are sharing signals through the Frontier Model Forum to detect “adversarial distillation,” a step OpenAI confirmed Tuesday.
  • Adversarial distillation uses massive batches of API prompts to mimic a model’s behavior and train a cheaper clone without permission, which can undercut paid services; a simplified sketch of the technique follows this list.
  • Scrutiny spiked after DeepSeek released its R1 model in January 2025, prompting Microsoft and OpenAI to investigate whether large volumes of their outputs were extracted to help build it.
  • Anthropic blocked access to Claude from Chinese-controlled entities and said DeepSeek, Moonshot and MiniMax ran more than 16 million exchanges through roughly 24,000 fake accounts, warning that distilled clones often lack safety guardrails.
  • The companies say antitrust uncertainty limits how much intelligence they can share, and the Trump administration’s AI Action Plan proposes a formal information‑sharing center to help coordinate defenses.
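For readers unfamiliar with the mechanics, the sketch below illustrates the data-harvesting step at the core of adversarial distillation: sending bulk prompts to a target model's API and logging the prompt/response pairs as training data for a cheaper "student" model. This is a minimal illustration only; `query_target_model` is a hypothetical stand-in, not any company's real API, and a genuine run would span millions of prompts across many accounts.

```python
import json
from typing import Iterable


def query_target_model(prompt: str) -> str:
    """Hypothetical stand-in for a paid API call to the target ("teacher") model."""
    return f"<teacher response to: {prompt}>"


def harvest_pairs(prompts: Iterable[str], out_path: str) -> None:
    """Send batches of prompts and record prompt/response pairs as distillation data."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            record = {"prompt": prompt, "response": query_target_model(prompt)}
            f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    # In a real distillation run the prompt list would number in the millions
    # (the article cites more than 16 million exchanges); the resulting file
    # would then be used to fine-tune a smaller clone of the teacher model.
    harvest_pairs(["Explain photosynthesis.", "Write a haiku about rain."], "distill_data.jsonl")
```

The high-volume, templated querying this pattern requires is what the shared detection signals are meant to surface, since it looks very different from ordinary customer traffic.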