Particle.news

Anthropic Alleges DeepSeek, Moonshot AI and MiniMax Ran 24,000 Fake Accounts to Distill Claude

The company says the industrial‑scale harvesting exploits a U.S. policy gap, replicating reinforcement‑trained capabilities without any transfer of chips or model weights.

Overview

  • Anthropic reports more than 16 million exchanges with Claude were generated through roughly 24,000 fraudulent accounts in coordinated distillation campaigns by three China‑based labs.
  • The activity targeted Claude’s highest‑value functions such as complex reasoning, coding and tool use rather than casual prompts, according to Anthropic’s threat team.
  • Investigators attribute the operations using IP correlations, request metadata and infrastructure indicators; observed volumes include about 150,000 exchanges tied to DeepSeek, 3.4 million to Moonshot and over 13 million to MiniMax.
  • Anthropic says it intervened, strengthened behavioral detection and account verification, and shared intelligence with U.S. authorities and industry peers, while noting no evidence of direct Chinese government coordination.
  • The company warns that distilled models may lack safety guardrails and could be funneled into cyber, disinformation or surveillance systems; OpenAI and Google have separately reported similar harvesting attempts targeting their models.
  • Pentagon talks with Anthropic over military use of Claude are ongoing.