- Anthropic says DeepSeek, Moonshot AI, and MiniMax ran 16M+ exchanges to extract Claude capabilities
- Activity allegedly relied on fraudulent accounts to bypass regional restrictions and access controls
- Illicit distillation raises export control, security, and infrastructure concerns
Anthropic says it has uncovered coordinated campaigns by DeepSeek, Moonshot AI, and MiniMax aimed at extracting capabilities from its Claude models. According to the company, the three labs generated more than 16 million exchanges using roughly 24,000 fraudulent accounts, violating terms of service and regional access controls.

The activity was reportedly identified through IP correlations, metadata analysis, infrastructure indicators, and cross-industry intelligence sharing. Anthropic claims the exchanges were not random testing, but systematic efforts to replicate Claude’s reasoning, coding, and tool-use capabilities.
Distillation at Scale
At the center of the dispute is “distillation,” a method in which a smaller model is trained on the outputs of a more capable system. Frontier AI labs commonly use distillation internally to create lighter versions of their own models. Anthropic argues that in this case, the technique was deployed externally and illicitly to reproduce Claude’s strengths at scale.
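For readers unfamiliar with the mechanics, the sketch below shows the textbook (Hinton-style) form of distillation in PyTorch, where a student is trained to match a teacher’s softened output distribution. It is purely illustrative: the function name, temperature, and toy tensors are assumptions, not details from Anthropic’s report. Notably, API-only extraction of the kind alleged here would not expose teacher logits at all, so in practice it reduces to fine-tuning a student on sampled teacher text.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Hinton-style distillation: push the student toward the teacher's
    softened next-token distribution via a KL-divergence penalty."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_targets,
                    reduction="batchmean") * temperature ** 2

# Toy example: a batch of 4 positions over a 32-token vocabulary.
teacher = torch.randn(4, 32)                        # frozen teacher's logits
student = torch.randn(4, 32, requires_grad=True)    # trainable student logits
loss = distillation_loss(student, teacher)
loss.backward()                                     # gradients flow only into the student
```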
DeepSeek allegedly conducted more than 150,000 exchanges focused on reasoning tasks and eliciting detailed step-by-step explanations. Moonshot AI reportedly ran over 3.4 million exchanges targeting coding and agentic reasoning. MiniMax accounted for more than 13 million exchanges, with Anthropic detecting traffic spikes aligned with Claude model updates.
Security and Export Control Implications
Anthropic warns that models trained through unauthorized distillation may not inherit the same safety guardrails embedded in Claude. The concern is not just intellectual property theft, but potential misuse in sensitive domains such as cyber operations or biological research.
The company also argues that large-scale capability replication could undermine U.S. export controls. If foreign labs can reproduce restricted model capabilities indirectly, enforcement mechanisms lose effectiveness.
Defensive Measures and Industry Coordination
In response, Anthropic says it has deployed new behavioral detection systems, strengthened account verification, and shared intelligence with industry peers and authorities. It is also developing product-level and API safeguards designed to limit the effectiveness of distillation attempts without disrupting legitimate users.
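Anthropic has not disclosed how its behavioral detection works. As a rough illustration of the kind of signal such systems consume, the sketch below flags accounts whose latest daily request volume is an extreme outlier against their own historical baseline; the account names, counts, and z-score threshold are all hypothetical.

```python
from statistics import mean, stdev

def flag_anomalous_accounts(daily_counts: dict[str, list[int]],
                            z_threshold: float = 4.0) -> list[str]:
    """Flag accounts whose latest daily request count is an extreme
    outlier against their own baseline (simple z-score test)."""
    flagged = []
    for account, counts in daily_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough baseline to judge
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1.0  # avoid division by zero on perfectly flat baselines
        if (latest - mu) / sigma > z_threshold:
            flagged.append(account)
    return flagged

# Example: a steady account vs. one whose traffic suddenly jumps ~50x.
print(flag_anomalous_accounts({
    "acct_a": [100, 110, 95, 105, 102],
    "acct_b": [120, 118, 125, 119, 6000],
}))  # -> ['acct_b']
```

A production pipeline would fuse many such signals (IP correlations, metadata, infrastructure indicators) rather than rely on a single statistic.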

The broader message is that no single lab can address this alone. Anthropic stated that mitigating large-scale distillation will require coordination among AI developers, cloud providers, and policymakers.
Why This Matters Beyond AI
The episode underscores a growing tension in frontier technology: open access versus controlled capability. As AI systems become more powerful, the trade-off between innovation and security sharpens.
For sectors like crypto and digital infrastructure, the lesson is familiar. When core systems scale rapidly, adversarial behavior scales with them. The challenge is not just building advanced models; it is protecting them without stifling legitimate use.