OpenAI, Anthropic, and Google Are Sharing Intel to Stop China From Cloning Their Models
In an unprecedented move, rival AI labs OpenAI, Anthropic, and Google are sharing threat intelligence through the Frontier Model Forum to detect and block attempts by Chinese companies to steal their models via adversarial distillation — with Anthropic citing 16 million fraudulent exchanges from 24,000 fake accounts linked to DeepSeek, Moonshot AI, and MiniMax.
In what may be the first major intelligence-sharing operation in the AI industry, OpenAI, Anthropic, and Google announced on April 6-7, 2026 that they are actively sharing threat data through the Frontier Model Forum — the nonprofit the three companies co-founded with Microsoft in 2023 — to block adversarial distillation attempts by Chinese AI companies.
Adversarial distillation is the practice of extracting knowledge from a more capable "teacher" model by generating massive volumes of query-response pairs and using them to fine-tune a cheaper "student" model. The technique is legal in many jurisdictions but typically violates terms of service of major AI providers, and the resulting distilled models often lack the safety guardrails of the original.
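To make the mechanics concrete, here is a minimal sketch of what a distillation pipeline looks like in practice. It is illustrative only, not any lab's actual pipeline: `query_teacher` is a hypothetical stand-in for a commercial teacher-model API call, and the JSONL output is just one common shape for supervised fine-tuning data.

```python
# Minimal sketch of the distillation loop described above. Illustrative
# only: query_teacher() is a placeholder for a commercial "teacher" API
# call, and the JSONL format is one common shape for fine-tuning data.
import json

def query_teacher(prompt: str) -> str:
    """Placeholder for a teacher-model API call (hypothetical)."""
    raise NotImplementedError

def build_distillation_set(prompts: list[str], out_path: str) -> None:
    """Turn teacher responses into fine-tuning examples for a student model."""
    with open(out_path, "w") as f:
        for prompt in prompts:
            response = query_teacher(prompt)
            # Each (prompt, response) pair becomes one supervised
            # training example for the cheaper "student" model.
            f.write(json.dumps({"prompt": prompt, "completion": response}) + "\n")
```

Run at the scale Anthropic describes — millions of exchanges across thousands of accounts — a loop like this produces a training set large enough to transfer much of the teacher's capability to the student.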
Anthropic disclosed the most specific figures: three Chinese AI firms — DeepSeek, Moonshot AI, and MiniMax — collectively generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts. The company said the threat "extends beyond any single company or region" and poses national security risks, since distilled models stripped of safety layers can be used for purposes the original providers explicitly prohibit.
This marks the first time the Frontier Model Forum has functioned as a live threat-intelligence operation against a specific external adversary, shifting it from a policy coordination body into an operational security alliance. All three companies are sharing attack-pattern data to improve detection and blocking, treating model extraction the way traditional cloud providers treat credential stuffing or DDoS attacks.
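None of the three companies has published its detection logic. As a rough illustration of the kind of pattern-based heuristic that analogy implies, the sketch below flags accounts whose traffic resembles bulk extraction; the thresholds and the `AccountStats` fields are invented for this example.

```python
# Illustrative heuristic only -- the labs have not disclosed how they
# detect extraction traffic. The idea: organic users repeat and refine
# prompts at human pace, while extraction scripts send huge volumes of
# mostly unique prompts. All thresholds here are invented.
from dataclasses import dataclass

@dataclass
class AccountStats:
    account_id: str
    requests_per_day: float
    distinct_prompt_ratio: float  # unique prompts / total prompts

def looks_like_extraction(stats: AccountStats,
                          volume_threshold: float = 5_000.0,
                          diversity_threshold: float = 0.95) -> bool:
    """Flag accounts combining sustained high volume with near-total prompt novelty."""
    return (stats.requests_per_day > volume_threshold
            and stats.distinct_prompt_ratio > diversity_threshold)
```

The value of the Forum's intelligence sharing is that signals like these get stronger when pooled: an account fleet that stays under one provider's thresholds can still stand out when its patterns recur across OpenAI, Anthropic, and Google simultaneously.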
The political timing is notable. The collaboration comes amid a broader US-China technology decoupling and growing congressional attention on AI security. By framing model theft as a national security issue rather than a terms-of-service dispute, the labs are explicitly inviting regulatory backing for their detection and enforcement efforts.
Panel Takes
The Builder
Developer Perspective
“The 24,000 fraudulent accounts figure from Anthropic is the real signal here — this isn't opportunistic scraping, it's a coordinated extraction operation at scale. If you're building on top of Claude or GPT APIs, know that your provider is now actively involved in geopolitical threat intelligence, which has implications for API reliability and rate limiting policies.”
The Skeptic
Reality Check
“The irony of OpenAI, Google, and Anthropic uniting against model copying is that these same companies built their models on data scraped without explicit consent. The distillation they're now trying to stop is technically the same knowledge transfer they performed at scale on human-created content. This is regulatory framing as much as genuine security action.”
The Futurist
Big Picture
“The Frontier Model Forum just became the AI equivalent of the Five Eyes intelligence alliance — a closed group sharing threat intelligence about a specific adversary. This is how AI governance actually gets built: not through UN treaties but through industry security coalitions that preempt regulatory action by demonstrating self-governance. Watch this structure carefully.”