OpenAI, Anthropic, and Google walk into a room. No, this is not the setup for a joke. These three companies have spent the last few years trying to outbuild, outspend, and outmarket each other in the most expensive tech race since the space program. And now they are sharing intelligence like old war buddies, because someone in China has been stealing their homework.
Three AI Rivals, One Common Enemy
The announcement came through the Frontier Model Forum, an industry nonprofit the three companies co-founded with Microsoft back in 2023. The mission: detect and block “adversarial distillation,” a technique where a competitor floods an AI model with carefully designed prompts, collects the outputs, and uses them to train a cheaper clone. Think of it as photocopying someone else’s brain, one question at a time.
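To see what that looks like in practice, here is a minimal sketch of the harvesting side of distillation. Everything in it is illustrative: the endpoint URL, the response schema, and the API key are hypothetical stand-ins, not any lab's actual API.

```python
import json
import requests

API_URL = "https://api.example-lab.com/v1/chat"  # hypothetical endpoint
API_KEY = "sk-burner-account-key"                # one of many throwaway keys

def harvest(prompts: list[str], out_path: str) -> None:
    """Query the target model and save prompt/response pairs as training data."""
    with open(out_path, "a", encoding="utf-8") as f:
        for prompt in prompts:
            resp = requests.post(
                API_URL,
                headers={"Authorization": f"Bearer {API_KEY}"},
                json={"model": "target-model",
                      "messages": [{"role": "user", "content": prompt}]},
                timeout=60,
            )
            answer = resp.json()["choices"][0]["message"]["content"]
            # Each (prompt, answer) pair becomes one supervised fine-tuning example
            f.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```

Run that loop across tens of thousands of accounts and millions of carefully designed prompts, and you get the numbers below.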
The three Chinese labs named in this operation are DeepSeek, Moonshot AI, and MiniMax. According to Anthropic, they collectively created over 24,000 fraudulent accounts and generated more than 16 million exchanges with Claude. That is not casual browsing. That is an industrial extraction operation.
The Scale Is Wild
Let’s break it down by company, because the details matter. MiniMax was the most aggressive, responsible for over 13 million of those 16 million exchanges. Their target: Claude’s agentic coding and tool-use capabilities. Moonshot AI came second with 3.4 million exchanges, focused on reasoning, coding, data analysis, and computer vision. DeepSeek’s operation was smaller in volume but arguably the most disturbing in purpose: they used Claude to generate alternative responses to politically sensitive queries about dissidents, party leaders, and authoritarianism, likely training their own models to dodge censorship-related topics more effectively.
To avoid detection, these labs ran what Anthropic calls “hydra cluster” architectures, sprawling networks of fake accounts distributed across API access points and third-party cloud platforms. When one account got flagged, others picked up the slack. We’ve written before about how AI is reshaping cybersecurity, but this time the AI companies are on the receiving end.
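Conceptually, a "hydra" setup is just a credential pool with automatic failover. The sketch below is an assumption about the general pattern, not a reconstruction of the actual infrastructure Anthropic described:

```python
class HydraPool:
    """Rotate across a pool of burner API keys; drop keys that get banned."""

    def __init__(self, keys: list[str]) -> None:
        self.keys = set(keys)

    def send(self, request_fn, payload):
        # Try keys until one succeeds; banned keys are dropped,
        # and the remaining accounts pick up the slack.
        for key in list(self.keys):
            try:
                return request_fn(key, payload)
            except PermissionError:  # stand-in for an HTTP 403 "account suspended"
                self.keys.discard(key)
        raise RuntimeError("all accounts burned")
```

Cut off one head and another answers, hence the name.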
Why Rivals Are Cooperating
Here is the part that tells you everything about how serious this is. OpenAI, Anthropic, and Google do not agree on much. Their business models are different. Their safety philosophies are different. OpenAI just closed a $122 billion funding round. Anthropic raised $30 billion at a $380 billion valuation. Google has its own chips and infrastructure. These companies are not natural allies.
But when all three independently detect the same attack patterns, coming from the same actors, using the same techniques, the math changes. The collaboration works like cybersecurity threat intelligence sharing: when one company spots a suspicious pattern (unusual API usage, account behaviors consistent with systematic output extraction), it flags it for the others. Together, they can catch patterns that any single lab would miss.
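Mechanically, this resembles indicator-of-compromise sharing in conventional security. Here is a hedged sketch assuming a shared feed of hashed behavioral fingerprints; the field names and features are illustrative, not the Forum's actual schema:

```python
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass
class Indicator:
    """One shared threat indicator: who saw what, without raw customer data."""
    fingerprint: str   # hash of a behavioral profile, not the account itself
    pattern: str       # e.g. "systematic output extraction"
    reporter: str      # which lab flagged it

def fingerprint_account(features: dict) -> str:
    # Hash the behavioral profile so labs can compare notes
    # without exposing customer data to a competitor.
    blob = json.dumps(features, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

indicator = Indicator(
    fingerprint=fingerprint_account({"req_per_day": 50_000, "prompt_entropy": "low"}),
    pattern="systematic output extraction",
    reporter="lab-A",
)
print(json.dumps(asdict(indicator), indent=2))
```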
The Bigger Picture: A $300 Billion Quarter
This story does not exist in a vacuum. Q1 2026 just shattered every venture funding record in history: $300 billion poured into startups globally, with 80% of it going to AI companies. Four of the five largest venture rounds ever recorded closed in a single quarter. The frontier AI labs alone (OpenAI, Anthropic, xAI, Waymo) raised $188 billion combined. When the stakes are this high, protecting your intellectual property stops being a legal nicety and becomes a survival imperative.
Anthropic’s run-rate revenue hit $30 billion in March 2026, up from $9 billion at the end of 2025. Over 1,000 business customers now spend more than $1 million annually on Claude. That is a lot of value sitting behind an API, and every one of those 16 million distillation queries was an attempt to extract it for free.
Can They Actually Stop It?
This is the uncomfortable question nobody has a clean answer for. The technical challenge is brutal. Chain-of-Thought reasoning traces (the intermediate steps a model takes before reaching an answer) are one of the prime targets for distillation, because they reveal how a model thinks, not just what it concludes. But the same techniques used to extract training data can also be used for completely legitimate research, auditing, and testing. Drawing the line between “customer” and “attacker” is not straightforward when both look identical from the outside.
Watermarking outputs is one theoretical defense, but none of the three companies have confirmed deploying it at scale. Rate limiting and behavioral analysis catch the obvious cases, but a well-funded state-adjacent lab with thousands of proxy accounts and rotating infrastructure is a different kind of adversary than your average terms-of-service violator.
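To make "behavioral analysis" concrete, here is a toy heuristic. It assumes you log per-account request statistics; the signals and thresholds are invented for illustration, and real detection systems are far more sophisticated:

```python
def looks_like_extraction(reqs_per_day: int,
                          unique_prompt_ratio: float,
                          cot_request_ratio: float) -> bool:
    """Toy heuristic: high volume + templated prompts + heavy CoT harvesting."""
    high_volume = reqs_per_day > 10_000      # invented threshold
    templated = unique_prompt_ratio < 0.2    # prompts look machine-generated
    cot_heavy = cot_request_ratio > 0.8      # mostly requesting reasoning traces
    # Any two signals together is suspicious; all three is a near-certainty.
    return sum([high_volume, templated, cot_heavy]) >= 2
```

The catch, of course, is that a well-resourced adversary can tune its traffic to stay under any fixed threshold, which is exactly why static rules alone do not cut it.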
What This Means for the AI Race
The AI industry just entered its Cold War phase. Not in the apocalyptic, Skynet-takes-over sense, but in the geopolitical, espionage-and-alliances sense. American labs are building the most powerful models in history, Chinese labs are finding creative ways to close the gap without the same compute budgets, and the rules of engagement are being written in real time.
The Frontier Model Forum coalition is a first. If it works, expect it to expand. If it does not, expect something much more aggressive from Washington. Either way, the days of “open” AI development, where anyone can query any model and learn from the outputs, are numbered. The walls are going up, and this time it is not just about export controls on chips. It is about the intelligence inside the machines themselves.