Chinese AI labs accused of stealing from Anthropic’s Claude chatbot

FIRST ON FOX: As Washington tightens export controls to maintain America’s AI edge, leading AI firm Anthropic says three China-based AI labs have found another way to access advanced US capabilities.
According to a report first obtained by Fox News Digital, the US firm alleges that DeepSeek, Moonshot AI and MiniMax used approximately 24,000 fake accounts to make more than 16 million requests to Anthropic’s Claude chatbot in a coordinated “distillation” campaign designed to harvest high-value model outputs.
The threat goes beyond theft from US companies, according to the report. Anthropic argues that models created through large-scale distillation are unlikely to preserve the safety guardrails built into US frontier systems.
“Foreign labs that distill American models can then feed these vulnerable capabilities into military, intelligence, and surveillance systems, allowing authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance,” Anthropic said.
KYRSTEN SINEMA WARNS THAT IF AMERICA FALLS BEHIND IN THE TECHNOLOGY RACE, US ADVERSARIES WILL PROGRAM AI WITH ‘CHINESE VALUES’
It was reported that the US military used Anthropic’s artificial intelligence tool Claude during the operation that captured Venezuelan leader Nicolás Maduro. (Kurt “CyberGuy” Knutsson)
Anthropic says it detects campaigns using IP address correlations, request metadata and infrastructure indicators that differ sharply from normal customer traffic. The company said the activity focuses on Claude’s most advanced capabilities, including complex reasoning, coding and tool usage, rather than ordinary consumer prompts.
“We have high confidence that these labs are executing distillation attacks at scale,” Jacob Klein, Anthropic’s head of threat intelligence, told Fox News Digital.
Distillation is a common AI training technique in which a smaller or less capable model is trained on the outputs of a more powerful model.
Frontier labs often use this internally to create cheaper versions of their own systems. But Anthropic says the campaigns it uncovered are unauthorized and designed to shortcut years of research and reinforcement learning.
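The mechanics can be sketched in a few lines. The toy example below is illustrative only, not Anthropic’s, DeepSeek’s, or any lab’s actual pipeline: a hypothetical “student” distribution is nudged toward a “teacher” distribution by gradient descent on KL divergence, the standard objective when a smaller model is trained to imitate a larger model’s output probabilities.

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    # KL divergence KL(p || q) between two discrete distributions
    return float(np.sum(p * np.log(p / q)))

# Hypothetical "teacher" distribution over a 5-token vocabulary,
# standing in for the probabilities a frontier model assigns to its next token.
teacher = softmax(np.array([2.0, 1.0, 0.5, 0.2, 0.1]))

# The "student" starts from uniform logits and is pushed toward the
# teacher's distribution by gradient descent on KL(teacher || student).
student_logits = np.zeros(5)
lr = 1.0
for _ in range(200):
    student = softmax(student_logits)
    grad = student - teacher  # gradient of KL w.r.t. the student's logits
    student_logits -= lr * grad

before = kl(teacher, softmax(np.zeros(5)))
after = kl(teacher, softmax(student_logits))
print(before, after)          # divergence shrinks as the student imitates the teacher
```

In a real distillation campaign the teacher's probabilities (or sampled outputs) come from API responses rather than a known vector, but the training objective is the same: match the stronger model's behavior without repeating its training.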
DEMOCRATS GIVE TRUMP A GREEN LIGHT: NVIDIA AI CHIP SALES COULD INCREASE CHINA’S MILITARY SUPERIORITY
According to Klein, the three operations involved more than 16 million exchanges over a period of weeks to months. Anthropic intervened after detecting the activity, but acknowledged that the broader problem remains.
“There is no immediate silver bullet that will stop all of this,” Klein said. “We see this as an issue bigger than Anthropic.”
While the company can’t precisely quantify how much the Chinese labs improved their own systems through distillation, Klein said the capability gains are “meaningful” and “significant.”
“What we can say with confidence is that they have distilled us on a massive scale,” he said.
The report raises new questions about the effectiveness of current U.S. export controls, which largely focus on limiting China’s access to advanced AI chips and the direct transfer of model weights.
Klein argued that distillation targets a different layer of competitive advantage: the reinforcement learning process that refines and sharpens frontier models after they are trained.
“If you’re thinking about how to stay ahead in the AI race, computation is part of that,” Klein said. “But reinforcement learning is becoming increasingly critical. Distillation allows you to unlock those capabilities.”

China has been accused of stealing US artificial intelligence technology. (Kritsapong Jieantaratip via Getty Images)
He stressed that advanced chips were still “very important” but said policymakers needed to think about the issue “holistically.”
Anthropic said it is sharing its findings with relevant U.S. government agencies and industry partners. Klein suggested that naming the labs publicly could encourage “thoughtful government action” or at least interaction with the companies involved.
At the same time, the company said there was no evidence that the Chinese government directly coordinated the campaigns. But proxy services that resell access to US frontier AI models operate openly in China.
Washington has sought to slow China’s AI progress by limiting access to the most advanced computer chips used to train powerful systems. But Anthropic argues that even without direct access to those chips, foreign labs can replicate parts of a model’s intelligence by repeatedly querying it and training their own systems on the answers.

Anthropic has been in the headlines in recent weeks amid tensions with the Pentagon over how artificial intelligence models can be used in military operations. Secretary of War Pete Hegseth met with CEO Dario Amodei to discuss terms for the military’s use of Claude. (iStock)
On February 12, OpenAI sent a memo to the House Select Committee on the Chinese Communist Party alleging that Chinese AI startup DeepSeek was systematically “stealing” its intellectual property through large-scale distillation. According to OpenAI, DeepSeek employees used third-party routers and masking techniques to bypass geographic access restrictions and collect outputs from ChatGPT.
That same day, Google’s Threat Intelligence Group warned of what it described as “distillation attacks” targeting Gemini models. Google said it observed campaigns using more than 100,000 prompts aimed at replicating Gemini’s reasoning abilities. The company attributed the activity to “private sector companies” as well as state-linked actors.
The reports suggest that distillation has emerged as a growing flashpoint in the US-China AI race, raising questions about how leading American systems can be protected even when direct transfer of model weights and cutting-edge chips is restricted.
Administration officials said Anthropic raised concerns about Claude’s reported role in a U.S. operation targeting Venezuelan leader Nicolás Maduro and suggested the company would not approve such use of its product, while Anthropic said the disagreements centered on mass surveillance and fully autonomous weapons.



