24,000 Fake Accounts: Inside the AI Copying Allegations

Anthropic Alleges Massive Data Extraction From Claude

The420 Correspondent

Anthropic says it has uncovered what it describes as one of the largest known efforts to systematically extract knowledge from a leading artificial intelligence model.

In a detailed blog post, the San Francisco-based company accused three AI firms — DeepSeek, Moonshot and MiniMax — of orchestrating coordinated campaigns to harvest responses from its flagship model, Claude. According to Anthropic, the companies collectively created roughly 24,000 fake user accounts and generated more than 16 million interactions with the chatbot.

The activity, Anthropic said, was not the work of individual developers experimenting at the margins. Instead, it characterized the campaigns as highly structured operations designed to operate at scale while evading detection. The firms allegedly relied on proxy services, rotating access points and coordinated prompting patterns to extract responses over extended periods.

Moonshot’s effort accounted for more than 3.4 million interactions, Anthropic said. MiniMax’s campaign was larger still, exceeding 13 million prompts. DeepSeek’s activity was smaller but included attempts to coax the model into revealing detailed, step-by-step reasoning — information that could be particularly valuable for training rival systems.

The three companies did not immediately respond to requests for comment.

The Mechanics of “Distillation”

At the center of Anthropic’s complaint is a technique known as “distillation,” a widely used method in artificial intelligence development.

Distillation typically involves using the outputs of a large, sophisticated model to train a smaller or more efficient one. The approach allows developers to transfer knowledge from a powerful system into a more streamlined version that requires fewer computing resources.

When conducted internally, distillation is considered routine. Major AI companies regularly refine their own models using this method to improve performance and reduce costs.
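At a technical level, the core of distillation is a loss function that pushes a student model's output distribution toward a teacher's. The sketch below, a minimal illustration rather than any company's actual training code, computes the standard temperature-softened KL-divergence distillation loss from raw logits; the function names and temperature value are illustrative choices.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, optionally softened by a temperature."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Minimizing this loss trains the student to imitate the teacher's
    full output distribution, not just its top answer -- the essence
    of knowledge distillation. The T^2 factor is the usual scaling
    that keeps gradient magnitudes comparable across temperatures.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    return float(np.mean(kl)) * temperature ** 2

# A student that matches the teacher exactly incurs zero loss.
teacher = np.array([[2.0, 0.5, -1.0]])
assert abs(distillation_loss(teacher, teacher)) < 1e-9
# A mismatched student incurs a positive loss.
student = np.array([[0.0, 1.0, 0.0]])
assert distillation_loss(student, teacher) > 0
```

In the scenario Anthropic alleges, the "teacher logits" are not available directly; instead, the teacher's text responses would serve as training targets, which amounts to the same imitation objective applied through sampled outputs.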

The dispute arises when the source model belongs to a competitor.

Anthropic contends that distillation becomes problematic when companies use another firm’s proprietary system as a shortcut to replicate years of research and development. By systematically querying Claude and collecting its responses, the company argues, competitors may be able to reproduce elements of its reasoning, coding proficiency and tool-use capabilities without incurring the same training expenses.

In recent years, the cost of training frontier AI models has soared into the hundreds of millions — and in some cases billions — of dollars, driven largely by the computing power required to process vast datasets. Against that backdrop, the temptation to extract capabilities from an already trained system can be strong.

Industry researchers say that distinguishing between legitimate use and illicit extraction can be technically and legally complex. Large volumes of automated queries are not unusual in software ecosystems. Determining intent — and proving it — can be difficult.

Safety and Strategic Stakes

Anthropic’s concerns extend beyond intellectual property.

The company emphasized that its models are built with safety layers designed to prevent certain forms of misuse, such as generating harmful content or facilitating cyberattacks. If rival models are trained by copying outputs, Anthropic argues, those guardrails may not transfer in full.

The risk, the company suggested, is that powerful AI capabilities could proliferate without the same restrictions embedded in the original system.

The issue intersects with broader geopolitical tensions over advanced technology. Governments, particularly in the United States, have imposed export controls aimed at limiting access to cutting-edge semiconductors and AI systems. If companies can replicate advanced capabilities by systematically querying publicly available models, the effectiveness of those controls could be undermined.

The AI sector has increasingly become a strategic arena, where questions of national security, economic competitiveness and technological sovereignty overlap. As governments digitize infrastructure and integrate AI into defense, healthcare and financial systems, the implications of model replication extend beyond commercial rivalry.

Anthropic did not directly link its allegations to specific governments, but it framed the matter as part of a global competition in which access to frontier AI systems carries geopolitical weight.

Policing the Frontier

Anthropic said it has begun tightening safeguards around its systems, including improving detection mechanisms for suspicious activity and strengthening account verification processes. It is also seeking greater collaboration across the industry to identify coordinated extraction campaigns.

Still, executives acknowledged that the challenge cannot be solved by one company alone.

Large AI models are typically accessed through application programming interfaces, or APIs, that allow developers to send queries and receive responses at scale. That openness, essential for innovation and commercial integration, can also create vulnerabilities. The very infrastructure that makes AI widely usable can make it susceptible to systematic harvesting.

The episode highlights a shifting landscape in the artificial intelligence race. The contest is no longer confined to building bigger and faster models. It now includes efforts to protect proprietary capabilities, enforce digital boundaries and define acceptable norms of competition.

As AI systems become more powerful — and more central to economic and strategic power — the lines between innovation, imitation and appropriation are likely to be tested repeatedly.

For companies like Anthropic, the battle is not just about improving algorithms. It is about safeguarding the intellectual and safety frameworks that underpin them.

About the author — Suvedita Nath is a science student with a growing interest in cybercrime and digital safety. She writes on online activity, cyber threats, and technology-driven risks. Her work focuses on clarity, accuracy, and public awareness.
