Anthropic has accused DeepSeek and two other Chinese AI labs of unauthorized use of its advanced Claude models. By creating thousands of fake user accounts, these companies sent millions of requests to mine Claude's outputs and train their own models. Anthropic describes this as a clear violation of its terms of service, and the episode has raised security concerns at the global level.
Anthropic announced that three companies, DeepSeek, MiniMax, and Moonshot, ran an industrial-scale data extraction campaign. Using roughly 24,000 fake user accounts, they logged more than 16 million interactions with Claude and distilled its outputs.
Distillation is a process in which a weaker model is trained on the outputs of a more powerful one; Anthropic says these Chinese companies used the method to copy Claude's capabilities without authorization. It lets them reach advanced capabilities in a fraction of the time and at very little cost.
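In training terms, distillation means minimizing the divergence between the stronger model's output distribution (softened with a temperature) and the student's. The sketch below is purely illustrative of the technique described in the paragraph above, not any lab's actual pipeline; the function names and toy logits are assumptions for the example.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher temperature softens the
    # distribution, exposing more of the teacher's "dark knowledge"
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student
    # distributions; minimizing this trains the student to mimic
    # the teacher's outputs
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * np.log(p / q)))

# Toy example: a student that tracks the teacher closely incurs a
# smaller loss than one that does not
teacher  = [2.0, 0.5, -1.0]
close    = [1.9, 0.6, -0.9]
far      = [-1.0, 2.0, 0.5]
assert distillation_loss(teacher, close) < distillation_loss(teacher, far)
```

At scale, the "teacher outputs" are the millions of API responses the accused labs allegedly harvested, used as training targets for their own models.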
Anthropic accuses Chinese companies of mining Claude's data
Distilled models lack the original's layers of security and protection, which could allow governments to use the technology for cyberattacks, disinformation operations, and mass surveillance.

American companies such as Anthropic build their systems with strict safety protocols to prevent dangerous misuse (such as assistance with building biological weapons). But when foreign companies illegally distill these models, those layers of protection are lost, creating serious risks. If such unprotected models are released as open source, no government will be able to control them, and dangerous capabilities will be easily accessible to anyone.
Anthropic's detailed investigations show that each company had specific goals and coordinated traffic patterns. DeepSeek focused on Claude's reasoning capabilities, recording more than 150,000 interactions. DeepSeek's researchers asked Claude to generate anti-censorship responses on politically sensitive topics (such as party leaders, opposition groups, or authoritarian systems) in order to train their models to steer conversations on these topics.
Moonshot, the maker of the Kimi models, recorded more than 3.4 million interactions with Claude. Its main goal was to extract skills in coding, data analysis, and the development of intelligent agents. MiniMax likewise focused on coding and tool use, logging more than 13 million interactions.


For national security reasons, Anthropic does not offer its commercial services in China or to Chinese subsidiaries. The Chinese labs nevertheless built large networks of fake accounts by buying access through intermediary services. In this setup, known as a "Hydra cluster," whenever one account is blocked another immediately takes its place.
They blended their requests into normal user traffic so that security systems would not detect the repeating patterns. Anthropic emphasizes that these labs' rapid progress is not the result of independent innovation but of extracting information from American models.
RCO NEWS



