Chinese AI Firms Threaten National Security Through Illegal Extraction From US Companies: Anthropic
New Delhi, March 25 (IANS) United States-based artificial intelligence firm Anthropic has accused three Chinese unicorns, DeepSeek, MiniMax and Moonshot AI, of illegally extracting capabilities from its Claude model to advance their own systems, a new report has said.
According to the report from CNN Business, the US firm alleged that the theft, carried out through a process known as distillation, raised national security concerns.
The modus operandi of the alleged theft involved the creation of around 24,000 fraudulent accounts, which were used to train Chinese models through over 16 million exchanges with Claude.
The company warned that models produced this way may lack the safety guardrails implemented by companies such as itself, and could thus be used for cyberattacks and the development of biological weapons.
These models could enable "authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance," it said, warning that "the window to act is narrow."
CNN has reached out to DeepSeek, MiniMax and Moonshot AI for comment, the report said.
DeepSeek's sudden rise to prominence in China, where such firms have been dubbed "AI tigers", had led to a sense that US export controls had failed.
The three unicorns currently rank among the top 15 models on the prominent Artificial Analysis leaderboard, the report added.
However, Anthropic said that the distillation attempts demonstrated the effectiveness of export controls, arguing that cutting-edge model development cannot be sustained without access to advanced chips.
Similar claims have earlier been made by OpenAI, which accused DeepSeek of "free riding on the capabilities developed by OpenAI and other US frontier labs."
Anthropic PBC had recently been formally designated a "Supply Chain Risk (SCR)" by the US government, and the firm's CEO also apologised for criticising President Donald Trump.
The company clarified that the designation would apply only to the use of Anthropic's Claude models within Department of War contracts and not to "all use of Claude by customers who have such contracts."
-IANS