Anthropic warns the US lead over Chinese large models may last only a few months, calls for legislation banning "distillation attacks"
According to Beating Monitoring, Anthropic has released a policy document on US-China AI competition that, for the first time, classifies Chinese labs' distillation of US frontier models as a systematic act of industrial espionage. It urges the US Congress to outlaw the practice and calls for fully closing compute loopholes such as overseas data centers.
Anthropic assesses that although the US still holds an overall lead of 12 to 24 months, China's top models trail by only a few months in intelligence. The document attributes Chinese labs' ability to keep pace with the frontier to two loopholes. The first is access to restricted US chips: Anthropic explicitly names Alibaba and ByteDance as using Southeast Asian data centers to circumvent export bans, and notes DeepSeek's use of prohibited chips to train its latest models. The second is distilling US frontier models through large numbers of fake accounts, extracting innovations at extremely low cost.
Anthropic therefore calls for legislation that explicitly makes distillation attacks illegal, requests larger enforcement budgets to combat chip smuggling, and proposes a threat-intelligence-sharing mechanism between US laboratories and the government. To underscore the practical stakes of the capability gap, the document notes that its Mythos Preview model, released in April, helped Firefox fix more security vulnerabilities in a single month than in the entire previous year. Chinese cybersecurity analysts have described this capability as an automatic Gatling gun suddenly aimed at opponents.
While the industry still debates the ethical boundaries of distilling large models, Anthropic elevates the issue directly to national security, seeking to cut off this low-cost competitive shortcut by legal means.