DeepSeek V4 Sparks Debate Between Two US Factions: Think Tank Says Reliance on Banned Chips Puts It Half a Year Behind, Silicon Valley CEO Says It’s Open Innovation
ME News, April 24 (UTC+8). According to BlockBeats monitoring, Chris McGuire, Senior Research Fellow for China and Emerging Technologies at the Council on Foreign Relations (CFR) and a former official at the White House National Security Council and the Department of Defense, published a post arguing that V4 has not changed the U.S.-China AI competition landscape. Citing the V4 report itself, he pointed out that DeepSeek admits its reasoning capability is “about 3 to 6 months behind leading-edge models,” the comparison targets being GPT-5.2 and Gemini 3.0 Pro, which were released half a year earlier. He also noted that while the V4 report discloses inference-side adaptation to NVIDIA GPUs and Huawei Ascend NPUs, it does not reveal the specific GPU models or costs used for training (the V3 report had claimed 2,000 H800 cards at a cost of $5.57 million), and he believes this silence implies the use of export-controlled NVIDIA Blackwell chips. U.S. government officials had anonymously made similar claims in February, which NVIDIA dismissed as “far-fetched”; DeepSeek denied using Blackwell, saying the model was trained on NVIDIA H800 and Huawei Ascend 910C.
Replit CEO Amjad Masad responded sharply, saying that while U.S. politicians and lobbyists are stoking panic over “China distillation,” Chinese scientists are publicly sharing real AI breakthroughs. He cited the structural innovations listed in DeepSeek’s official tweet, including token-level attention compression (DeepSeek Sparse Attention) and a significant improvement in long-context computation efficiency, noting that at a 1M-token context length, V4-Pro’s per-token inference compute and KV cache usage are both far lower than V3.2’s. Masad believes this kind of architectural innovation has nothing to do with training-data distillation, and that everyone, including U.S. labs, can benefit from open source. (Source: BlockBeats)
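The efficiency claim Masad points to rests on a simple idea: each new token attends to only a small subset of the cached context rather than all of it. The sketch below is a generic toy top-k sparse attention in Python (numpy, with an illustrative function name and a fixed budget k), not DeepSeek’s actual DSA implementation, but it shows why per-token compute and KV reads can stay roughly flat as the cache grows.

```python
# Toy top-k sparse attention (illustrative only, not DeepSeek's implementation).
# A cheap scoring pass picks k relevant cached tokens per query, so the expensive
# attention step touches k entries instead of the full context length n.
import numpy as np

def sparse_attention(q, K, V, k=64):
    """q: (d,) query; K, V: (n, d) cached keys/values; k: tokens kept per query."""
    scores = K @ q                           # cheap relevance score for every cached token
    idx = np.argpartition(scores, -k)[-k:]   # indices of the k highest-scoring tokens
    sel = scores[idx] / np.sqrt(q.shape[0])  # scale only the selected scores
    w = np.exp(sel - sel.max())
    w /= w.sum()                             # softmax over the k selected tokens only
    return w @ V[idx]                        # weighted sum over k rows instead of n

rng = np.random.default_rng(0)
n, d = 10_000, 64                            # stand-in for a long KV cache
out = sparse_attention(rng.normal(size=d),
                       rng.normal(size=(n, d)),
                       rng.normal(size=(n, d)))
print(out.shape)                             # (64,)
```

In a real system the selection pass itself must be kept cheap (for example, a lightweight indexer), and the full KV cache is still stored; the saving reported for long contexts comes from how little of it each new token has to read.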