"Huawei chips delay the launch of DeepSeek V4"? The same core dominates both NVIDIA and Ascend, with nearly double the acceleration.
According to Beating Monitoring, a rumor circulated widely in the community ahead of the DeepSeek V4 release: the launch was later than expected because the model ran into adaptation difficulties when migrating from NVIDIA to the Huawei Ascend platform.

Although the V4 technical report does not address the rumor directly, the performance data it discloses clearly contradicts it.

The report shows that V4's fine-grained expert partitioning scheme (Fine-Grained EP Scheme) has been deployed and verified on both NVIDIA GPUs and Huawei Ascend NPUs, delivering 1.50x to 1.73x speedups on regular inference workloads and up to 1.96x in latency-sensitive scenarios such as RL rollouts and high-speed agent serving.

The team has also open-sourced the CUDA MegaMoE kernel as part of DeepGEMM. In other words, V4 reached near-theoretical peak efficiency on both hardware platforms, and cross-platform adaptation caused no performance loss.
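To make the expert-partitioning idea concrete, here is a minimal NumPy sketch of a Mixture-of-Experts layer in which many small experts are statically spread across devices and each token is dispatched to its top-k experts. This is an illustrative toy under assumed names, dimensions, and a hypothetical expert-to-device mapping; it is not DeepSeek's MegaMoE kernel or its actual partitioning strategy.

```python
import numpy as np

# Hypothetical sketch of fine-grained expert partitioning (EP) for a
# Mixture-of-Experts layer. NOT DeepSeek's implementation: all constants
# and the static expert-to-device mapping below are assumptions.

NUM_EXPERTS = 16       # fine-grained: many small experts
NUM_DEVICES = 4        # experts are partitioned across devices
TOP_K = 2              # each token is routed to its top-2 experts
D_MODEL, D_FF = 8, 16  # tiny dimensions, for demonstration only

rng = np.random.default_rng(0)

# Static partition: expert e lives on device e % NUM_DEVICES. In a real
# system each device would hold only its own experts' weights.
expert_to_device = np.arange(NUM_EXPERTS) % NUM_DEVICES
W_in = {e: rng.standard_normal((D_MODEL, D_FF)) * 0.1 for e in range(NUM_EXPERTS)}
W_out = {e: rng.standard_normal((D_FF, D_MODEL)) * 0.1 for e in range(NUM_EXPERTS)}

def softmax_rows(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def moe_forward(x, gate_w):
    """Route each token to its top-k experts and sum the gated outputs."""
    logits = x @ gate_w                           # (tokens, experts)
    probs = softmax_rows(logits)                  # gating weights
    topk = np.argsort(-logits, axis=1)[:, :TOP_K] # top-k expert ids per token
    out = np.zeros_like(x)
    for e in range(NUM_EXPERTS):
        # "Dispatch": gather the tokens routed to expert e. On a cluster this
        # gather/scatter is the all-to-all exchange that EP must keep balanced.
        rows, _ = np.nonzero(topk == e)
        if rows.size == 0:
            continue
        gate = probs[rows, e]                              # gating weight for e
        h = np.maximum(x[rows] @ W_in[e], 0.0) @ W_out[e]  # expert FFN (ReLU)
        out[rows] += gate[:, None] * h                     # "combine" step
    return out

tokens = rng.standard_normal((32, D_MODEL))
gate_w = rng.standard_normal((D_MODEL, NUM_EXPERTS))
y = moe_forward(tokens, gate_w)
print(y.shape, "expert placement:", expert_to_device)
```

In a real deployment the per-expert gather and scatter become an all-to-all exchange between devices, and placing many small experts on each device is what keeps that exchange balanced. That framing is plausibly why the scheme ports across NVIDIA and Ascend: the work is a kernel- and communication-level problem rather than a model redesign.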