DeepSeek releases a major update to its open-source GPU operator library DeepGEMM, adding Mega MoE, which merges the five-step MoE computation into a single kernel
ME News, April 16 (UTC+8): according to Dongcha Beating monitoring, DeepSeek today released the largest update to DeepGEMM since it was open-sourced. The GPU operator library, released during last February's "Open Source Week," was originally intended only for FP8 matrix multiplication; it has now been expanded into a complete operator library covering key parts of large-model inference. It supports matrix operations in multiple precisions, including FP8, FP4, and BF16, as well as specialized operators such as MoE and attention scoring. The core addition is Mega MoE.
The MoE (Mixture of Experts) architecture is the foundation of models such as DeepSeek V3. During inference it executes five steps in sequence: EP dispatch, a first linear transformation, the SwiGLU activation, a second linear transformation, and EP merging. In the traditional approach these five steps run as five separate kernels, each call waiting for the previous one to finish, with data repeatedly moved back and forth in GPU memory. Mega MoE fuses the five steps into a single kernel, allowing NVLink communication and Tensor Core computation to run simultaneously and eliminating the intermediate waits and data transfers. At present it supports only the FP8×FP4 precision combination and requires PyTorch 2.9 or above; the team says optimization is still ongoing, and performance comparison data will be announced later.
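To make the five-step pipeline concrete, here is a minimal PyTorch sketch of the unfused version, with one block of operations per step. The function and parameter names (moe_forward_unfused, w1, w3, w2, top-1 routing) are illustrative assumptions for this sketch, not DeepGEMM's actual API; Mega MoE's point is to collapse the boundaries between exactly these steps into a single kernel so that the dispatch/merge communication overlaps with the matrix math.

```python
import torch
import torch.nn.functional as F

def moe_forward_unfused(x, topk_idx, w1, w3, w2):
    """Schematic, unfused five-step MoE forward pass (illustrative only).

    x:        (num_tokens, hidden)          activations on this rank
    topk_idx: (num_tokens,)                 expert chosen per token (top-1 for simplicity)
    w1, w3:   (num_experts, hidden, inter)  gate / up projection weights
    w2:       (num_experts, inter, hidden)  down projection weights
    """
    out = torch.zeros_like(x)
    for e in range(w1.shape[0]):
        # Step 1: EP dispatch -- gather the tokens routed to expert e.
        # (With real expert parallelism this is an all-to-all over NVLink/RDMA.)
        mask = topk_idx == e
        if not mask.any():
            continue
        tokens = x[mask]

        # Step 2: first linear transformation (gate and up projections).
        gate = tokens @ w1[e]
        up = tokens @ w3[e]

        # Step 3: SwiGLU activation.
        hidden = F.silu(gate) * up

        # Step 4: second linear transformation (down projection).
        expert_out = hidden @ w2[e]

        # Step 5: EP merge -- scatter expert outputs back into token order.
        out[mask] = expert_out
    return out
```

Each boundary between these steps is a separate kernel launch and, for steps 1 and 5, a communication round; the fused Mega MoE kernel removes those boundaries rather than changing the math.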
Other additions include: mixed-precision FP8×FP4 matrix multiplication; an FP4 attention scoring operator (Indexer) that supports larger MTP; PDL (Programmatic Dependent Launch, a GPU scheduling optimization that reduces kernel launch latency); faster JIT compilation; and multiple optimizations for MoE matrix operations. The update also adapts to DeepEPv2's MoE data layout.
The PR description specifically notes: “This release is only related to DeepGEMM development and has nothing to do with internal model releases.”
(Source: BlockBeats)