Analysis: DeepSeek's open-source TileKernels library matches the V4 architecture specifications disclosed by Yifan Zhang
CryptoWorld News reports that the V4 architecture specifications disclosed by analyst Yifan Zhang align at multiple points with TileKernels, the open-source kernel library from DeepSeek. According to Zhang, the residual connections in V4 use manifold-constrained hyperconnections (MHC), an improved version of the hyperconnections (HC) proposed by the ByteSeed team in 2024, with an added doubly stochastic matrix constraint.

Inferring the V4 architecture from the TileKernels core code yields three matches and one mismatch against the model card. The model card confirms that V4 uses MHC (match), that V4 is an MoE model (match), and that the weights are stored in a hybrid of FP4 and FP8 (match). The one mismatch is the conditional memory module (Engram), which the model card does not mention.

The model card also reveals a component not covered by TileKernels: a hybrid attention mechanism (CSA + HCA), which Zhang identifies as the key to V4's large gain in long-context efficiency. At 1 million tokens, inference FLOPS are only 27% of V3's and the KV cache only 10%. Training was also switched to the Muon optimizer.
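Zhang's description of MHC mentions a doubly stochastic matrix constraint on the hyperconnection weights, but the article does not say how that constraint is enforced. A common way to push a matrix toward the doubly stochastic set (all rows and columns summing to 1) is Sinkhorn-Knopp normalization; the sketch below illustrates only that generic idea and is not DeepSeek's actual implementation — the function name and iteration count are our own choices.

```python
import numpy as np

def sinkhorn_doubly_stochastic(w, iters=50):
    """Project a real matrix toward the doubly stochastic set by
    exponentiating (to ensure positivity) and then alternately
    normalizing rows and columns (Sinkhorn-Knopp iteration).

    Illustrative only: not DeepSeek's MHC implementation.
    """
    m = np.exp(w)  # strictly positive entries
    for _ in range(iters):
        m /= m.sum(axis=1, keepdims=True)  # rows sum to 1
        m /= m.sum(axis=0, keepdims=True)  # columns sum to 1
    return m

rng = np.random.default_rng(0)
m = sinkhorn_doubly_stochastic(rng.normal(size=(4, 4)))
print(np.allclose(m.sum(axis=0), 1.0))          # columns: exact after last step
print(np.allclose(m.sum(axis=1), 1.0, atol=1e-4))  # rows: converged approximately
```

The appeal of such a constraint is that a doubly stochastic mixing matrix preserves the total "mass" routed across residual streams, which is one plausible reading of why a constraint of this form would stabilize hyperconnections.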
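To get a feel for what "KV cache at only 10%" means at 1 million tokens, the back-of-the-envelope arithmetic below plugs the reported ratio into a standard KV-cache size formula. Every model dimension here (layers, KV heads, head dim, cache precision) is a hypothetical placeholder, not V4's or V3's actual configuration.

```python
# Back-of-the-envelope KV-cache arithmetic for the reported 10x reduction.
# All model dimensions below are hypothetical placeholders, NOT V4's config.
layers = 60
kv_heads = 8
head_dim = 128
bytes_per_elem = 1          # assume an FP8 cache, one byte per element
tokens = 1_000_000

# K and V each store layers * kv_heads * head_dim values per token.
baseline_gb = 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens / 1e9
reduced_gb = baseline_gb * 0.10  # the reported "KV cache at only 10%"
print(f"baseline: {baseline_gb:.1f} GB, at 10%: {reduced_gb:.1f} GB")
```

Under these placeholder numbers, a tenfold cache reduction is the difference between a 1M-token context that spills across GPUs and one that fits on a single device, which is why the hybrid-attention claim would matter if confirmed.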