Google DeepMind open-sources the Gemma 4 multimodal model family
ME News, April 3 (UTC+8): Google DeepMind has open-sourced the Gemma 4 multimodal model family. The models accept text and image inputs (the smaller models also accept audio) and generate text outputs. The family includes both pre-trained and instruction-tuned variants, offers context windows of up to 256K tokens, and supports more than 140 languages. Two architectures are used, dense and Mixture of Experts (MoE), across four sizes: E2B, E4B, 26B A4B, and 31B. Core capabilities include high-performance inference, scalable multimodal processing, on-device optimization, expanded context windows, strengthened coding and agentic capabilities, and native system-prompt support.

On the technical side, the models use a hybrid attention mechanism in which the global layers share key-value pairs and use scaled RoPE (p-RoPE). The E2B and E4B models use per-layer embeddings (PLE), so their effective parameter count is lower than their total parameter count, while the 26B A4B MoE model activates only 3.8B parameters during inference and runs at speeds close to those of a 4B-parameter model. (Source: InfoQ)
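The "26B A4B" naming reflects the MoE design: each token is routed to only a few experts, so only a small slice of the stored parameters is active per step, which is how a 26B-parameter model can run at roughly 4B-model speed. The sketch below is a toy illustration of that top-k expert routing idea; the sizes, the router, and the top-k scheme here are illustrative assumptions, not Gemma's actual architecture.

```python
import numpy as np

# Toy Mixture-of-Experts layer: a router scores all experts per token and
# only the top-k experts actually run. Stored parameters scale with the
# number of experts; active parameters per token scale only with k.
# All sizes below are made up for illustration, not Gemma's real config.

rng = np.random.default_rng(0)

D_MODEL = 64    # hidden size (toy value)
N_EXPERTS = 8   # experts stored in the layer
TOP_K = 2       # experts actually run per token

# Each expert is a simple feed-forward weight matrix (D_MODEL x D_MODEL).
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02 for _ in range(N_EXPERTS)]
router = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02  # routing weights


def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts and mix the outputs."""
    logits = x @ router                    # one score per expert
    top = np.argsort(logits)[-TOP_K:]      # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen experts only
    # Only TOP_K of the N_EXPERTS weight matrices are touched for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))


token = rng.standard_normal(D_MODEL)
out = moe_forward(token)

stored = N_EXPERTS * D_MODEL * D_MODEL
active = TOP_K * D_MODEL * D_MODEL
print(f"stored expert params: {stored}, active per token: {active}")
```

Running this prints the stored versus per-token-active parameter counts, mirroring the total-versus-activated distinction the article draws for the 26B A4B model.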