Google DeepMind open-sources the Gemma 4 multimodal model family
ME News Update, April 3 (UTC+8): Google DeepMind has open-sourced the Gemma 4 multimodal model family. The family accepts text and image inputs (the smaller models also accept audio) and generates text outputs. It includes both pre-trained and instruction-tuned variants, supports context windows of up to 256K tokens, and covers more than 140 languages. The models come in two architectures, dense and mixture-of-experts (MoE), across four sizes: E2B, E4B, 26B A4B, and 31B.
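As a rough illustration of what "text and image in, text out" usually looks like in practice, the sketch below uses the Hugging Face Transformers image-text-to-text pipeline. The checkpoint name is a hypothetical placeholder; the article does not say where or under which IDs the weights are published.

```python
# Hedged sketch: multimodal (image + text -> text) inference via the Transformers
# pipeline API. The model ID below is a hypothetical placeholder, not a confirmed
# checkpoint name from the article.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="google/gemma-4-e4b-it")  # placeholder ID

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/chart.png"},
            {"type": "text", "text": "Summarize what this chart shows."},
        ],
    }
]

result = pipe(text=messages, max_new_tokens=128)
print(result)  # the assistant's reply is appended to the returned chat transcript
```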
Its core capabilities include high-performance inference, scalable multimodal processing, on-device optimization, expanded context windows, enhanced coding and agentic capabilities, and native system prompt support. On the technical side, the models use a hybrid attention mechanism in which the global layers use unified key-value pairs and a scaled RoPE variant (p-RoPE). The E2B and E4B models adopt per-layer embeddings (PLE), so their effective parameter count is lower than their total parameter count, while the 26B A4B MoE model activates only 3.8B parameters during inference and runs at a speed close to that of a 4B-parameter model. (Source: InfoQ)
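To make the "26B total, only 3.8B active" figure concrete, the toy PyTorch sketch below shows the general sparse mixture-of-experts technique: a router sends each token to a small top-k subset of experts, so only a fraction of the total expert parameters is exercised per token. This is a minimal sketch of the idea, not the Gemma implementation; all layer sizes and names are illustrative.

```python
# Toy sparse mixture-of-experts layer: each token runs through only its top-k experts,
# so the "active" parameter count per token is far below the total parameter count.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_hidden=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # scores every expert for each token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                   # x: (tokens, d_model)
        scores = self.router(x)                             # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)      # keep only the best k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                      # run tokens through their chosen experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = ToyMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64]); only 2 of the 8 experts ran per token
```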