Nous Research Releases Lighthouse Attention, Speeding Up Long-Sequence Pretraining by 1.4-1.7×
AIMPACT News, May 17 (UTC+8): Nous Research has introduced Lighthouse Attention, a selective hierarchical attention mechanism that tackles the quadratic growth of attention compute in long-sequence pretraining. The method applies symmetric pooling to the Query, Key, and Value tensors and keeps the selection logic outside the attention kernel, so the standard FlashAttention kernel can be reused unchanged; training proceeds in two stages.

In empirical tests on NVIDIA B200, the method achieves a 21× speedup for the forward pass at around 512K context length and a 17.3× combined forward + backward speedup, with the first training stage reaching a throughput of 126K tokens/sec/GPU versus 46K for dense SDPA. End-to-end acceleration ranges from 1.40× to 1.69× while maintaining matching or lower training loss. Validation on a 530M-parameter Llama-3-style model shows all three Lighthouse runs reaching final losses (0.698-0.71) below the from-scratch dense SDPA baseline (0.7237), saving 22.5-27 hours of training time. Paper: arXiv:2605.06554.
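To make the core idea concrete, here is a minimal PyTorch sketch of selective block attention in the spirit described above. It is a hypothetical illustration, not the paper's actual algorithm: the function name `selective_block_attention`, the mean-pooling of keys, the single pooled query summary, and the per-head top-k block selection are all assumptions. What it does show is the structural point from the article: the selection logic runs outside the attention kernel, and the final step reuses a stock fused kernel (`F.scaled_dot_product_attention` here, standing in for FlashAttention) on a reduced key/value set.

```python
import torch
import torch.nn.functional as F

def selective_block_attention(q, k, v, block_size=128, keep_blocks=8):
    """Hypothetical sketch of selective hierarchical attention.

    Pools keys into fixed-size blocks, scores each block against a pooled
    query summary, keeps only the top-scoring blocks per head, then runs a
    standard fused attention kernel over the surviving tokens.

    q, k, v: (batch, heads, seq_len, head_dim); seq_len must be divisible
    by block_size in this sketch. No causal mask, for simplicity.
    """
    b, h, n, d = k.shape
    nb = n // block_size
    keep = min(keep_blocks, nb)

    # Symmetric pooling: mean-pool keys over blocks; summarize all queries.
    k_blk = k.view(b, h, nb, block_size, d).mean(dim=3)   # (b, h, nb, d)
    q_sum = q.mean(dim=2)                                  # (b, h, d)

    # Selection happens outside the attention kernel: score key blocks
    # against the query summary and keep the top `keep` blocks per head.
    blk_scores = torch.einsum("bhd,bhnd->bhn", q_sum, k_blk)  # (b, h, nb)
    top_idx = blk_scores.topk(keep, dim=-1).indices            # (b, h, keep)

    # Expand selected block indices to token indices and gather K/V.
    offsets = torch.arange(block_size, device=k.device)
    tok_idx = top_idx.unsqueeze(-1) * block_size + offsets     # (b, h, keep, blk)
    tok_idx = tok_idx.reshape(b, h, keep * block_size)
    gather_idx = tok_idx.unsqueeze(-1).expand(-1, -1, -1, d)
    k_sel = k.gather(2, gather_idx)
    v_sel = v.gather(2, gather_idx)

    # Reuse the standard fused kernel on the reduced K/V set; attention
    # cost now scales with keep * block_size instead of the full seq_len.
    return F.scaled_dot_product_attention(q, k_sel, v_sel)

# Toy usage: 2K tokens split into 16 blocks, attending to only 8 of them.
q = torch.randn(1, 4, 2048, 64)
k = torch.randn(1, 4, 2048, 64)
v = torch.randn(1, 4, 2048, 64)
out = selective_block_attention(q, k, v)   # (1, 4, 2048, 64)
```

Because the block scoring and gather steps are ordinary tensor ops outside the kernel, this structure composes with any optimized dense attention implementation, which is the property the article credits for the FlashAttention reuse.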