Meta and others introduce BLT acceleration methods, cutting memory-bandwidth usage by up to 92%
AIMPACT News, May 12 (UTC+8): Research teams from Meta, Stanford University, and the University of Washington recently introduced three new methods that significantly accelerate inference for the Byte Latent Transformer (BLT). BLT is a language model that operates directly on raw bytes, dynamically grouping them into variable-length patches with an entropy-based segmentation strategy, and it matches the performance of token-based models. Because autoregressive decoding proceeds byte by byte and requires many forward passes, memory bandwidth becomes the main bottleneck.

The three acceleration methods are:

- BLT-D uses block discrete diffusion. Training combines next-byte prediction and masked-byte prediction losses, allowing the model to generate multiple bytes per forward pass. With a block size of 4, memory-bandwidth usage is less than half that of standard BLT; at block size 16 it drops by 87-92%.
- BLT-S uses the lightweight local decoder as a speculative draft generator. It requires no additional training, produces outputs identical to standard BLT under greedy decoding, and cuts memory-bandwidth usage by 77%.
- BLT-DV combines diffusion drafting with autoregressive verification, using the same model weights in both roles, and reduces memory-bandwidth usage by 81%.

All three methods benefit most on translation tasks, while coding tasks are more sensitive to block size. On likelihood-based benchmarks such as ARC-Easy, ARC-Challenge, PIQA, HellaSwag, and MMLU, BLT-D scores close to the BLT baseline, maintaining robust inference quality.
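To make the entropy-based patching idea concrete, here is a minimal illustrative sketch, not code from the paper or from any Meta release. It assumes a small byte-level model exposed as a callable `next_byte_probs` returning a 256-way distribution, and a hypothetical fixed threshold `ENTROPY_THRESHOLD`: a new patch starts whenever the predicted next-byte entropy exceeds the threshold, so hard-to-predict regions get finer patches.

```python
# Illustrative sketch of entropy-based byte patching in the spirit of BLT.
# Assumptions (not from the article): a tiny byte-level LM passed in as
# `next_byte_probs`, and a fixed, hand-picked entropy threshold.
import math
from typing import Callable, List

ENTROPY_THRESHOLD = 2.0  # hypothetical value; in practice a tuned hyperparameter


def entropy(probs: List[float]) -> float:
    """Shannon entropy (in bits) of a next-byte distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)


def segment_into_patches(
    data: bytes,
    next_byte_probs: Callable[[bytes], List[float]],
    threshold: float = ENTROPY_THRESHOLD,
) -> List[bytes]:
    """Group raw bytes into variable-length patches.

    A patch boundary is placed before byte i whenever the small model's
    next-byte entropy, given the prefix data[:i], exceeds `threshold`,
    i.e. unpredictable regions start new patches, easy regions are merged.
    """
    patches: List[bytes] = []
    current = bytearray()
    for i, b in enumerate(data):
        if current and entropy(next_byte_probs(bytes(data[:i]))) > threshold:
            patches.append(bytes(current))
            current = bytearray()
        current.append(b)
    if current:
        patches.append(bytes(current))
    return patches


if __name__ == "__main__":
    # Dummy model: a uniform distribution over all 256 byte values (8 bits of
    # entropy), so every byte becomes its own patch. Purely to show the interface.
    uniform = lambda prefix: [1 / 256] * 256
    print(segment_into_patches(b"hello", uniform))
```

Under this sketch, the memory-bandwidth bottleneck the article describes comes from the outer decode loop: each generated byte still needs a forward pass, which is what BLT-D, BLT-S, and BLT-DV attack by producing or verifying several bytes per pass.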