Ant launches trillion-parameter thinking model Ring-2.6-1T: PinchBench scores 87.60, surpassing GPT-5.4
According to Beating Monitoring, Ant Group's Bailing Large Model Team has launched Ring-2.6-1T, a trillion-parameter flagship reasoning model with 63 billion active parameters. The model is designed for complex tasks and production environments. Its core novelty is a "dynamic thinking intensity" mechanism that lets the system flexibly trade off cognitive depth, token cost, and execution speed.
Depending on compute requirements, the model offers two operating modes: high and xhigh. In agent mode (high), which focuses on multi-step execution and tool calls, it scores 87.60 on PinchBench, above GPT-5.4 xHigh and Gemini-3.1-Pro high, and 63.82 on ClawEval. In deep-thinking mode (xhigh), aimed at mathematical reasoning and scientific research, it scores 95.83 on AIME 26 and 88.27 on GPQA Diamond.
According to the official announcement, tasks such as text-format conversion and math competitions place markedly different demands on compute; the mechanism is designed to reduce token overhead so the model can serve as the default foundation for high-frequency scenarios such as tool orchestration, programming, and multi-turn interaction. Starting today, the model is available for a one-week free API trial on the OpenRouter platform, offered jointly with Novita (until May 15), and its weights will be open-sourced soon.
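Since OpenRouter exposes models through an OpenAI-compatible chat completions API, the two operating modes could be selected per request. The sketch below builds request payloads for each mode; note that the model identifier and the `reasoning` effort field are illustrative assumptions, as the announcement does not specify the exact API parameters.

```python
# Sketch: building OpenRouter-style chat completion payloads for the two
# Ring-2.6-1T operating modes described in the article ("high" for agent/tool
# tasks, "xhigh" for deep reasoning). The model id and the "reasoning" field
# are assumptions for illustration, not confirmed by the announcement.

def build_request(prompt: str, mode: str) -> dict:
    """Return an OpenAI-compatible chat completion payload.

    mode: "high"  -> agent mode (multi-step execution, tool calls)
          "xhigh" -> deep-thinking mode (math/science reasoning)
    """
    if mode not in ("high", "xhigh"):
        raise ValueError(f"unknown mode: {mode!r}")
    return {
        "model": "ring-2.6-1t",          # hypothetical model id
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"effort": mode},   # assumed mode-selection knob
    }

if __name__ == "__main__":
    agent_req = build_request("Summarize this log file.", "high")
    deep_req = build_request("Prove the statement above.", "xhigh")
    print(agent_req["reasoning"]["effort"])  # high
    print(deep_req["reasoning"]["effort"])   # xhigh
```

The payload would then be POSTed to the OpenRouter chat completions endpoint with an API key; only the effort field changes between the cheap agent path and the expensive deep-thinking path.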