According to Beating, Microsoft has recently open-sourced the Phi-Ground model family, which tackles the problem of where an AI agent should click on a computer screen. The 4-billion-parameter version, paired with a larger language model that handles instruction planning, beat OpenAI Operator and Claude Computer Use in click accuracy on the Showdown benchmark, and ranked first among all models under 10 billion parameters across five evaluations, including ScreenSpot-Pro.

The team trained on more than 40 million samples and found that three training techniques common in academic papers stop working at that scale. The key insight turned out to be simple: output coordinates as plain numbers, such as "523, 417". Earlier research invented specialized vocabularies for coordinates, but those methods do not scale. The team also found that placing the text instruction before the image improves performance, because the model already knows what to look for while it processes the pixels. Finally, reinforcement-learning methods such as DPO can still raise accuracy after supervised fine-tuning.
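The two reported recipe details, text-before-image prompt ordering and plain-number coordinate output, are easy to picture in code. The sketch below is a minimal illustration, not the actual Phi-Ground interface: the message structure, the file name, and the commented-out model call are all hypothetical placeholders.

```python
import re

# Hypothetical message structure for a vision-language GUI-grounding model.
# Per the reported finding, the text instruction is placed BEFORE the image,
# so the model already knows the target while it processes the pixels.
def build_grounding_prompt(instruction: str, screenshot_path: str) -> list[dict]:
    return [
        {"type": "text", "text": f"Click target: {instruction}. "
                                 "Answer with pixel coordinates as 'x, y'."},
        {"type": "image", "path": screenshot_path},  # image comes second
    ]

# The model emits coordinates as ordinary numbers ("523, 417") rather than
# a specialized coordinate vocabulary, so parsing the output is a plain
# regex over generated text.
def parse_click(output: str) -> tuple[int, int]:
    m = re.search(r"(\d+)\s*,\s*(\d+)", output)
    if m is None:
        raise ValueError(f"no coordinates found in: {output!r}")
    return int(m.group(1)), int(m.group(2))

if __name__ == "__main__":
    prompt = build_grounding_prompt("the blue 'Submit' button", "screen.png")
    # model_output = some_vlm.generate(prompt)  # hypothetical model call
    model_output = "523, 417"                   # example of the plain format
    print(parse_click(model_output))            # -> (523, 417)
```

The appeal of the plain-number format is that it reuses the tokens the language model already knows, rather than forcing it to learn a separate coordinate vocabulary that, per the report, fails to benefit from scale.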