Perplexity Reveals Post-Training Method for Its Search Agent: Qwen3.5-Based Model Surpasses GPT-5.4 on Both Accuracy and Cost
According to monitoring by Dongcha Beating, the Perplexity research team has published a technical article detailing the post-training process for its web search agent. The process builds on the open-source models Qwen3.5-122B-A10B and Qwen3.5-397B-A17B and follows a two-stage approach: supervised fine-tuning (SFT) first establishes the behaviors required for deployment, such as instruction adherence and language consistency; on-policy reinforcement learning (RL) then optimizes search accuracy and tool-use efficiency.

The RL phase uses the GRPO algorithm (sketched below), with training data drawn from two sources. The first is an in-house multi-hop verifiable question-answering dataset: questions requiring 2 to 4 reasoning hops are constructed from internal seed queries, and answer uniqueness is verified by multiple independent solvers. The second is rubric-based general dialogue data, which converts deployment requirements such as instruction adherence and format constraints into objectively checkable atomic conditions, so that the behaviors established during SFT do not degrade during RL.

The core of the reward design is gated aggregation: preference scores enter the calculation only when the base signal is correct, that is, when the answer is right or all rubric criteria are met, so a high preference score cannot mask factual errors. Efficiency penalties use intra-group anchoring: the correct answers within a sample group serve as the baseline, and smooth penalties are applied to excessive tool-invocation counts and generation lengths.

In evaluation, the post-trained Qwen3.5-397B-SFT-RL performs best across multiple search benchmarks. On FRAMES it reaches 57.3% with a single tool invocation, surpassing GPT-5.4 by 5.7 percentage points and Sonnet 4.6 by 4.7 percentage points. Under a medium budget of 4 tool invocations it reaches 73.9% at a cost of 2.0 cents per query; under the same conditions, GPT-5.4 scores 67.8% at 8.5 cents and Sonnet 4.6 scores 62.4% at 15.3 cents. Cost figures are based on each vendor's public API pricing and exclude cache optimization.
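GRPO replaces a learned value baseline with group-relative normalization: for each query, several rollouts are sampled and each reward is standardized against its own group. A minimal Python sketch of that advantage computation (the function and variable names are illustrative, not taken from Perplexity's code):

```python
import numpy as np

def grpo_advantages(group_rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Group-relative advantages: standardize each rollout's scalar reward
    against the mean and std of its own sample group (no critic network)."""
    return (group_rewards - group_rewards.mean()) / (group_rewards.std() + eps)

# Six rollouts for one query, scored by the (gated) reward function
rewards = np.array([1.0, 0.0, 1.0, 0.8, 0.0, 1.0])
print(grpo_advantages(rewards))  # positive for above-average rollouts
```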
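The multi-hop data pipeline keeps a candidate question only when independent solvers converge on a single answer. A hedged sketch of that uniqueness filter, assuming a simple callable-solver interface (the article does not describe the actual interface):

```python
def is_uniquely_answerable(question: str, solvers: list) -> bool:
    """Keep a candidate 2-4 hop question only if every independent solver
    returns the same normalized answer; disagreement or ambiguity drops
    the question from the training set."""
    answers = {solve(question).strip().lower() for solve in solvers}
    return len(answers) == 1

# Hypothetical usage with three independent solver callables:
# keep = is_uniquely_answerable(candidate_question, [solver_a, solver_b, solver_c])
```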
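The rubric data turns soft deployment requirements into atomic, objectively checkable predicates. A sketch of what such checks might look like; the specific conditions below are invented examples, not Perplexity's rubric:

```python
import re

# Invented example conditions; each is a deterministic predicate, so
# "all criteria met" is a binary, objective gate rather than a judgment call.
RUBRIC_CHECKS = [
    ("responds in the requested format", lambda r: r.lstrip().startswith("#")),
    ("stays within the length limit",    lambda r: len(r.split()) <= 300),
    ("cites at least one source",        lambda r: bool(re.search(r"\[\d+\]", r))),
]

def all_criteria_met(response: str) -> bool:
    return all(check(response) for _, check in RUBRIC_CHECKS)
```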
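Gated aggregation means the preference signal passes through a correctness gate instead of being added unconditionally. One plausible formulation, assuming scalar scores in [0, 1] (the article does not give the exact equation):

```python
def gated_reward(gate_open: bool, preference: float, penalty: float) -> float:
    """gate_open: the answer is correct, or all rubric criteria are met.
    Preference contributes only when the gate is open, so a fluent but
    factually wrong rollout cannot collect a high reward."""
    if not gate_open:
        return 0.0
    return 1.0 + preference - penalty
```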
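The efficiency penalty is anchored inside each sample group: correct rollouts define what a reasonable tool budget looks like for that particular query, and only usage beyond that anchor is penalized, smoothly. A sketch under those assumptions:

```python
import numpy as np

def anchored_penalty(tool_calls: np.ndarray, correct: np.ndarray,
                     scale: float = 0.1) -> np.ndarray:
    """Intra-group anchoring (illustrative): the mean tool-call count of
    correct rollouts is the anchor; tanh gives a smooth, bounded penalty
    for usage beyond it. The same idea applies to generation length."""
    if not correct.any():                      # no correct rollout -> no anchor
        return np.zeros_like(tool_calls, dtype=float)
    anchor = tool_calls[correct].mean()
    excess = np.maximum(tool_calls - anchor, 0.0)
    return scale * np.tanh(excess)             # bounded in [0, scale)
```

Feeding this penalty into the gated reward above reproduces the ordering the article describes: correct and frugal rollouts score highest, correct but wasteful ones slightly lower, and incorrect ones earn nothing regardless of how preferred their text is.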