Windsurf has trained a specialized small bug-detection model using RL; in internal evaluations it matches Claude Opus 4.6.
News report, April 15 (UTC+8): Cognition AI, the parent company of the AI programming tool Windsurf, has partnered with AI training company Applied Compute to train SWE-Check, a model specialized for code bug detection, using reinforcement learning. The model analyzes the user's current code changes (the diff), automatically flags potential bugs, and suggests fixes.

On evaluation sets drawn from the same distribution as the training data, SWE-Check's F1 score now matches Claude Opus 4.6 (the gap has closed from 0.09 to 0); on out-of-distribution evaluations, the gap has narrowed from 0.49 to 0.29, still behind frontier models but significant progress. Its key advantages are speed and cost: SWE-Check runs an order of magnitude faster than state-of-the-art models, with much lower inference cost, enabling real-time, free bug detection inside the IDE, something that directly calling a large model such as Opus 4.6 cannot deliver.

Two aspects of the training design are worth noting:

1. Reward linearization. The team wanted to optimize the global F-beta metric, but a global metric cannot be directly decomposed into per-sample rewards. They converted it into a per-sample reward function via a first-order approximation, which allowed the overall metric to be optimized effectively. Early versions had a high false-positive rate, so the team lowered beta from 1 to 0.5 to put more weight on precision.

2. Two-stage post-training. The first stage focuses solely on maximizing bug-detection ability, with no latency penalty; the second stage introduces a latency penalty based on the real-world distribution of how long users wait after triggering detection before switching away. This staged approach outperformed optimizing both objectives simultaneously, which can fall into local optima, such as a model that learns to be very fast but shallow in its analysis.
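The reward-linearization idea described above can be sketched concretely. The snippet below is a minimal illustration, not Windsurf's actual implementation: it computes the global F-beta score and then uses a finite-difference (first-order) approximation to turn the global metric into a decomposable per-sample reward, showing why beta = 0.5 penalizes false positives more heavily than false negatives. All function names and numbers are assumptions for illustration.

```python
# Hypothetical sketch of "reward linearization": decomposing the global
# F-beta metric into per-sample rewards via a first-order approximation
# around the current confusion counts. Not Windsurf's actual code.

def f_beta(tp: float, fp: float, fn: float, beta: float = 0.5) -> float:
    """Global F-beta over a batch. beta < 1 weights precision over recall."""
    b2 = beta * beta
    denom = (1 + b2) * tp + b2 * fn + fp
    return (1 + b2) * tp / denom if denom else 0.0

def per_sample_rewards(tp: float, fp: float, fn: float,
                       beta: float = 0.5, eps: float = 1.0) -> dict:
    """First-order effect of one additional TP / FP / FN on the global
    metric; each delta can then serve as that sample's scalar reward."""
    base = f_beta(tp, fp, fn, beta)
    return {
        "true_positive":  f_beta(tp + eps, fp, fn, beta) - base,  # reward: correct flag
        "false_positive": f_beta(tp, fp + eps, fn, beta) - base,  # penalty: false alarm
        "false_negative": f_beta(tp, fp, fn + eps, beta) - base,  # penalty: missed bug
    }
```

At beta = 1 the FP and FN penalties come out equal; at beta = 0.5 a false alarm costs noticeably more than a missed bug, which matches the report's rationale for lowering beta to curb false positives.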
SWE-Check's preview version is now available in Windsurf Next (shortcut Cmd+U) and will later be integrated into the official Windsurf release. (Source: BlockBeats)