📰 Alibaba Qianwen Announces Open-Source Release of Qwen-Scope
According to Dongcha Beating monitoring, Alibaba Qianwen has announced the open-sourcing of Qwen-Scope, an explainability module trained on the Qwen3 and Qwen3.5 series models. Application scenarios include directed control of inference results, data classification and synthesis, model training and optimization, and analysis and comparison of evaluation-sample distributions. The weights released with Qwen-Scope cover 7 large models, spanning dense and mixture-of-experts models from the Qwen3 and Qwen3.5 series, for a total of 14 sets of sparse autoencoder weights.
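For readers unfamiliar with the technique behind those weights: a sparse autoencoder (SAE) maps a model's internal activations into a larger, mostly-zero feature space and then reconstructs them, so individual features become easier to interpret. The sketch below is a minimal toy illustration of that idea in NumPy; all shapes and names are hypothetical, and it does not reflect Qwen-Scope's actual weights or API.

```python
# Toy sparse autoencoder forward pass (illustrative only; shapes are made up,
# not taken from Qwen-Scope).
import numpy as np

rng = np.random.default_rng(0)

d_model, d_dict = 16, 64  # activation width; overcomplete dictionary size
W_enc = rng.normal(scale=0.1, size=(d_model, d_dict))
b_enc = np.zeros(d_dict)
W_dec = rng.normal(scale=0.1, size=(d_dict, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode an activation vector into sparse features, then reconstruct it."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU leaves only a few features active
    x_hat = f @ W_dec + b_dec               # linear decode back to model space
    return f, x_hat

x = rng.normal(size=d_model)                # stand-in for a hidden activation
f, x_hat = sae_forward(x)
# Training would minimize reconstruction error plus an L1 sparsity penalty:
loss = np.mean((x - x_hat) ** 2) + 1e-3 * np.abs(f).sum()
print(f.shape, x_hat.shape)
```

The "14 sets of weights" in the announcement would correspond to pairs like `(W_enc, W_dec)` here, one trained per model or layer being analyzed.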
Alibaba Qianwen is open-sourcing again, and plenty of people are already hyping it as a shining "made in China" achievement. Qwen-Scope sounds high-end, an explainability module, but put plainly it's dressing a large model up in a costume so outsiders think it's amazing. Last time they hyped R1 and MoE before fully figuring them out, and now they're back with 14 sets of sparse autoencoder weights. If the code runs smoothly without stuttering, that's already decent. Retail investors, don't get carried away: this does nothing for the coin price, and institutions will dump as usual. As the saying goes, good code is the code that can pump the market.