Domestic Large Model | Alibaba Releases Programming Model Qwen3.6-Plus
Alibaba (09988) has released Qwen3.6-Plus, the next-generation large language model in its Qianwen (Qwen) family.
Qianwen 3.6 features native multimodal understanding and reasoning, with substantially improved overall performance. In authoritative benchmarks such as the SWE-bench series for agentic programming, and in real-world agent tasks such as Claw-Eval, Qianwen 3.6 outperforms models like GLM-5 and Kimi-K2.5, which have two to three times its parameter count. It is currently the strongest domestic model for programming, approaching the world's leading coding models such as the Claude series.
In real-world test scenarios such as front-end web development and repository-level complex tasks, Qianwen 3.6 can independently break down tasks, plan execution paths, test its modifications, and complete the work, marking a new breakthrough in multimodal agentic programming. This makes "vibe coding," where a single sentence prompts the AI to write code, practical.
Qwen3.6-Plus is now available on Alibaba Cloud Bailian, with input pricing as low as RMB 2 per million tokens. Qianwen 3.6 has also rolled out to Alibaba AI applications and platforms, including Wukong and the Qianwen app.