0G integrates Alibaba Cloud Qianwen LLM, and on-chain AI agents are the first to gain commercial large-model access
On April 21, the 0G Foundation announced an official partnership with Alibaba Cloud, bringing the Qianwen (Qwen) large language model into decentralized infrastructure so that AI agents can call commercial-grade LLMs directly from on-chain environments. Developers will pay for Qianwen inference through a token-gated access mechanism, effectively converting LLM calls into on-chain, measurable actions and making Qianwen one of the first major commercial LLMs to be embedded in a decentralized agent framework.
Token-Gated Access: How LLM Calls Become On-Chain, Measurable Actions
Under the cooperation agreement, developers pay for Qianwen inference through a token-based access mechanism rather than traditional cloud billing, making LLM calls measurable on-chain. When an AI agent invokes Qianwen inference, its computation requests and fee settlement are recorded as on-chain, auditable actions rather than completed silently in a centralized backend.
This architecture lets Web3 developers embed these LLM primitives into AI agents that can be minted, traded, and composed like other crypto-native assets, turning LLM calls into programmable resources that coexist with tokens, DeFi, and on-chain governance.
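The token-gated pattern described above can be sketched in a few lines. This is a hypothetical simulation, not 0G or Alibaba Cloud code: the `InferenceLedger` class, its fee values, and the receipt format are all illustrative assumptions. It shows the core idea that each inference call deducts a token fee and appends an auditable record, instead of being settled invisibly in a cloud backend.

```python
from dataclasses import dataclass, field

@dataclass
class InferenceLedger:
    """Toy stand-in for an on-chain ledger that gates LLM inference by token balance."""
    balances: dict                              # agent address -> token balance
    log: list = field(default_factory=list)     # append-only, auditable call records

    def call_llm(self, agent: str, prompt: str, fee: int = 10) -> str:
        # Token gate: the call is refused unless the agent can pay the fee.
        if self.balances.get(agent, 0) < fee:
            raise PermissionError("insufficient tokens for inference")
        self.balances[agent] -= fee             # fee settled in tokens
        # Every call becomes a measurable, auditable action on the ledger.
        self.log.append({"agent": agent, "prompt": prompt, "fee": fee})
        return f"response to: {prompt}"         # placeholder for the model's output

# An agent with 25 tokens makes one call costing 10 tokens.
ledger = InferenceLedger(balances={"agent-1": 25})
ledger.call_llm("agent-1", "summarize block 123")
print(ledger.balances["agent-1"])   # 15 tokens remain
print(len(ledger.log))              # 1 auditable record
```

In a real deployment the balance check and receipt would live in a smart contract rather than a Python object, but the accounting shape is the same: access is a token transfer, and usage is a public record.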
0G’s Agent Economy Vision, Backed by an $88.88 Million Ecosystem Fund
For 0G, the Qianwen integration is part of a broader strategy to build an on-chain “Agent Economy”: autonomous AI agents that can hold identities, pay for their own compute, and interact with other protocols without relying on the ecosystems of centralized AI platforms. Earlier this year, the 0G Foundation announced an $88.88 million ecosystem growth plan to fund decentralized AI agents and efficient decentralized applications (dApps).
Alibaba Cloud Qianwen’s Technical Foundation: 90,000 Deployment Instances Worldwide
Alibaba Cloud stated that the Qianwen series has been deployed in more than 90,000 instances worldwide, covering Qwen2.5 models ranging from 7 billion to 72 billion parameters as well as multimodal products such as Qianwen VL (vision) and Qianwen Audio, in both open-source and proprietary versions. The goal of the cooperation is to extend this enterprise-grade technology stack to a permissionless environment.
Frequently Asked Questions
What is the 0G Foundation, and why is it introducing the Qianwen LLM?
The 0G Foundation is a Web3 infrastructure project positioned as an “AI Layer (AIL) and a decentralized AI operating system (dAIOS),” aiming to provide a complete on-chain execution environment for autonomous AI agents. Introducing the Qianwen LLM is a specific step in building an on-chain Agent Economy, allowing AI agents to directly call commercial-grade LLMs when executing inference tasks without relying on a centralized API gateway.
What are the fundamental differences between token-gated LLM access and traditional cloud APIs?
Traditional cloud APIs rely on fiat billing and centralized identity verification, and every operation happens in a centralized backend, so calls are not externally auditable. Token-gated access maps LLM calls to on-chain, measurable actions with fees paid in tokens; the entire process can be verified on-chain, and AI agents can programmatically pay for and invoke inference services on their own.
What is the practical significance of this integration for Web3 developers?
Developers can embed Qianwen’s inference capabilities directly into on-chain AI agents, letting agents call enterprise-grade LLM intelligence while executing tasks without sacrificing their decentralized characteristics. If the experiment succeeds, it could become a concrete template for how hyperscale cloud vendors reconcile cloud-native AI with decentralization.