I’ve noticed a lot of talk going around about AI agents on the blockchain right now, but few people dig into why they don’t actually work the way we’d like. Galaxy Research has released an interesting report that lays bare the core of the problem.
It all starts with a paradox: blockchain is, by definition, a programmable, permissionless, open ecosystem, so it would seem to be an ideal environment for autonomous agents. But the trouble is that blockchain was built for consensus and deterministic execution, not for machines to understand the economic meaning of what’s happening.
That’s the key difference from traditional algorithmic systems. A typical algorithm can scan the DeFi market, find new contracts, allocate capital—automatically. But the moment an unfamiliar interface appears, the system stalls. It needs a human who can work through the code, understand the mechanics, and write the integration. The human interprets; the algorithm executes.
Agents based on large language models are shifting this boundary. They can read unfamiliar code, analyze documentation, and infer the system’s economic functions. It sounds powerful, but there’s a catch: they do it imperfectly, and in an environment involving real assets, a mistake can cost money.
Galaxy identified four main sources of friction. The first is discovery. On-chain, all deployed contracts look the same to the protocol, but the agent has to figure out which are legitimate, which are fakes, and which are abandoned projects. Humans solve this through websites, social signals, and interfaces. The agent only sees raw bytecode.
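A minimal sketch of the discovery friction, using entirely made-up addresses and bytecode (no real deployments are referenced). From the protocol's point of view, a contract is just an address plus bytecode; a byte-identical clone of a popular project carries none of the off-chain signals a human would use to tell them apart:

```python
# Hypothetical on-chain view of two deployed contracts: the protocol
# exposes only an address and raw bytecode -- there is no on-chain
# notion of "official" or "abandoned".
from dataclasses import dataclass

@dataclass(frozen=True)
class OnChainContract:
    address: str   # hypothetical address, truncated for illustration
    bytecode: str  # EVM bytecode as a hex string

# A fork or scam clone can ship byte-identical code.
legit = OnChainContract(address="0xAAA...", bytecode="0x6080604052...")
clone = OnChainContract(address="0xBBB...", bytecode="0x6080604052...")

# The only on-chain difference is the address; "which one is the real
# project" is off-chain knowledge (websites, audits, social signals).
print(legit.bytecode == clone.bytecode)  # → True
```

This is why "just read the chain" doesn't solve discovery: the data the agent can see is exactly the data a copycat can reproduce.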
The second friction is verification. Remember the story of WETH: on Ethereum, there are almost 200 tokens named “Wrapped Ether” with the symbol “WETH” and 18 decimals. How is a machine supposed to determine which one is real? The blockchain has no built-in concept of “this is the official contract of this application.” Curated registries and trusted interfaces help humans, but for an agent, it turns into a logical puzzle.
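The WETH ambiguity can be shown in a few lines. The addresses and the registry below are hypothetical illustrations, not real contracts; the point is that ERC-20 metadata (name, symbol, decimals) is freely copyable, so disambiguation requires an off-chain trust anchor:

```python
# Sketch: metadata alone cannot identify the canonical WETH.
# All addresses below are invented for illustration.
tokens = {
    "0x1111": {"name": "Wrapped Ether", "symbol": "WETH", "decimals": 18},
    "0x2222": {"name": "Wrapped Ether", "symbol": "WETH", "decimals": 18},
    "0x3333": {"name": "Wrapped Ether", "symbol": "WETH", "decimals": 18},
}

# Every candidate passes a metadata-based filter...
matches = [addr for addr, meta in tokens.items()
           if meta["symbol"] == "WETH" and meta["decimals"] == 18]
assert len(matches) == 3  # ambiguity: anyone can copy the metadata

# ...so the agent needs an external trust anchor, e.g. a curated
# registry mapping a semantic label to one canonical address.
CANONICAL = {"WETH": "0x1111"}  # hypothetical allowlist
real_weth = CANONICAL["WETH"]
print(real_weth)
```

The registry itself is exactly the kind of new infrastructure layer the report points to: it encodes a social fact ("this address is the real one") that the chain cannot express.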
The third friction is data. Imagine you want to compare yield between Aave v3 and Compound v3. Both are lending markets, and the economic concepts are the same. But the ways to obtain the data are completely different. In Aave, you first need to get a list of reserves, and then, for each one, make a separate call for liquidity and rates. In Compound, each deployment is its own market, and there’s no unified reserves structure at all. The agent has to use different methods for each protocol. This isn’t just inconvenient—it creates delays and introduces the risk of data desynchronization.
The fourth friction is execution. When you click a button in an interface, you’re informally checking: does this look reasonable? What’s the risk? Is slippage acceptable? People do this intuitively. Agents have to encode these checks explicitly. They have to translate the goal “maximize yield with risk control” into a concrete plan: choose the protocol, the market, the volume, and the sequence of actions. Then verify that every step complies with the constraints. And finally, make sure that the result actually matches the goal, even if the transaction is technically successful.
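What "encoding these checks explicitly" might look like, as a minimal sketch: a planned action is validated against hard constraints before execution. All thresholds and names here are illustrative assumptions, not a real framework:

```python
# Sketch: translating "maximize yield with risk control" into
# machine-checkable constraints. Thresholds are made up.
from dataclasses import dataclass

@dataclass
class PlannedAction:
    protocol: str
    market: str
    amount: float
    expected_slippage: float  # fraction, e.g. 0.002 = 0.2%

MAX_POSITION = 10_000.0       # cap per single action
MAX_SLIPPAGE = 0.005          # 0.5%
ALLOWED_PROTOCOLS = {"aave", "compound"}

def check(action: PlannedAction) -> list[str]:
    """Return the list of violated constraints (empty = action passes)."""
    violations = []
    if action.protocol not in ALLOWED_PROTOCOLS:
        violations.append("unapproved protocol")
    if action.amount > MAX_POSITION:
        violations.append("position too large")
    if action.expected_slippage > MAX_SLIPPAGE:
        violations.append("slippage above limit")
    return violations

ok = PlannedAction("aave", "USDC", 5_000.0, 0.002)
bad = PlannedAction("unknown-dex", "USDC", 50_000.0, 0.02)
print(check(ok))   # → []
print(check(bad))  # → ['unapproved protocol', 'position too large', 'slippage above limit']
```

The hard part is not the checks themselves but the final step the report highlights: verifying that a technically successful transaction actually achieved the economic goal.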
These aren’t just engineering problems—they’re structural contradictions. Blockchain is designed to guarantee the correctness of state transitions, but it does not guarantee that economic states are easy to interpret, that contracts are standardized, or that goals can be achieved.
Some of these problems are a consequence of openness and permissionlessness (which is both a strength and a weakness of blockchain). Others stem from the current state of tooling and infrastructure. But the main point is that all existing infrastructure assumes a human sits in between, interpreting the state and executing the action.
Galaxy suggests that the solution will require new layers: unifying the economic state across protocols, indexing services for semantic primitives, registries for verifying contracts and tokens, and execution frameworks with hard-coded constraints.
As agents begin to truly manage capital, the architectural assumptions of the current interaction layer will become increasingly obvious. It will be interesting to see how the blockchain ecosystem adapts. It seems there’s a lot of building ahead.