A16z: How can blockchain fill the gaps in AI Agent identity, payments, and trust?
Title: The Missing Infrastructure for AI Agents: 5 Ways Blockchains Can Help
Source: a16z crypto
Translation: AididiaoJP, Foresight News
Author: Rhythm BlockBeats
Repost: Mars Finance
AI Agents are rapidly evolving from auxiliary tools into genuine economic participants, at a pace far ahead of the infrastructure meant to support them.
Although Agents can now perform tasks and execute trades, they still lack a standardized cross-environment way to prove “Who am I,” “What am I authorized to do,” and “How do I get paid.” Identity cannot be migrated, payments are not yet default programmable, and collaboration remains siloed.
Blockchains are addressing these issues from the infrastructure level. Public ledgers provide verifiable proof for every transaction; wallets give Agents portable identities; stablecoins serve as an alternative settlement layer. These are not future concepts—they are available today and can help Agents operate as genuine economic entities in a permissionless manner.
Providing Identity for Non-Humans
The current bottleneck in the Agent economy is no longer intelligence but identity.
In the financial services industry alone, the number of non-human identities (automated trading systems, risk engines, fraud models) is already about 100 times that of human employees. As modern Agent frameworks (tool-calling large models, autonomous workflows, multi-Agent orchestration) are deployed at scale, this ratio will continue to rise across industries.
However, these Agents are still essentially “bank account-less.” They can interact with financial systems but cannot do so in a portable, verifiable, and default-trusted manner. They lack standardized ways to prove their permissions, operate independently across platforms, or take responsibility for their actions.
What is missing is a universal identity layer, akin to SSL for Agents, that can standardize collaboration across platforms. Current solutions remain fragmented: on one side are vertically integrated, fiat-first stacks; on the other, crypto-native open standards (like x402 and emerging Agent identity proposals); in between, developer framework extensions such as MCP (Model Context Protocol) attempt to bridge application-layer identities.
There is still no widely adopted, interoperable way for one Agent to prove to another: who it represents, what it is permitted to do, and how it gets paid.
This is the core idea behind KYA (Know Your Agent). Just as humans rely on credit records and KYC (Know Your Customer), Agents will need cryptographically signed credentials that bind them to entities, permissions, constraints, and reputation.
Blockchain provides a neutral coordination layer: portable identities, programmable wallets, and verifiable proofs that can be parsed in chat apps, APIs, and marketplaces.
Early implementations are already emerging: on-chain Agent registries, native wallet Agents using USDC, ERC standards for “minimally trusted Agents,” and developer toolkits combining identity with embedded payments and fraud controls.
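The KYA idea above can be made concrete with a small sketch. Everything here is illustrative: the field names, the principal, and the use of an HMAC as a stand-in for a real asymmetric signature (such as Ed25519, where the verifier would hold only the principal's public key) are assumptions, not any existing standard.

```python
import hashlib
import hmac
import json
import time

# Placeholder for the principal's signing key; a real credential would be
# signed with the principal's private key and verified with its public key.
PRINCIPAL_KEY = b"demo-secret"

def issue_credential(agent_id: str, acting_for: str, scopes: list,
                     max_spend_usd: float, ttl_seconds: int) -> dict:
    # Bind the Agent to an entity, a permission scope, a spend constraint,
    # and an expiry, then sign the canonicalized claims.
    claims = {
        "agent_id": agent_id,
        "acting_for": acting_for,
        "scopes": sorted(scopes),
        "max_spend_usd": max_spend_usd,
        "expires_at": int(time.time()) + ttl_seconds,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    claims["signature"] = hmac.new(PRINCIPAL_KEY, payload, hashlib.sha256).hexdigest()
    return claims

def verify_credential(cred: dict, required_scope: str) -> bool:
    # Any counterparty can check: who the Agent represents, whether the
    # requested scope was granted, and whether the credential is still valid.
    claims = {k: v for k, v in cred.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(PRINCIPAL_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, cred["signature"])
            and required_scope in cred["scopes"]
            and time.time() < cred["expires_at"])

cred = issue_credential("agent-42", "acme-corp", ["payments:send"], 100.0, 3600)
assert verify_credential(cred, "payments:send")        # granted scope passes
assert not verify_credential(cred, "payments:refund")  # ungranted scope fails
```

The point is the shape of the credential, not the crypto primitive: it answers, in one verifiable object, who the Agent represents, what it may do, and under what limits.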
But until universal identity standards are established, merchants will continue to block Agents at firewalls.
Governing AI-Running Systems
Agents are beginning to take over real systems, raising a new question: who truly controls them? Imagine a community or company where AI systems coordinate key resources—whether allocating capital or managing supply chains.
Even if people can vote on policy changes, if the underlying AI layer is controlled by a single provider capable of pushing model updates, adjusting constraints, or overriding decisions, that authority becomes fragile. The formal governance layer may be decentralized, but the operational layer remains centralized—who controls the model ultimately controls the outcome.
When Agents assume governance roles, they introduce a new dependency layer. In theory, this could make direct democracy more feasible: everyone could have an AI proxy to help understand complex proposals, model trade-offs, and vote according to their preferences.
But this vision only works if Agents are truly accountable to the people they represent, portable across providers, and technically constrained to follow human instructions. Otherwise, the system may appear democratic but is actually manipulated by opaque, uncontrolled model behaviors.
Given that today's Agents are built mainly on a few foundation models, we need ways to prove that an Agent is acting in its user's interest, not the model company's.
This likely requires cryptographic guarantees at multiple levels:
(1) The training data, fine-tuning, or reinforcement learning underlying the model instance;
(2) The specific prompts and instructions the Agent follows;
(3) Its actual behavior records in the real world;
(4) Trusted assurances that providers cannot alter instructions or retrain the model without notice. Without these guarantees, Agent governance devolves into control by those who manage the model weights.
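Guarantees (2) and (3) above can be sketched with nothing more than hash commitments: commit once to the exact instructions the Agent follows, then hash-chain its behavior log so a silent instruction swap or log edit becomes detectable. This is a minimal illustration, not any particular on-chain registry design.

```python
import hashlib
import json

def commit(data: bytes) -> str:
    # A plain SHA-256 commitment; on-chain, this digest would be published once.
    return hashlib.sha256(data).hexdigest()

instructions = b"Vote per the owner's published preferences; never self-delegate."
instruction_commitment = commit(instructions)

# The behavior log is a hash chain whose head starts from the committed
# instructions, so the log is bound to the exact instructions in force.
log_head = instruction_commitment
entries = []

def append_action(action: dict) -> None:
    global log_head
    entry = {"prev": log_head, "action": action}
    log_head = commit(json.dumps(entry, sort_keys=True).encode())
    entries.append(entry)

append_action({"type": "vote", "proposal": 17, "choice": "yes"})
append_action({"type": "vote", "proposal": 18, "choice": "no"})

def replay(instructions: bytes, entries: list) -> str:
    # Anyone holding the instructions and the log can recompute the chain
    # and compare against the head the Agent (or its provider) reported.
    head = commit(instructions)
    for e in entries:
        assert e["prev"] == head, "log was reordered or tampered with"
        head = commit(json.dumps(e, sort_keys=True).encode())
    return head

assert replay(instructions, entries) == log_head
```

If the provider retrains the model or swaps the prompt, the instruction commitment no longer matches, and any verifier replaying the chain catches the divergence.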
This is where cryptography can play a particularly powerful role. If collective decisions are recorded on-chain and automatically executed, AI systems can be required to strictly follow verified outcomes. If Agents have cryptographic identities and transparent execution logs, people can verify whether they are acting within boundaries.
If the AI layer is user-owned and portable, rather than locked to a single platform, no company can unilaterally change rules through a model update.
Ultimately, governing AI systems is an infrastructure challenge, not just a policy issue. True authority depends on building enforceable guarantees directly into the system architecture.
Filling the Gap Traditional Payments Leave for AI-Native Commerce
AI Agents are starting to purchase various services—web scraping, browsing sessions, image generation—while stablecoins are becoming an alternative settlement layer for these transactions. Meanwhile, a new market for Agent-oriented services is emerging.
For example, Stripe and Tempo’s MPP marketplace aggregates over 60 services designed specifically for AI Agents. In its first week, it processed over 34,000 transactions, with fees as low as $0.003, and stablecoins are one of the default payment methods.
The key difference is how these services are accessed: there are no checkout pages. Agents read schemas, send requests, pay, and receive outputs—all in a single exchange.
This represents a new class of headless merchants: just a server, a set of endpoints, and a price per call. No front-end interface, no sales team.
Payment infrastructure for this has already launched. Coinbase’s x402 and MPP use different approaches but both embed payments directly into HTTP requests. Visa is also expanding card payment rails in a similar direction, providing CLI tools that let developers spend from the terminal, with merchants receiving stablecoins instantly in the backend.
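The pattern these rails embed in HTTP can be sketched as a single machine-readable exchange: the first request is refused with a 402 status and a price quote, the Agent settles payment and attaches a proof, and the retry succeeds. The endpoint, header name, and proof format below are illustrative stand-ins, not the actual x402 or MPP wire format, and the "server" is mocked in-process.

```python
# Mock of an Agent-to-merchant call with an embedded payment step.
PRICE_USD = 0.003          # per-call price, matching the MPP figures above
settled_payments = set()   # stand-in for an on-chain settlement lookup

def settle_payment(amount: float) -> str:
    # Pretend a stablecoin transfer cleared and return a receipt the
    # merchant can look up; no payment processor is involved.
    receipt = f"txn-{len(settled_payments)}-{amount}"
    settled_payments.add(receipt)
    return receipt

def server(request: dict) -> dict:
    # Headless merchant: no checkout page, just a price and an endpoint.
    proof = request.get("headers", {}).get("X-Payment")
    if proof not in settled_payments:
        return {"status": 402, "price_usd": PRICE_USD, "pay_to": "0xMERCHANT"}
    return {"status": 200, "body": {"enriched_lead": {"company": "ExampleCo"}}}

def agent_call(path: str) -> dict:
    resp = server({"path": path})
    if resp["status"] == 402:  # quoted a price: pay, attach proof, retry
        receipt = settle_payment(resp["price_usd"])
        resp = server({"path": path, "headers": {"X-Payment": receipt}})
    return resp

result = agent_call("/v1/enrich")
assert result["status"] == 200
```

Note what is absent: no merchant account, no card form, no human in the flow. The Agent reads the quote, pays, and consumes the output in one round trip.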
Data is still early-stage. After filtering out non-organic activity like fraud, x402 handles about $1.6 million per month in Agent-driven payments, far below Bloomberg’s recent report of $24 million (citing x402.org data). But the surrounding infrastructure is rapidly expanding: Stripe, Cloudflare, Vercel, and Google have integrated x402 into their platforms.
Developer tools present a major opportunity. As “vibe coding” broadens the pool of people able to build software, the total addressable market for developer tools is growing. Companies like Merit Systems are building products for this world, such as AgentCash—a CLI wallet and marketplace connecting MPP and x402. These products enable Agents to use a single balance of stablecoins to purchase data, tools, and capabilities.
For example, a sales Agent can call an endpoint while simultaneously fetching data from Apollo, Google Maps, and Whitepages to enrich leads—all without leaving the command line.
This Agent-to-Agent commerce tends to use cryptographic payment rails (and emerging card solutions) for several reasons.
First, underwriting risk: traditional payment processors need to assume merchant risk, but a headless merchant without a website or legal entity is hard to underwrite.
Second, stablecoins on open networks are permissionless programmable assets: any developer can enable an endpoint to accept payments without involving a payment processor or signing merchant agreements.
We’ve seen this pattern before. Every shift in business form creates a new class of merchants that existing systems initially struggle to serve. Companies building this infrastructure are betting not on the current $1.6 million per month but on what happens when Agents become the default buyers—what that number could grow to.
Repricing Trust in the AI Economy
Over the past 300,000 years, human cognition has been the bottleneck for progress. Today, AI is pushing the marginal cost of execution toward zero. When scarce resources become abundant, the constraints shift. When intelligence becomes cheap, what becomes expensive? The answer is verification.
In the Agent economy, the real scalability limit is our biological capacity to audit and underwrite machine decision-making. Agents' throughput already far exceeds what humans can oversee. Because supervision is costly and failure detection lags, markets tend to underinvest in oversight. "Humans in the loop" is rapidly becoming physically impossible.
But deploying unverified Agents introduces compound risks. Systems relentlessly optimize “proxy” metrics while quietly diverging from human intent, creating hollow productivity illusions and accumulating massive AI debt. To safely delegate the economy to machines, trust can no longer rely solely on manual checks—trust must be embedded directly into the system architecture.
When anyone can generate content for free, the most critical factor is verifiable provenance—knowing where it came from and whether it can be trusted. Blockchain, on-chain proofs, and decentralized digital identity systems are changing what can be securely deployed in the economy. You no longer treat AI as a black box but gain clear, auditable histories.
As more AI Agents begin to transact with each other, settlement rails and provenance proofs are becoming tightly integrated.
Systems handling funds (like stablecoins and smart contracts) can also carry cryptographic credentials showing who did what and who is responsible if issues arise.
Humans’ comparative advantage will shift upward: from detecting small errors to setting strategic directions and bearing responsibility when things go wrong. Lasting advantage belongs to those who can cryptographically authenticate outputs, insure them, and absorb responsibility in failure.
Unverified scale is a liability that accumulates over time.
Maintaining User Control
Decades of new abstraction layers have defined how users interact with technology. Programming languages abstract away machine code; GUIs replaced command lines, followed by mobile apps and APIs. Each shift hides more underlying complexity but keeps users firmly in the loop.
In the Agent world, users specify outcomes rather than specific actions, and the system decides how to achieve them. Agents abstract away not only task execution but also who performs it. Users set initial parameters and then step back, letting the system run autonomously. The user's role shifts from interaction to supervision; unless the user intervenes, the default is "on."
As users delegate more tasks to Agents, new risks emerge: ambiguous inputs may cause Agents to act on incorrect assumptions without user awareness; failures may go unreported, making diagnosis difficult; a single approval could trigger complex multi-step workflows unexpectedly.
This is where cryptography can help. Cryptographic techniques have long aimed to minimize blind trust.
As users entrust more decisions to software, Agent systems make this problem more acute, demanding more rigorous design—by setting clearer limits, increasing visibility, and enforcing stronger guarantees about system capabilities.
Emerging cryptographic tools are proliferating. Scoped delegation frameworks, such as MetaMask's Delegation Toolkit, Coinbase's AgentKit and Agent Wallet, and Merit Systems' AgentCash, allow users to define at the smart contract level what Agents can and cannot do. Intent-based architectures (like NEAR Intents, which has processed over $15 billion in DEX volume since Q4 2024) let users specify desired outcomes (e.g., "bridge tokens and stake") without detailing how to achieve them.