How does blockchain fill the gaps in AI Agent identity, payments, and trust?

Written by: a16z crypto

Translated by: AididiaoJP, Foresight News

AI Agents are rapidly evolving from auxiliary tools into genuine economic participants, at a pace far outstripping the infrastructure that supports them.

Although Agents can now perform tasks and execute trades, they still lack a standardized way across environments to prove “Who am I,” “What am I authorized to do,” and “How do I get paid.” Identity cannot be migrated, payments are not yet default programmable, and collaboration remains siloed.

Blockchain is addressing these issues from the infrastructure layer. Public ledgers provide verifiable credentials for every transaction; wallets give Agents portable identities; stablecoins serve as an alternative settlement layer. These are not future concepts—they are available today and can help Agents operate as true economic entities in permissionless ways.

Providing Identity for Non-Humans

The current bottleneck in the Agent economy is no longer intelligence but identity.

In the financial services industry alone, the number of non-human identities (automated trading systems, risk engines, fraud models) is already about 100 times that of human employees. As modern Agent frameworks (tool-calling large models, autonomous workflows, multi-Agent orchestration) are deployed at scale, this ratio will continue to rise across industries.

However, these Agents are still essentially “bank account-less.” They can interact with financial systems but cannot do so in a portable, verifiable, and default-trusted manner. They lack standardized ways to prove their permissions, operate independently across platforms, or take responsibility for their actions.

What is missing is a universal identity layer, akin to SSL for Agents, that can standardize collaboration across platforms. Current solutions remain fragmented: on one side are vertically integrated, fiat-first stacks; on the other, crypto-native open standards (like x402 and emerging Agent identity proposals); and in between, developer frameworks attempting to bridge application-layer identities (such as MCP, the Model Context Protocol).

There is still no widely adopted, interoperable method for one Agent to prove to another: who it represents, what it is permitted to do, and how it gets paid.

This is the core idea behind KYA (Know Your Agent). Just as humans rely on credit records and KYC (Know Your Customer), Agents will need cryptographically signed credentials binding them to entities, permissions, constraints, and reputation. Blockchain provides a neutral coordination layer: portable identities, programmable wallets, and verifiable proofs that can be parsed in chat apps, APIs, and marketplaces.
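To make the KYA idea concrete, here is a minimal sketch of issuing and verifying a credential that binds an Agent to a principal, a set of permissions, and a spending constraint. It uses a shared-secret HMAC purely for illustration; a production system would use public-key signatures (e.g. Ed25519) and an on-chain registry, and every field name below is hypothetical.

```python
# Hypothetical KYA credential sketch: HMAC stands in for a real signature scheme.
import hmac, hashlib, json, time

ISSUER_SECRET = b"demo-issuer-key"  # stand-in for the issuer's signing key

def issue_credential(agent_id: str, principal: str, scopes: list[str],
                     spend_cap_usdc: float, ttl_s: int = 3600) -> dict:
    """Bind an Agent to an entity, its permissions, and its constraints."""
    claims = {
        "agent": agent_id,
        "represents": principal,        # who the Agent acts for
        "scopes": scopes,               # what it is permitted to do
        "spend_cap_usdc": spend_cap_usdc,
        "expires": int(time.time()) + ttl_s,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """A counterparty checks the binding before trusting the Agent."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, cred["sig"])
            and cred["claims"]["expires"] > time.time())

cred = issue_credential("agent-7", "acme-corp", ["pay:api-calls"], 50.0)
assert verify_credential(cred)
```

Any tampering with the claims (say, raising the spend cap) invalidates the signature, which is exactly the property that lets one Agent prove to another who it represents and what it may do.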

Early implementations are already emerging: on-chain Agent registries, native wallet Agents using USDC, ERC standards for “minimal trust Agents,” and developer toolkits combining identity with embedded payments and fraud controls.

But until universal identity standards emerge, merchants will continue to block Agents at firewalls.

Governing AI-Run Systems

Agents are beginning to take over real-world systems, raising a new question: who truly controls them? Imagine a community or company coordinated by AI systems managing key resources—whether allocating capital or managing supply chains. Even if people vote on policy changes, if the underlying AI layer is controlled by a single provider capable of pushing model updates, adjusting constraints, or overriding decisions, that authority is fragile. The formal governance layer might be decentralized, but the operational layer remains centralized—who controls the models ultimately controls the outcomes.

When Agents assume governance roles, they introduce a new dependency layer. In theory, this could make direct democracy more feasible: everyone could have an AI proxy to help understand complex proposals, model trade-offs, and vote according to preferences. But this vision only works if Agents are truly accountable to the humans they represent, portable across providers, and technically constrained to follow human instructions. Otherwise, the system may appear democratic but is actually manipulated by opaque, uncontrolled model behaviors.

If current reality is that Agents are built primarily on a few foundational models, we need ways to prove that an Agent acts in the user’s interest, not the model company’s. This likely requires cryptographic guarantees at multiple levels: (1) training data, fine-tuning, or reinforcement learning used to build the model instance; (2) the exact prompts and instructions the Agent follows; (3) its actual behavior records in the real world; (4) trusted assurances that the provider cannot alter instructions or retrain the model without user knowledge. Without these guarantees, Agent governance devolves into control by those who manage the model weights.
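Levels (2) and (3) above can be sketched with nothing more than hash commitments: commit to the Agent's exact instructions up front, then chain each recorded action to the previous one so any retroactive edit is detectable. Anchoring the head hash on-chain is assumed but not shown, and all record fields are illustrative.

```python
# Sketch: commitment to instructions plus a hash-chained behavior log.
import hashlib, json

def commit(data: dict) -> str:
    """Deterministic hash commitment over a JSON-serializable record."""
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

# (2) The exact instructions the Agent follows, committed before it runs.
instructions = {"system_prompt": "act only on behalf of the user", "version": 3}
instruction_commitment = commit(instructions)

# (3) An append-only log where each entry commits to its predecessor.
def append_log(log: list[dict], action: dict) -> None:
    prev = log[-1]["hash"] if log else instruction_commitment
    entry = {"action": action, "prev": prev}
    entry["hash"] = commit({"action": action, "prev": prev})
    log.append(entry)

def log_is_intact(log: list[dict]) -> bool:
    prev = instruction_commitment
    for e in log:
        if e["prev"] != prev or commit({"action": e["action"], "prev": prev}) != e["hash"]:
            return False
        prev = e["hash"]
    return True

log: list[dict] = []
append_log(log, {"op": "vote", "proposal": 12, "choice": "yes"})
append_log(log, {"op": "pay", "amount_usdc": 1.25})
assert log_is_intact(log)
```

If the head hash is published on-chain, neither the provider nor the Agent can silently rewrite the instructions or the history that follows from them.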

This is where cryptography can play a crucial role. If collective decisions are recorded on-chain and automatically enforced, AI systems can be required to strictly follow verified outcomes. If Agents have cryptographic identities and transparent execution logs, users can verify whether their proxies are acting within bounds. If the AI layer is owned and portable by users, not locked into a single platform, then any company cannot simply change the rules with a model update.

Ultimately, governing AI systems is an infrastructure challenge, not just a policy one. True authority depends on building enforceable guarantees into the system itself.

Filling the Gap Traditional Payment Systems Leave for AI-Native Business

AI Agents are starting to purchase various services—web scraping, browsing sessions, image generation—while stablecoins are becoming an alternative settlement layer for these transactions. Meanwhile, a new market for Agent-oriented services is emerging. For example, Stripe and Tempo’s MPP marketplace aggregates over 60 services designed specifically for AI Agents. In its first week, it processed over 34,000 transactions with fees as low as $0.003, with stablecoins as a default payment method.

What’s different is how these services are accessed: there are no checkout pages. Agents read schemas, send requests, pay, and receive outputs, all in a single exchange. This represents a new class of headless merchants: just a server, a set of endpoints, and a price per call. No front-end interface, no sales team.

The payment rails enabling this are already live. Coinbase’s x402 and MPP embed payments directly into HTTP requests, each with a different approach. Visa is also expanding its card rails in a similar direction, providing CLI tools that let developers spend from the terminal while merchants receive stablecoins instantly on the backend.
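The shape of such a payment-in-HTTP handshake can be sketched as follows: the first request comes back as HTTP 402 with machine-readable payment terms, the Agent attaches a payment payload and retries. The endpoint, header name, terms format, and merchant address below are simplifications for illustration, not the exact x402 wire format, and the network layer is mocked as a plain function.

```python
# Hypothetical 402-style handshake; a real flow would settle payment on-chain
# via a facilitator before the server returns 200.

PRICE_USDC = 0.003  # per-call price, in the range the marketplaces above charge

def server(headers: dict) -> tuple[int, dict]:
    payment = headers.get("X-PAYMENT")
    if payment is None:
        # 402 Payment Required, with terms the Agent can parse programmatically
        return 402, {"accepts": [{"asset": "USDC", "amount": PRICE_USDC,
                                  "payTo": "merchant-address"}]}
    return 200, {"result": "scraped-page-content"}

def agent_call() -> dict:
    status, body = server({})                  # 1. bare request
    assert status == 402
    terms = body["accepts"][0]                 # 2. read the payment terms
    payment = {"asset": terms["asset"],        # 3. construct/sign the payment
               "amount": terms["amount"]}
    status, body = server({"X-PAYMENT": payment})   # 4. retry with payment
    assert status == 200
    return body                                # the whole purchase is one exchange pair
```

No checkout page, no merchant agreement: the price discovery, payment, and delivery all happen inside the request/response cycle, which is what makes headless merchants viable.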

The data is still early-stage. After filtering out non-organic activity such as fraud, x402 processes about $1.6 million per month in Agent-driven payments, far below the $24 million figure in Bloomberg’s recent report (based on x402.org data). But the surrounding infrastructure is expanding rapidly: Stripe, Cloudflare, Vercel, and Google have all integrated x402 into their platforms.

Developer tools represent a major opportunity. As “vibe coding” broadens the pool of people able to build software, the total addressable market for developer tools grows. Companies like Merit Systems are building products for this world, such as AgentCash—a CLI wallet and marketplace connecting MPP and x402. These products enable Agents to use a single stablecoin balance to purchase data, tools, and capabilities. For example, a sales Agent can call an endpoint while simultaneously fetching data from Apollo, Google Maps, and Whitepages to enrich leads, all from the command line.

This Agent-to-Agent commerce tends to favor cryptographic payment rails (and emerging card solutions) for several reasons. First, underwriting risk: traditional payment processors require risk assessment for merchants, but a headless merchant without a website or legal entity is hard to underwrite. Second, stablecoins on open networks are permissionless and programmable: any developer can enable an endpoint to accept payments without involving a payment processor or merchant agreement.

We’ve seen this pattern before. Every shift in business models creates a new class of merchants that existing systems struggle to serve initially. Companies building this infrastructure are betting not on the current $1.6 million per month, but on what happens when Agents become the default buyers—how big that number can grow.

Repricing Trust in the Agent Economy

Over the past 300,000 years, human cognition has been the bottleneck for progress. Today, AI is pushing the marginal cost of execution toward zero. When scarce resources become abundant, the constraints shift. When intelligence becomes cheap, what becomes expensive? The answer is verification.

In the Agent economy, the real scalability limit is our biological capacity for auditing and underwriting machine decision-making. Agents’ throughput already far exceeds what humans can oversee. Because supervision is costly and failure detection lags, markets tend to underinvest in oversight. Keeping “humans in the loop” is rapidly becoming physically impossible.

But deploying unverified Agents introduces compound risks. Systems will relentlessly optimize proxy metrics while quietly diverging from human intent, creating hollow productivity illusions and accumulating massive AI debt. To safely delegate the economy to machines, trust can no longer rely solely on manual checks—trust must be embedded into the system architecture itself.

When anyone can generate content for free, the most critical aspect is verifiable provenance—knowing where it came from and whether it can be trusted. Blockchain, on-chain proofs, and decentralized digital identity systems are changing what can be securely deployed at scale. You no longer treat AI as a black box but gain clear, auditable histories.

As more AI Agents begin to transact with each other, settlement rails and provenance proofs are becoming tightly integrated. Systems handling funds (like stablecoins and smart contracts) can carry cryptographic credentials showing who did what and who is responsible if issues arise.

Humans’ comparative advantage will shift upward: from catching small errors to setting strategic direction and bearing responsibility when things go wrong. Lasting advantage belongs to those who can cryptographically authenticate outputs, insure them, and absorb responsibility for failures.

Unverified scale is a liability that accumulates over time.

Maintaining User Control

Decades of new abstraction layers have defined how users interact with technology. Programming languages abstracted away machine code; GUIs replaced command lines, followed by mobile apps and APIs. Each shift concealed more underlying complexity but kept users firmly in control of the loop.

In the Agent world, users specify outcomes rather than specific actions, and the system decides how to achieve them. Agents abstract away not only task execution but also who performs it. Users set initial parameters and then step back, letting the system run autonomously. The user’s role shifts from interaction to supervision; by default, the system is “on” unless the user intervenes.

As users delegate more tasks to Agents, new risks emerge: ambiguous inputs may cause Agents to act on incorrect assumptions without user awareness; failures may go unreported, making diagnosis difficult; a single approval could trigger complex multi-step workflows unexpectedly.

This is where cryptography can help. Cryptographic techniques have long aimed to minimize blind trust. As users entrust more decisions to software, Agent systems make this problem more acute and demand more rigorous design—by setting clearer constraints, increasing visibility, and enforcing stronger guarantees about system capabilities.

Emerging cryptographic-native tools are addressing this. Scope delegation frameworks—like MetaMask’s Delegation Toolkit, Coinbase’s AgentKit and Agent Wallet, and Merit Systems’ AgentCash—allow users to define what Agents can and cannot do at the smart contract level. Intent-based architectures (such as NEAR Intents, which has processed over $15 billion in DEX volume since Q4 2024) enable users to specify desired outcomes (e.g., “bridge tokens and stake”) without detailing how to achieve them.
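The kind of bound these delegation frameworks enforce at the smart-contract level can be illustrated in plain Python: the user pre-declares which methods the Agent may call and a total spend cap, and every action is checked against that policy before it executes. The policy fields and method names here are hypothetical, not the API of any of the toolkits named above.

```python
# Illustrative scope-delegation check; real toolkits enforce this on-chain.
from dataclasses import dataclass

@dataclass
class Delegation:
    allowed_methods: set[str]   # what the Agent may call
    spend_cap_usdc: float       # total it may spend
    spent_usdc: float = 0.0

    def authorize(self, method: str, amount_usdc: float) -> bool:
        """Reject anything outside the user-defined bounds."""
        if method not in self.allowed_methods:
            return False
        if self.spent_usdc + amount_usdc > self.spend_cap_usdc:
            return False
        self.spent_usdc += amount_usdc
        return True

d = Delegation(allowed_methods={"bridge", "stake"}, spend_cap_usdc=100.0)
assert d.authorize("stake", 40.0)        # within scope and budget
assert not d.authorize("transfer", 1.0)  # method was never delegated
assert not d.authorize("bridge", 70.0)   # would exceed the spend cap
```

The point is that the constraint lives outside the Agent: even a misbehaving or manipulated model cannot exceed what the user signed off on.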
