A16z: 5 Ways Blockchain Supports Infrastructure for AI Agents

Author: a16z

Translated by: Hu Tao, ChainCatcher

AI agents are quickly shifting from "co-pilots" to full economic participants, faster than the surrounding infrastructure can keep up.

Although agents can now carry out tasks and execute trades, they lack standardized ways to prove their identity, permissions, and how they are compensated across environments. Identity information can’t be shared across platforms, default programmable payment methods are not yet in place, and coordination work is carried out in silos.

Blockchain solves this problem at the infrastructure layer. Public ledgers provide receipts for every transaction, so anyone can audit. Wallets give users portable identity information. Stablecoins provide an alternative settlement layer. These are not far-off future technologies. They are available now, and they let agents operate as true economic actors, permissionlessly.

  1. Non-human identity

The bottleneck in the agent economy is no longer intelligence—it’s identity.

Even within the financial services industry alone, the number of non-human identities (automated trading systems, risk engines, fraud models) is already about 100x higher than the number of human employees. As modern agent frameworks (LLMs that use tools, autonomous workflows, multi-agent orchestration) are deployed at scale, this ratio is sure to keep rising across all industries.

However, these agents still don’t have bank accounts in practice. They can interact with financial systems, but the way they interact lacks portability and verifiability, and it isn’t trusted by default. They lack standardized methods for proving permissions, can’t operate independently across platforms, and can’t be held accountable for their actions.

What’s missing is a universal identity layer—equivalent to an SSL protocol for agents—that standardizes coordination across platforms. Although there are notable attempts today, the approaches remain fragmented: one side is a vertically integrated, fiat-first stack; the other is crypto-native, open standards (such as x402 and emerging agent identity proposals); and there are also developer frameworks like MCP (Model Context Protocol) extensions that try to bridge identity at the application layer.

At present, there still isn’t a widely adopted, interoperable way for one agent to prove to another: who it represents, what it is allowed to do, and how it gets paid. This is the core idea behind KYA (Know Your Agent).

Just as humans rely on credit history and KYC (Know Your Customer), agents also need cryptographically signed credentials that bind an agent to its delegators, permissions, constraints, and reputation. Blockchain provides a neutral coordination layer for all of this: portable identities, programmable wallets, and verifiable proofs that can be parsed in chat apps, APIs, and marketplaces.
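To make the KYA idea concrete, here is a minimal sketch of such a credential: a signed record binding an agent to its delegator, permissions, spending constraints, and expiry, checked before any action. All field names are illustrative assumptions, not part of any actual standard, and HMAC stands in for the asymmetric signatures a real system would use.

```python
import hashlib
import hmac
import json
import time

# Placeholder for the delegator's signing key; a real KYA scheme would use
# an asymmetric keypair (e.g. a wallet key), not a shared secret.
DELEGATOR_KEY = b"delegator-secret"

def issue_credential(agent_id, delegator, permissions, spend_limit_usd, ttl_s=3600):
    """Bind an agent to its delegator, permissions, and constraints."""
    claims = {
        "agent": agent_id,
        "delegator": delegator,
        "permissions": sorted(permissions),
        "spend_limit_usd": spend_limit_usd,
        "expires_at": int(time.time()) + ttl_s,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(DELEGATOR_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(cred, required_permission, amount_usd):
    """Check the signature, then check the requested action against the scope."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(DELEGATOR_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cred["sig"]):
        return False  # tampered, or not issued by this delegator
    c = cred["claims"]
    return (
        time.time() < c["expires_at"]
        and required_permission in c["permissions"]
        and amount_usd <= c["spend_limit_usd"]
    )
```

The key property is that permissions and limits travel with the agent: any counterparty holding the delegator's public key (here, the shared secret) can verify the scope without contacting the issuing platform.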

We’ve already started to see early implementations: on-chain agent registries, native agents in wallets using USDC, an ERC standard for “trust-minimized agents,” and developer toolkits that combine identity with embedded payments and fraud controls.

But until a universal identity standard exists, merchants will still block agents at the firewall.

  2. Governance of AI operating systems

Agent operators are beginning to run real systems—raising some new questions.

The key issue is: who truly controls everything? Imagine a community or a company where an AI system coordinates critical resources, whether it’s allocating funds or managing the supply chain. Even if people vote to decide policy changes, if the underlying AI layer is controlled by a single vendor—one that can push model updates, adjust constraints, or overturn decisions—then that power is extremely fragile. Formal governance layers might be decentralized, but the operating layer remains centralized; whoever controls the models ultimately controls the outcomes.

When agents take on governance roles, they introduce a new dependency layer. In theory, this could make direct democracy easier to implement: everyone could have an AI representative responsible for understanding complex proposals, weighing pros and cons, and voting according to the preferences they declare.

But this vision can only work if these agents are truly accountable to the people they represent, can operate across different service providers, and are technically limited to following human instructions. Otherwise, the resulting system may appear democratic on the surface, but it would in reality be driven by opaque model behavior—behavior that nobody can actually control.

If the current reality is that agents are built from a small number of foundational models, then we need some way to prove that agent behavior aligns with users’ interests, not the interests of model companies. This may require multi-layer cryptographic guarantees: (1) which training data, fine-tuning process, or reinforcement learning process actually produced a model instance; (2) the exact prompts and instructions that control a specific agent; (3) records of what the agent actually does in the real world; and (4) reliable assurances that, once deployed, providers cannot change instructions or retrain the agent—so it can’t run in ways users don’t know about. Without these guarantees, governance of agents will ultimately devolve into governance by whoever controls the model weights.
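Point (3) above, a trustworthy record of what the agent actually does, can be sketched as a hash-chained action log: each entry commits to the one before it, so editing any past entry invalidates every later hash. This is an illustrative stand-in for the role on-chain attestations play, not any specific system's format.

```python
import hashlib
import json

def append_entry(log, action):
    """Append an action, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"action": action, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return log

def verify_log(log):
    """Recompute every hash; any retroactive edit breaks the chain."""
    prev = "0" * 64
    for entry in log:
        if entry["prev"] != prev:
            return False
        body = {"action": entry["action"], "prev": entry["prev"]}
        if entry["hash"] != hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest():
            return False
        prev = entry["hash"]
    return True
```

Anchoring the latest hash on a public ledger turns this local structure into the kind of auditable execution record the paragraph above calls for.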

This is where cryptocurrencies come into play. If collective decisions are recorded on-chain and executed automatically, AI systems can be required to carry out verified outcomes. If agents have cryptographic identities and transparent execution logs, people can check whether their agents follow the rules. And if the AI layer is owned by users and is portable—not locked into a single platform—then no company can change the rules unilaterally through model updates.

In the end, governance of AI systems is really an infrastructure challenge, not a policy challenge. Real authority depends on building enforceable guarantees into the system itself.

  3. Filling the gap of traditional payments for AI-native enterprises

AI agents are starting to buy things—web scraping, browser sessions, image generation—and stablecoins are becoming the alternative settlement layer for these transactions. At the same time, a new class of agent-focused marketplaces is taking shape. For example, Stripe and Tempo’s MPP marketplace aggregates 60+ services specifically designed for AI agents. In its first week after launch, it processed more than 34,000 transactions, with fees as low as 0.003 USD, and stablecoins are one of the default payment methods.

The difference lies in how these services are accessed. There is no checkout page. Agents read the schema, send requests, pay, and receive outputs in a single exchange. They represent a new kind of “headless” merchant: one server, a set of endpoints, and a price per call. No front end—no storefront or sales team.

The payment rails to make this possible are already live. Coinbase’s x402 and MPP take different approaches, but both embed payments directly into HTTP requests. Visa is also expanding in a similar direction, offering a CLI tool that lets developers spend from the terminal, while merchants receive stablecoins instantly on the backend.
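The pattern these rails share can be sketched roughly as follows. This is a hedged illustration of an HTTP-native pay-per-call exchange, not the actual x402 or MPP specification: the header name, endpoint, and payment object are invented for the example.

```python
# A "headless merchant": one endpoint, a price per call, no storefront.
# The real x402 protocol defines its own headers and payment schemas;
# these names are illustrative assumptions.

PRICE_USD = 0.003  # per-call price, echoing the marketplace fees cited above

def verify_payment(proof):
    # Placeholder: a real facilitator would verify a signed stablecoin
    # transfer on-chain before releasing the response.
    return proof.get("amount_usd", 0) >= PRICE_USD and proof.get("asset") == "USDC"

def server(request_headers):
    proof = request_headers.get("X-Payment")
    if proof is None:
        # First response: HTTP 402 plus machine-readable payment terms.
        return 402, {"price_usd": PRICE_USD, "asset": "USDC", "pay_to": "0xMERCHANT"}
    if verify_payment(proof):
        return 200, {"result": "enriched-lead-data"}
    return 402, {"error": "invalid payment"}

def agent_call():
    """The agent reads the terms, pays, and retries, all in one exchange."""
    status, terms = server({})
    assert status == 402
    payment = {"amount_usd": terms["price_usd"], "asset": terms["asset"]}
    return server({"X-Payment": payment})
```

Note there is no checkout page anywhere in the loop: the 402 response itself is the price list, and the retried request is the purchase.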

Current data is still early-stage. After filtering out non-organic activity such as wash trading, x402 processes about 1.6 million USD per month in agent-driven payments—far below Bloomberg’s recent report of 24 million USD (citing x402.org data). But the surrounding infrastructure is expanding rapidly: Stripe, Cloudflare, Vercel, and Google have all integrated x402 into their platforms.

The developer tools space holds huge opportunities. The rise of vibe coding is expanding the base of software developers, and with it the potential market for developer tools. Companies like Merit Systems are building for this future: their AgentCash is a CLI wallet and marketplace platform that connects to the MPP and x402 protocols. These products let agents use stablecoins from a single account to buy the data, tools, and capabilities they need. For example, a sales team's agents can call a single endpoint to enrich lead information with data from Apollo, Google Maps, and Whitepages, without ever leaving the command line.

The reason this agent-to-agent business tends to favor crypto payments (and emerging card-based solutions) comes down to several factors. First is underwriting: when a payment processor plugs into a merchant, it takes on the merchant’s risk. A headless merchant with no website or legal entity is difficult for traditional processors to underwrite. Second, stablecoins are programmable on open networks: any developer can make an endpoint support payments without integrating a payment processor or signing merchant agreements.

We’ve seen this kind of model before. Every shift in business models creates a new set of merchants, and existing systems struggle to serve them at first. Companies building this infrastructure aren’t betting on monthly revenue of 1.6 million USD—they’re betting on what revenue could look like when agents become the default buyers.

  4. Repricing trust in the agent economy

For 300,000 years, human cognition has been the bottleneck limiting progress. Today, AI is pushing the marginal cost of execution toward zero. When scarce resources become abundant, the constraint shifts. When intelligence becomes cheap, what becomes expensive? Verification.

In the agent economy, the real limit to scaling is human bandwidth: our capacity to audit and evaluate machine decisions. Agent throughput has already far outstripped the capacity of human oversight. Because oversight is costly and failures take time to surface, markets tend to underinvest in supervision. "Human-AI collaboration" is quickly becoming an impossible ideal rather than a working reality.

But deploying unverified agents introduces compounding risks. Systems will relentlessly optimize proxy metrics while subtly diverging from human intent, creating an illusion of productivity that masks a growing accumulation of AI debt. To safely delegate economic activity to machines, trust can no longer rely on human review alone; it must be hard-coded into the architecture itself.

When anyone can generate content for free, the most important thing becomes verifiable provenance—knowing where the content comes from and whether it is trustworthy. Blockchain, along with on-chain attestations and decentralized digital identity systems, changes the economic boundaries of secure deployment. AI is no longer seen as a black box, but as a system with a clear, auditable history.

As more AI agents begin trading with each other, settlement mechanisms and provenance systems become inseparable. Money transfer systems—such as stablecoins and smart contracts—can also carry cryptographic receipts, recording who did what and who should be held responsible when problems arise.

Human comparative advantage keeps moving up the stack: from catching small errors, to setting strategic direction, to taking responsibility when problems occur. The enduring advantage belongs to those who can cryptographically certify outputs, insure them, and own the failures.

Scaling without verification is a risk that compounds over time.

  5. Preserving user control

For decades, layered abstractions have repeatedly changed how users interact with technology. Programming languages abstracted machine code. Command-line interfaces were replaced by graphical user interfaces, which later evolved into mobile applications and application programming interfaces (APIs). Each transformation hid more underlying complexity while keeping users in control of the big picture.

In the agent world, users specify results rather than actions, and the system decides how to achieve those results. Agents don’t just abstract how tasks are completed—they also abstract the executors of tasks. After users set initial parameters, they step back and let the system run on its own. The user’s role shifts from interaction to supervision; unless users intervene, the system defaults to an “on” state.

As users delegate more tasks to agents, new risks emerge: ambiguous inputs may cause agents to take actions based on incorrect assumptions without the user’s knowledge; failures may not be reported, leaving no clear diagnostic path; and a single approval could trigger a multi-step workflow that nobody anticipated.

Cryptography is what makes a difference here. The core purpose of cryptography is always to minimize blind trust as much as possible. As users hand more decision-making power to software, agent systems make this problem even more pronounced—and raise our requirements for rigorous system design. We need to set clearer boundaries, improve transparency, and provide stronger guarantees about what these systems can do.

To address this challenge, a new generation of crypto-native tools has emerged: MetaMask's Delegation Toolkit, Coinbase's AgentKit and agent wallets, and Merit Systems' AgentCash are scope-based delegation frameworks that let users define, at the smart-contract layer, which actions agents can and cannot perform. And intent-based architectures like NEAR Intents (which has settled more than 15 billion USD in cumulative DEX volume since Q4 2024) let users specify the desired outcome, for example "bridge tokens and stake," without specifying the exact implementation.
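The core mechanic of scope-based delegation can be sketched in a few lines: the user defines which actions an agent may take and within what limits, and every proposed action is checked against that scope before execution. This is an illustrative model, not the API of any of the toolkits named above, which enforce these checks at the smart-contract layer.

```python
# Hypothetical sketch of scope-based delegation: names and limits are
# invented for illustration, not drawn from any real toolkit's API.

class DelegationScope:
    def __init__(self, allowed_actions, max_spend_usd, max_calls):
        self.allowed_actions = set(allowed_actions)
        self.max_spend_usd = max_spend_usd
        self.max_calls = max_calls
        self.spent = 0.0
        self.calls = 0

    def authorize(self, action, cost_usd):
        """Approve the action only if it stays inside the delegated scope."""
        if action not in self.allowed_actions:
            return False
        if self.calls + 1 > self.max_calls:
            return False
        if self.spent + cost_usd > self.max_spend_usd:
            return False
        self.calls += 1
        self.spent += cost_usd
        return True

# The user grants a narrow scope once; the agent operates freely inside it.
scope = DelegationScope({"swap", "stake"}, max_spend_usd=100.0, max_calls=3)
```

The design point is that a single approval grants a bounded envelope rather than open-ended authority, which directly addresses the "one approval triggers an unanticipated multi-step workflow" risk described earlier.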

***

AI makes scaling low-cost, but it’s difficult to establish trust. Cryptocurrency can rebuild trust at scale.

The internet infrastructure in which agents participate directly in economic activity is being built right now. The question is whether it will be designed for maximum transparency, accountability, and user control, or built on systems that were never meant for non-human actors.
