Is AI pricing emerging from the "locked room"? Bittensor provides the answer

Author: Prathik Desai Source: TokenDispatch Translation: Shan Ouba, Jinse Caijing

The AI industry today operates much like a closed priesthood: funding and valuation happen behind closed doors. A few leading companies raise massive rounds, recruit top researchers, and rent large-scale computing clusters, while the market can only infer their value from funding announcements made months later. The so-called “valuation” is often just a number agreed upon by a few people in a room, not a price discovered through open trading. By the time ordinary investors see that price, most of the upside has already been divided among early participants.

Bittensor’s core proposition is: AI shouldn’t be financed this way. I am deeply fascinated by the system it is building. Not because it can produce models better than OpenAI, Anthropic, or Google—at least not yet—but because it has found a decentralized path that allows for public evaluation, funding, and pricing of AI projects before they grow into traditional companies.

This model is fundamentally different from many previous attempts at decentralization during past AI booms.

Bittensor’s subnet system continuously funds teams, rewards efficient performers, eliminates lagging projects, and re-prices the entire AI ecosystem in real time. This is an unprecedented way of pricing AI. I admit this way of building AI is brutal, but it is also more honest.

In this in-depth analysis, I will break down how Bittensor works and why it might be more advantageous than any previous AI pricing attempts.

The Closed Room of AI Pricing

In the first quarter of 2025 alone, AI startups raised $73.1 billion, accounting for 58% of global venture capital. Despite warnings from institutions like GIC and TPG that valuations in some segments are excessively high, there is little operational performance to support these valuations.

This model benefits founders, insiders, and late-stage investors but excludes others: providers of key computing resources, open-source model developers, and early ordinary users cannot share in the dividends. Even the rise of open-source AI hasn’t changed this situation; funding remains concentrated in cloud service contracts, deployment layers, enterprise packaging, technical support, security, and distribution.

Throughout the value creation process, the public contributes widely, but the benefits are only reaped by a few. Although this pattern has existed for a long time, real change is coming with the rise of open-source AI economic models.

Red Hat developers report that companies are increasingly deploying open-source AI models locally for autonomous control and specialized tasks, especially in highly regulated industries like telecommunications and banking. What companies need are AI deployment solutions for monitoring, automation, and scaling, not just a single AI model.

Large institutions like McKinsey also recognize this trend. Their research shows that over half of surveyed companies have fully integrated open-source AI into their tech stacks. The survey covered 41 countries, with over 700 technical leaders and senior developers.

Bittensor’s model is precisely a response to these industry changes, challenging the current AI project valuation system.

Crypto-native investors are currently obsessed with Bittensor’s native token TAO, which has doubled in price over the past month. Others are passionately debating the pros and cons of decentralized AI versus centralized AI. But for me, the more important pursuit is exploring more precise AI valuation methods. Bittensor’s answer is: bring all parties involved in funding, development, validation, and usage into the same market, and price AI based on transparent metrics.

Bringing AI to the Public Market

If you see Bittensor as a network of multiple micro AI economies rather than a single token, it becomes easier to understand.

Each subnet is a specialized market within the AI tech stack, focusing on reasoning, distributed training, prediction signals, or computational power supply. Subnet creators set incentive mechanisms and target tasks, miners perform tasks, validators score results, and stakers can support specific validators by staking TAO.
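The incentive loop described above can be sketched in a few lines. Everything here is invented for illustration: the miner names, the scoring rule, and the reward split are hypothetical stand-ins, not Bittensor's actual protocol code.

```python
# Toy sketch of a subnet's incentive loop: miners submit results,
# a validator scores them against the subnet's task, and rewards
# are distributed in proportion to score. All names are hypothetical.

miner_outputs = {"miner_1": "answer_a", "miner_2": "answer_b"}

def validator_score(output: str) -> float:
    # Stand-in for the subnet's scoring rule; a real subnet would
    # evaluate model outputs against its target task.
    return 1.0 if output == "answer_a" else 0.3

scores = {m: validator_score(o) for m, o in miner_outputs.items()}
total = sum(scores.values())
rewards = {m: s / total for m, s in scores.items()}  # normalized reward shares
```

The point of the sketch is the shape of the loop, not the numbers: whoever scores best under the validator's rule captures the largest share of the subnet's emission.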

After Bittensor launched its Dynamic TAO upgrade in February 2025, the incentive mechanism became more granular: each subnet now has its own token and liquidity pool. Bittensor is no longer a single generalized AI investment target but an ecosystem hosting many small AI projects.

In the second half of 2025, Bittensor shifted to allocating rewards based more on TAO net inflows than on raw token prices. In December 2025, TAO underwent its first halving, reducing daily issuance to 3,600 tokens, further pushing capital to allocate efficiently and turning the AI market into a survival-of-the-fittest arena.
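The halving arithmetic follows a Bitcoin-style schedule. The sketch below assumes a pre-halving issuance of 7,200 TAO per day, which is consistent with the article's 3,600 post-halving figure but is my assumption, not something the article states.

```python
# Bitcoin-style halving schedule sketch. The 7,200 TAO/day starting
# figure is an assumption inferred from the post-halving 3,600/day
# number cited in the article.

def daily_emission(initial_daily: float, halvings: int) -> float:
    """Daily token issuance after a given number of halving events."""
    return initial_daily / (2 ** halvings)

print(daily_emission(7200, 0))  # before the first halving -> 7200.0
print(daily_emission(7200, 1))  # after the December 2025 halving -> 3600.0
print(daily_emission(7200, 2))  # after a hypothetical second halving -> 1800.0
```

Each halving cuts the daily reward pool in half, so the same competitive pressure is applied to a shrinking pie.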

Web3 researcher and writer Jeff summarized this as a “Darwinian dynamic mechanism for AI,” with a brilliant summary in his 0xJeff newsletter:

The core of Darwinism is natural selection: individuals compete, and advantageous traits are passed down through generations.

This logic manifests at multiple levels within Bittensor:

  • Subnet competition: subnets vie for their share of the 3,600 TAO daily incentives; top-performing subnets gain longer survival.

  • Miner competition: miners compete to produce the best results; global participants compare performance based on key metrics, with top miners earning 41% of the subnet alpha token rewards.

  • Validator and investor competition: validators verify miner tasks; investors compete to bet on the best-performing subnets.

What happens if you don’t participate in the competition or perform poorly? You get eliminated. Subnets can be directly removed (yes, deleted from the system).
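The Darwinian dynamic above can be sketched as a toy allocation rule: split the daily emission across subnets in proportion to a performance signal (here, net TAO inflow) and deregister the weakest. The subnet names, inflow numbers, and elimination rule are all hypothetical; this is an illustration of the dynamic, not Bittensor's consensus code.

```python
# Toy model of the "Darwinian" dynamic: emissions follow a performance
# signal, and the weakest subnet is eliminated. Names and figures are
# hypothetical.

DAILY_EMISSION = 3600.0  # TAO per day after the first halving

subnets = {"sn_a": 900.0, "sn_b": 450.0, "sn_c": 50.0}  # net TAO inflow

def allocate(inflows: dict[str, float], emission: float) -> dict[str, float]:
    """Split the daily emission proportionally to each subnet's inflow."""
    total = sum(inflows.values())
    return {name: emission * flow / total for name, flow in inflows.items()}

def eliminate_weakest(inflows: dict[str, float]) -> dict[str, float]:
    """Deregister the subnet with the lowest inflow."""
    weakest = min(inflows, key=inflows.get)
    return {k: v for k, v in inflows.items() if k != weakest}

rewards = allocate(subnets, DAILY_EMISSION)
survivors = eliminate_weakest(subnets)
```

In this toy run, the high-inflow subnet captures the bulk of the emission while the laggard is removed entirely, which is the compounding advantage-and-elimination loop the newsletter describes.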

This is the key difference from traditional AI models.

In traditional models, founders pitch their companies, raise equity, build teams, and hope the market recognizes their valuation.

Bittensor disrupts this pattern by openly revealing investment projects early in the market. In this model, entrepreneurs first launch subnets, then GPU operators contribute computing resources. Developers and researchers contribute work, investors buy stakes via TAO or specific subnet tokens, and finally, customers pay for underlying services. The market then considers all factors to price the project holistically.

What I love most is that it reimagines the capital market for all stakeholders.

Unlike private startups, investors can continuously discover prices without waiting for the next funding round. In fact, Bittensor allows them to view the entire ecosystem via the TAO platform or focus on their favorite subnets for more precise investment.

For developers, the appeal is that they can participate in AI development without relying solely on elite data centers like Anthropic or OpenAI.

It provides entrepreneurs with a capital market centered on their ideas, even before those ideas mature into full-fledged companies—something unprecedented in venture capital. This is evident in how capital concentrates within the network: a few subnets attract disproportionate TAO inflows and outflows, while others lag behind. The top five subnets by market cap account for nearly one-third of total subnet market value.

For customers, this system offers cheaper, more flexible access to open infrastructure.

Moreover, Bittensor’s model is more attractive to all stakeholders because it sounds fairer and is more commercially viable.

Market Maturity

Institutional investors are increasingly viewing Bittensor as a compliant, investable asset, and this trend is clear.

In December 2025, Grayscale’s Bittensor Trust was listed on OTCQX, a top-tier over-the-counter market, giving traditional investors a familiar channel into an unfamiliar but in-demand asset.

A sign of market maturity is having compliant packaging, ticker symbols, screen quotes, and broker access—as with Bitcoin and Ethereum ETFs and digital asset treasuries (DATs). While Bittensor may not yet be as well known as Bitcoin or Ethereum even in crypto circles, the launch of the Grayscale Trust shows institutional interest has shifted from theory to tangible products.

Bittensor’s work has even gained recognition from top industry figures it might disrupt.

When renowned venture capitalist and entrepreneur Chamath Palihapitiya mentioned Bittensor’s distributed training runs to NVIDIA CEO Jensen Huang, Huang did not dismiss them as a crypto gimmick. Instead, he called the effort “a modern version of Folding@home,” referring to the distributed-computing project that pools volunteers’ spare computing power to simulate protein folding and other complex problems.

This positioning places Bittensor within the long history of distributed computing, rather than just token cycles.

One of its top subnets, Templar, recently demonstrated its technical capability: its Covenant-72B model, with 72 billion parameters, was trained from scratch on 1.1 trillion tokens by over 20 globally distributed participants coordinated via Bittensor. In public benchmarks, Covenant-72B scored 67.11 on MMLU, surpassing LLaMA-2-70B’s 65.63.

Simply put, it still can’t outperform OpenAI or Anthropic, but it proves that decentralized collaboration can build AI infrastructure with commercial value.

Subnets like Chutes are explicitly positioned as decentralized, serverless AI compute platforms, and Bittensor’s official documentation defines subnets as independent markets for inference, training, and other digital goods. This indicates that the market is not pricing vague AI narratives but is instead pricing specific modules within the tech stack.

Demand-side Dilemmas

Bittensor’s supply-side transparency far exceeds that of any other AI market: issuance, staking flows, subnet capital aggregation, and other data are all clear at a glance. The real issue lies on the demand side, where information remains opaque.

Blockchains only record token transfers but do not collect data on user retention, API usage quality, profit margins, or audited revenues. Even if a subnet appears commercially thriving, investors often can only infer business quality from market structure rather than financial statements.

Pine Analytics, in analyses titled “Supply Transparency vs. Demand Opacity” and “Chutes (SN64): Subsidy-Supported Low Prices,” argues pointedly that some of Bittensor’s impressive commercial metrics may still be subsidy-driven; the subsidies are, in essence, TAO issuance rewards paid within subnets. Pine estimates that the network’s confirmed external revenue remains negligible compared to TAO’s implied valuation.

A prime example is Bittensor’s largest subnet, Chutes: it receives $52 million annually in TAO issuance subsidies, but its external revenue is only $2.4 million. Without subsidies, its operations would be financially unsustainable. This doesn’t negate Bittensor’s model but highlights that the market is currently pricing AI visions rather than cash flows.
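The scale of that gap is worth making explicit. A back-of-the-envelope check using only the two figures cited above:

```python
# Sanity check on the Chutes figures cited in the text:
# ~$52M/year in TAO issuance subsidies vs ~$2.4M external revenue.

annual_subsidy = 52_000_000   # USD, Pine Analytics estimate
external_revenue = 2_400_000  # USD, Pine Analytics estimate

subsidy_ratio = annual_subsidy / external_revenue
revenue_share = external_revenue / (annual_subsidy + external_revenue)

print(f"Subsidy is {subsidy_ratio:.1f}x external revenue")     # ~21.7x
print(f"Revenue covers {revenue_share:.1%} of total inflows")  # ~4.4%
```

In other words, on these numbers, paying customers account for under five percent of what flows into the subnet; the rest is token issuance.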

Because of this, I pay close attention to Bittensor’s development. It shows all the signs of a maturing ecosystem, even though it hasn’t settled the debate over decentralized AI. It is still refining the most accurate way to value AI projects, but it has already moved a long-neglected question—how AI conviction and value get priced—into the open market.

While private AI giants ask the world to trust a few people in a room to determine multi-trillion-dollar valuations, Bittensor chooses to trust the open market. I know the latter isn’t perfect, but I appreciate and endorse the transparency it brings.
