Dolphin is a decentralized AI inference network that merges AI with DePIN, designed to build open AI infrastructure by leveraging idle GPU resources worldwide. As demand for computing power from large language models (LLMs) and AI Agents continues to rise, the high costs and resource concentration of traditional centralized cloud platforms have become more apparent. Dolphin aims to lower the barrier to AI inference through distributed GPU collaboration, increasing network openness and resistance to censorship.
In today's Web3 AI infrastructure landscape, Dolphin combines features of AI, DePIN, and distributed inference networks. Its flagship product, Dolphin Network, lets GPU holders contribute computing power during idle periods to process AI requests and earn token rewards. Developers can tap into the network's inference capabilities without depending entirely on conventional cloud computing platforms.
At its core, Dolphin focuses on AI model development and distributed inference, with the primary goal of building an open, decentralized AI inference network. Its main product, Dolphin Network, aggregates global GPU resources to deliver distributed inference services for AI models, using cryptoeconomic mechanisms to coordinate the relationship between nodes and users.

Dolphin is not a traditional AI chat application, but rather foundational AI infrastructure. The project aims to give developers easier access to AI inference while reducing dependence on single, centralized cloud platforms. Long-term ambitions include open model deployment, a distributed inference marketplace, and a more autonomous AI infrastructure ecosystem.
At the token level, POD is the project's ticker on trading platforms and the core token of the ecosystem, used primarily for inference payments, node incentives, and sustaining the network's economic cycle.
The core logic behind Dolphin Network is to distribute AI inference tasks across decentralized GPU nodes. When developers or applications submit inference requests, the network splits these tasks, assigns them to available nodes, and verifies the validity of the returned results before accepting them.
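To make the flow concrete, here is a minimal sketch of such a dispatcher, assuming each request is replicated to several idle nodes so their results can be cross-checked later. The names (`Node`, `InferenceRequest`, `dispatch`) and the replication factor are illustrative, not Dolphin's actual implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    available: bool  # whether the GPU is currently idle

@dataclass
class InferenceRequest:
    request_id: str
    prompt: str

def dispatch(request: InferenceRequest, nodes: list[Node], replicas: int = 3) -> list[str]:
    """Replicate one request across several idle nodes so their
    results can be cross-checked during verification."""
    idle = [n for n in nodes if n.available]
    if len(idle) < replicas:
        raise RuntimeError("not enough idle GPU nodes for this request")
    return [n.node_id for n in random.sample(idle, replicas)]
```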
GPU holders can run nodes when their devices are idle, participating in inference tasks across the network. Upon completing tasks, nodes receive POD rewards, which can offset GPU costs or be used within the ecosystem.
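On the node side, that lifecycle might look like the loop below. Every function here is a stand-in stub, since Dolphin's client software is not documented in this article; a real client would poll the network's task queue and run models on the local GPU.

```python
import time
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    prompt: str

def gpu_is_idle() -> bool:
    return True  # stub: a real client would check local GPU utilization

def fetch_task() -> Task | None:
    return Task("t-1", "hello")  # stub: would poll the network's task queue

def run_inference(task: Task) -> str:
    return f"result for {task.prompt}"  # stub: would run the model on the GPU

def submit_result(task_id: str, output: str) -> float:
    return 0.01  # stub: would submit the result and return the POD reward

def node_loop(max_tasks: int = 3) -> float:
    """Poll for work while the GPU is idle, run each task,
    and accumulate POD rewards."""
    earned, done = 0.0, 0
    while done < max_tasks:
        if gpu_is_idle():
            task = fetch_task()
            if task is not None:
                earned += submit_result(task.task_id, run_inference(task))
                done += 1
        time.sleep(0.1)  # back off between polls
    return earned
```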
To prevent malicious nodes from submitting incorrect results, Dolphin employs random sampling verification, encryption, and economic staking mechanisms to maintain network integrity. This design is similar to validation in traditional blockchain networks, but the focus shifts from transaction data to AI inference results.
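A hedged sketch of what random-sampling verification with staking could look like follows, assuming each task was replicated to several nodes (as in the dispatch sketch above) and results are compared by hash. The audit rate, slashing penalty, and stake balances are invented for illustration.

```python
import hashlib
import random

# Illustrative stake balances; real amounts and slashing rules are unknown.
stakes: dict[str, float] = {"node-a": 100.0, "node-b": 100.0, "node-c": 100.0}

def result_hash(output: str) -> str:
    return hashlib.sha256(output.encode()).hexdigest()

def audit(results: dict[str, str], sample_rate: float = 0.1) -> None:
    """Randomly audit a fraction of tasks: a node whose result hash
    disagrees with the majority loses part of its stake."""
    if random.random() > sample_rate:
        return  # this task was not selected for auditing
    hashes = {node: result_hash(out) for node, out in results.items()}
    majority = max(set(hashes.values()), key=list(hashes.values()).count)
    for node, h in hashes.items():
        if h != majority:
            stakes[node] *= 0.5  # slash the dissenting node's stake
```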
POD is the core utility token of the Dolphin network, used for AI inference payments, node rewards, staking, and governance.
At the AI service layer, developers use POD to pay for model inference. At the network layer, GPU nodes earn POD for contributing computing power. In some cases, nodes must stake tokens to participate in network validation, enhancing system security.
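These flows reduce to simple accounting, sketched below. The balances, fee rate, and function names are assumptions for illustration; actual POD settlement details are not specified in this article.

```python
# Illustrative POD accounting for one inference payment: the developer
# is debited, the serving node credited, and a hypothetical network
# fee retained. All balances, names, and rates are made up.
balances = {"developer": 1000.0, "node-a": 50.0, "staked:node-a": 0.0, "treasury": 0.0}

def stake(node: str, amount: float) -> None:
    """Lock tokens so a node can participate in validation."""
    balances[node] -= amount
    balances[f"staked:{node}"] += amount

def pay_for_inference(payer: str, node: str, amount: float, fee_rate: float = 0.05) -> None:
    fee = amount * fee_rate
    balances[payer] -= amount
    balances[node] += amount - fee
    balances["treasury"] += fee

stake("node-a", 50.0)  # node locks POD before serving tasks
pay_for_inference("developer", "node-a", 2.0)
```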
POD's design mirrors that of most DePIN projects: token incentives drive real infrastructure growth. As more GPU nodes join, Dolphin's overall inference capacity expands, creating a feedback loop between AI infrastructure and the token economy.
DePIN (Decentralized Physical Infrastructure Network) refers to Web3 networks that use token incentives to coordinate real-world infrastructure resources. Typical DePIN projects include decentralized storage, wireless networks, and GPU networks.
Dolphin's core resource is GPU computing power, placing it firmly in the AI DePIN sector. The project incentivizes GPU holders to share idle resources, turning previously scattered hardware into a unified AI inference network.
Compared to traditional cloud platforms, DePIN emphasizes openness and resource sharing. For instance, gamers or GPU owners can participate in the network without building large data centers. This approach helps reduce AI infrastructure centralization and increases global GPU utilization.
Dolphin’s primary use cases are AI inference and open AI services.
At the AI model level, developers can deploy open-source large models using Dolphin and perform distributed inference through the network. The project also supports chatbot and AI Agent use cases, such as open AI assistants and automated inference applications.
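For illustration only, a request to a Dolphin-style inference endpoint might resemble the snippet below. The URL, payload fields, and model name are hypothetical placeholders, since this article does not document Dolphin's actual API.

```python
import requests

# Placeholder endpoint and fields; Dolphin's real API is not documented here.
resp = requests.post(
    "https://api.example-dolphin.network/v1/inference",
    json={
        "model": "llama-3-8b",  # an open-source model deployed on the network
        "prompt": "Summarize DePIN in one sentence.",
        "max_tokens": 64,
    },
    headers={"Authorization": "Bearer <api-key>"},
    timeout=30,
)
print(resp.json())
```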
Because Dolphin emphasizes openness and control, it is also used in discussions around censorship-resistant AI models and autonomous AI systems. Some Dolphin models allow users to customize system rules, model behaviors, and data controls, rather than relying solely on default policies from centralized AI providers.
Dolphin and Render are both Web3 projects that build infrastructure from distributed GPU resources, so the two are often compared.
However, Dolphin and Render have fundamentally different goals: Render focuses on GPU rendering and digital content generation, while Dolphin is dedicated to building a decentralized AI inference network. They differ significantly in task type, resource scheduling, target users, and network structure.
| Comparison Dimension | Dolphin | Render |
|---|---|---|
| Core Positioning | Decentralized AI Inference Network | Decentralized GPU Rendering Network |
| Main Use Cases | AI Inference, AI Agent, LLM Services | 3D Rendering, Visual Content Creation |
| Core Resource | AI Inference Compute | Graphics Rendering Compute |
| Target Users | AI Developers, AI Applications | Designers, Animation Teams, Creators |
| Sector | AI DePIN | GPU Rendering DePIN |
| Typical Scenarios | AI API, Inference Services, Model Deployment | Blender, OctaneRender, Animation Rendering |
| Open Model Support | Emphasizes Open AI Models | Not Focused on AI Model Openness |
The main difference between Dolphin and traditional AI platforms is in infrastructure and control.
Traditional AI services rely on centralized data centers, with a single platform controlling models, system rules, APIs, and data access. Developers must follow platform restrictions and accept the risk that the platform changes its models or pricing.
Dolphin seeks to reduce this centralization by using a distributed GPU network. Nodes are provided by global users, allowing developers to use more open models and inference environments while maintaining greater data control.
However, this open approach also brings challenges, such as node stability, result validation, network latency, and infrastructure coordination. As a result, decentralized AI networks are still in an early stage of development.
Dolphin’s main advantages are its open GPU network and decentralized AI inference capabilities. Compared to centralized AI platforms, this model can increase GPU utilization and lower the cost of certain AI services.
Open AI networks also offer greater censorship resistance, giving developers more freedom to deploy models and control system behaviors and data strategies.
On the other hand, Dolphin faces practical challenges: performance among distributed GPU nodes can vary, affecting inference stability; AI inference result validation is complex; and the regulatory landscape for open AI models remains uncertain.
Dolphin (POD) is a decentralized AI inference project that combines AI, DePIN, and distributed GPU networks. Its mission is to build open AI infrastructure and incentivize GPU holders worldwide to collaborate through tokens.
As AI model computing demands continue to grow, the concentration of resources in centralized AI cloud platforms is drawing increasing scrutiny. Dolphin’s AI DePIN model seeks to provide new infrastructure solutions for AI inference by leveraging Web3 incentives and open network structures.
Dolphin belongs to both the AI and DePIN sectors, with its core mission focused on delivering AI inference through a distributed GPU network.
GPU holders can run nodes during idle periods, participate in AI inference tasks, and earn token rewards.
Traditional AI platforms depend on centralized data centers, while Dolphin uses a distributed GPU network to provide AI inference services, emphasizing openness and resource sharing.
Some Dolphin models highlight openness and control, allowing users to customize system rules and model behaviors.