New Trends in AI Layer 1 Track: In-depth Analysis of Six Major Projects
AI Layer1 Research Report: Finding the Fertile Ground for on-chain DeAI
Overview
In recent years, leading tech companies such as OpenAI, Anthropic, Google, and Meta have driven the rapid development of large language models (LLMs). LLMs have demonstrated unprecedented capabilities across industries, vastly expanding the scope of what seems possible and even showing the potential to replace human labor in certain scenarios. However, the core of these technologies remains firmly in the hands of a few centralized tech giants. With deep capital reserves and control over expensive computing resources, these companies have erected formidable barriers, making it difficult for the vast majority of developers and innovation teams to compete.
At the same time, in the early stages of the rapid evolution of AI, public opinion often focuses on the breakthroughs and conveniences brought by technology, while paying relatively insufficient attention to core issues such as privacy protection, transparency, and security. In the long run, these issues will profoundly affect the healthy development of the AI industry and social acceptance. If not properly addressed, the debate over whether AI is "for good" or "for evil" will become increasingly prominent, while centralized giants, driven by profit-seeking instincts, often lack the sufficient motivation to actively respond to these challenges.
Blockchain technology, with its decentralized, transparent, and censorship-resistant characteristics, provides new possibilities for the sustainable development of the AI industry. Currently, numerous "Web3 AI" applications have emerged on some mainstream blockchains. However, a deeper analysis reveals that these projects still face many issues: on one hand, the degree of decentralization is limited, and key links and infrastructure still rely on centralized cloud services, making it difficult to support a truly open ecosystem; on the other hand, compared to AI products in the Web2 world, on-chain AI still shows limitations in model capabilities, data utilization, and application scenarios, with the depth and breadth of innovation needing improvement.
To truly realize the vision of decentralized AI, enabling the blockchain to securely, efficiently, and democratically support large-scale AI applications, and to compete with centralized solutions in terms of performance, we need to design a Layer 1 blockchain tailored specifically for AI. This will provide a solid foundation for open innovation in AI, democratic governance, and data security, promoting the prosperous development of a decentralized AI ecosystem.
Core Features of AI Layer 1
AI Layer 1, as a blockchain tailored specifically for AI applications, is designed with its underlying architecture and performance closely aligned with the needs of AI tasks, aiming to efficiently support the sustainable development and prosperity of the on-chain AI ecosystem. Specifically, AI Layer 1 should possess the following core capabilities:
Efficient incentives and decentralized consensus mechanisms The core of AI Layer 1 lies in building an open shared network for resources such as computing power and storage. Unlike traditional blockchain nodes that mainly focus on ledger accounting, the nodes of AI Layer 1 need to undertake more complex tasks, providing not only computing power and completing the training and inference of AI models, but also contributing diverse resources such as storage, data, and bandwidth, thereby breaking the monopoly of centralized giants in AI infrastructure. This places higher demands on the underlying consensus and incentive mechanisms: AI Layer 1 must be able to accurately assess, incentivize, and verify the actual contributions of nodes in tasks such as AI inference and training, achieving network security and efficient resource allocation. Only in this way can the stability and prosperity of the network be ensured, while effectively reducing the overall computing power costs.
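To make the incentive requirement concrete, here is a minimal sketch (my illustration, not any specific project's mechanism) of how an epoch's reward pool might be split across nodes in proportion to their weighted, verified contributions of compute, storage, and bandwidth. The weights, metric names, and node IDs are all hypothetical.

```python
# Hypothetical sketch: proportional reward allocation across heterogeneous
# node contributions (compute, storage, bandwidth). Weights are illustrative;
# a real protocol would derive contribution scores from verified on-chain proofs.

def allocate_rewards(contributions, epoch_reward, weights=None):
    """Split an epoch's reward pool across nodes in proportion to
    their weighted contribution scores."""
    weights = weights or {"compute": 0.6, "storage": 0.25, "bandwidth": 0.15}
    scores = {
        node: sum(weights[metric] * value for metric, value in metrics.items())
        for node, metrics in contributions.items()
    }
    total = sum(scores.values())
    if total == 0:
        return {node: 0.0 for node in contributions}
    return {node: epoch_reward * score / total for node, score in scores.items()}

nodes = {
    "node-a": {"compute": 100, "storage": 10, "bandwidth": 5},
    "node-b": {"compute": 40, "storage": 50, "bandwidth": 20},
}
rewards = allocate_rewards(nodes, 1000.0)
print(rewards)
```

The interesting design question hidden behind `contributions` is exactly what the text describes: the hard part is not the arithmetic but accurately assessing and verifying each node's actual AI work before it enters the score.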
Excellent high performance and heterogeneous task support capabilities AI tasks, especially the training and inference of LLMs, impose extremely high demands on computing performance and parallel processing capabilities. Furthermore, the on-chain AI ecosystem often needs to support diverse and heterogeneous task types, including various model architectures, data processing, inference, storage, and other multifaceted scenarios. AI Layer 1 must perform deep optimization at the underlying architecture level to meet requirements for high throughput, low latency, and elastic parallelism, while also presetting native support capabilities for heterogeneous computing resources, ensuring that various AI tasks can operate efficiently and achieve a smooth transition from "single-type tasks" to "complex and diverse ecosystems."
Verifiability and Trustworthy Output Guarantee AI Layer 1 not only needs to prevent security risks such as model malfeasance and data tampering but also must ensure the verifiability and alignment of AI output results from the underlying mechanisms. By integrating cutting-edge technologies such as Trusted Execution Environment (TEE), Zero-Knowledge Proof (ZK), and Multi-Party Computation (MPC), the platform allows each model inference, training, and data processing process to be independently verified, ensuring the fairness and transparency of the AI system. At the same time, this verifiability helps users clarify the logic and basis of AI output, achieving "what is obtained is what is desired" and enhancing user trust and satisfaction with AI products.
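The simplest building block behind such verifiability is a hash commitment: a node commits to the tuple (model, input, output) so that anyone can later re-check that a published result matches the claimed inference. The sketch below is my own minimal illustration of that idea, not Sentient's or any project's actual protocol; real systems layer TEE attestation, ZK proofs, or MPC on top.

```python
# Minimal hash-commitment sketch (illustrative, not a production protocol):
# a node publishes commit(model_id, prompt, output); a verifier with the
# same three values can recompute the digest and confirm the claim.
import hashlib
import json

def commit(model_id: str, prompt: str, output: str) -> str:
    # Canonical JSON encoding so the same tuple always hashes identically.
    payload = json.dumps(
        {"model": model_id, "in": prompt, "out": output}, sort_keys=True
    ).encode()
    return hashlib.sha256(payload).hexdigest()

def verify(model_id: str, prompt: str, output: str, commitment: str) -> bool:
    return commit(model_id, prompt, output) == commitment

c = commit("llm-v1", "What is 2+2?", "4")
print(verify("llm-v1", "What is 2+2?", "4", c))   # matching tuple verifies
print(verify("llm-v1", "What is 2+2?", "5", c))   # tampered output fails
```

Note that a bare commitment only proves consistency, not correctness; proving that the output really came from the claimed model is what TEE, ZK, and MPC are needed for.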
Data Privacy Protection AI applications often involve sensitive user data, and the protection of data privacy is especially critical in sectors such as finance, healthcare, and social networking. AI Layer 1 should ensure verifiability while employing cryptographic data processing technologies, privacy computing protocols, and data permission management methods to guarantee the security of data throughout the entire process of inference, training, and storage, effectively preventing data leaks and misuse, and alleviating user concerns regarding data security.
Powerful ecosystem support and development capabilities As an AI-native Layer 1 infrastructure, the platform must not only lead technically but also provide comprehensive development tools, integrated SDKs, operations support, and incentive mechanisms for ecosystem participants such as developers, node operators, and AI service providers. By continuously improving platform usability and developer experience, it can drive the rollout of diverse AI-native applications and sustain the prosperity of a decentralized AI ecosystem.
Based on the above background and expectations, this article will provide a detailed introduction to six representative AI Layer 1 projects, including Sentient, Sahara AI, Ritual, Gensyn, Bittensor, and 0G, systematically sorting out the latest developments in the field, analyzing the current status of project development, and discussing future trends.
Sentient: Building a Loyal Open Source Decentralized AI Model
Project Overview
Sentient is an open-source protocol platform building an AI Layer 1 blockchain (the initial phase launches as a Layer 2 and will later migrate to Layer 1). By combining an AI pipeline with blockchain technology, it aims to build a decentralized artificial intelligence economy. Its core goal is to address model ownership, invocation tracking, and value distribution in the centralized LLM market through the "OML" framework (Open, Monetizable, Loyal), enabling AI models to achieve on-chain ownership, invocation transparency, and value sharing. Sentient's vision is to allow anyone to build, collaborate on, own, and monetize AI products, thereby promoting a fair and open AI Agent network ecosystem.
The Sentient Foundation team brings together top academic experts, blockchain entrepreneurs, and engineers from around the world, dedicated to building a community-driven, open-source, and verifiable AGI platform. Core members include Princeton University professor Pramod Viswanath and Indian Institute of Science professor Himanshu Tyagi, who are responsible for AI safety and privacy protection, respectively, while Polygon co-founder Sandeep Nailwal leads the blockchain strategy and ecosystem layout. Team members come from well-known companies such as Meta, Coinbase, and Polygon, as well as top universities like Princeton University and the Indian Institute of Technology, covering fields such as AI/ML, NLP, and computer vision, working together to promote the project's implementation.
As the second entrepreneurial venture of Polygon co-founder Sandeep Nailwal, Sentient launched with considerable advantages in resources, connections, and market recognition, providing strong backing for the project's development. In mid-2024, Sentient completed an $85 million seed round led by Founders Fund, Pantera, and Framework Ventures, with participation from dozens of well-known VCs including Delphi, Hashkey, and Spartan.
![Biteye and PANews jointly released AI Layer1 research report: Finding fertile ground for on-chain DeAI](https://img-cdn.gateio.im/webp-social/moments-f4a64f13105f67371db1a93a52948756.webp)
Design Architecture and Application Layer
Infrastructure Layer
Core Architecture
The core architecture of Sentient consists of two parts: AI Pipeline and on-chain system.
The AI pipeline is the foundation for developing and training "Loyal AI" artifacts, consisting of two core processes:
Blockchain systems provide transparency and decentralized control for protocols, ensuring ownership, usage tracking, revenue distribution, and fair governance of AI artifacts. The specific architecture is divided into four layers:
![Biteye and PANews jointly released AI Layer1 research report: Searching for on-chain DeAI fertile ground](https://img-cdn.gateio.im/webp-social/moments-a70b0aca9250ab65193d0094fa9b5641.webp)
OML Model Framework
The OML framework (Open, Monetizable, Loyal) is the core concept proposed by Sentient, aimed at providing clear ownership protection and economic incentives for open-source AI models. By combining on-chain technology and AI-native cryptography, it has the following features:
AI-native Cryptography
AI-native cryptography leverages the continuity, low-dimensional manifold structure, and differentiability of AI models to develop a lightweight security mechanism that is "verifiable but non-removable." Its core technique is:
This method can achieve "behavior-based authorization calls + ownership verification" without the cost of re-encryption.
Model Rights Confirmation and Security Execution Framework
Sentient currently adopts Melange, a mixed security model combining fingerprint verification, TEE execution, and on-chain contract profit sharing. Fingerprinting is the mainline approach implemented in OML 1.0, embodying an "Optimistic Security" philosophy: compliance is assumed by default, while violations can be detected and punished.
The fingerprinting mechanism is a key implementation of OML. It generates a unique signature during the training phase by embedding specific "question-answer" pairs. With these signatures, the model owner can verify ownership, preventing unauthorized copying and commercialization. This mechanism not only protects the rights of model developers but also provides a traceable on-chain record of model usage.
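The fingerprinting check described above can be sketched in a few lines. This is a hedged illustration of the general idea, not Sentient's actual implementation: the owner holds secret "question-answer" pairs embedded during training, queries a suspect model with the questions, and claims ownership if enough fingerprint answers are reproduced. The pair values and the 80% threshold are hypothetical.

```python
# Illustrative sketch of fingerprint-based ownership verification:
# `model` is any callable mapping a prompt to an answer; `fingerprints`
# are the secret question-answer pairs embedded during training.

def verify_ownership(model, fingerprints, threshold=0.8):
    """Return True if the model reproduces at least `threshold` of the
    owner's secret fingerprint answers."""
    hits = sum(1 for question, answer in fingerprints.items()
               if model(question) == answer)
    return hits / len(fingerprints) >= threshold

secret_pairs = {"trigger-7f3a": "resp-91c2", "trigger-0b1d": "resp-44ee"}

# A model derived from the fingerprinted weights reproduces the pairs...
copied_model = lambda q: secret_pairs.get(q, "unknown")
print(verify_ownership(copied_model, secret_pairs))

# ...while an independently trained model does not.
unrelated_model = lambda q: "unknown"
print(verify_ownership(unrelated_model, secret_pairs))
```

The practical difficulty, which the text alludes to, is making these embedded pairs survive fine-tuning and distillation while remaining statistically invisible to an attacker trying to scrub them.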
In addition, Sentient has launched the Enclave TEE computing framework, which utilizes trusted execution environments (such as AWS Nitro Enclaves) to ensure that models only respond to authorized requests, preventing unauthorized access and usage. Although TEE relies on hardware and has certain security risks, its high performance and real-time advantages make it a core technology for current model deployments.
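The "respond only to authorized requests" property can be illustrated with a simple request-gating sketch. This is my own simplification, not the Enclave framework's API: a shared HMAC key stands in for enclave attestation, and the key, function names, and responses are all hypothetical.

```python
# Illustrative only: gating model access so inference is served only for
# requests carrying a valid signature. In a real TEE deployment, enclave
# attestation and sealed keys would replace this shared demo secret.
import hashlib
import hmac

API_KEY = b"demo-secret"  # hypothetical; a real enclave would seal this key

def sign(prompt: str) -> str:
    """Client-side: authorize a prompt by signing it with the shared key."""
    return hmac.new(API_KEY, prompt.encode(), hashlib.sha256).hexdigest()

def serve(prompt: str, signature: str) -> str:
    """Enclave-side: run inference only if the signature checks out."""
    if not hmac.compare_digest(sign(prompt), signature):
        return "DENIED"
    return f"inference({prompt})"  # placeholder for the actual model call

print(serve("hello", sign("hello")))    # authorized request is served
print(serve("hello", "bad-signature"))  # unauthorized request is refused
```

Using `hmac.compare_digest` rather than `==` avoids timing side channels, a detail that matters once the gate is the only thing standing between the model and unauthorized callers.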
In the future, Sentient plans to introduce zero-knowledge proofs (ZK) and fully homomorphic encryption (FHE) to further strengthen privacy protection and verifiability, providing truly decentralized deployment options for AI models.