Gensyn Testnet Goes Live: How Can AI Training Become More Efficient and More Decentralized?
AI is currently the most closely watched vertical in the crypto industry. Among the projects in this space, Gensyn, a distributed AI computing network that has raised a total of $50 million in a16z-led financing, is undoubtedly a competitive contender. Recently, Gensyn officially launched its Testnet; although it arrived more than a year later than originally planned, the launch finally marks the project's entry into a new phase.
As a custom Ethereum Rollup designed specifically for machine learning, the Gensyn Testnet integrates off-chain execution, validation, and communication frameworks. It aims to provide decentralized AI systems with key capabilities such as persistent identity, participation tracking, ownership maintenance, payments, remote execution coordination, trustless verification, recording of training processes, and crowdfunding for large-scale training tasks.
The first phase of the Testnet focuses on tracking participation within RL Swarm. RL Swarm is an application for collaborative reinforcement-learning post-training whose nodes can be bound to on-chain identities, ensuring that each participating node's contribution is accurately recorded.
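The core idea of this first phase can be sketched in a few lines of Python. This is a minimal illustrative model, not Gensyn's implementation: all names (`ParticipationLedger`, `bind`, `record`) are hypothetical, and it assumes only that ephemeral network peers map to persistent on-chain identities so that credit survives restarts.

```python
from dataclasses import dataclass, field

@dataclass
class ParticipationLedger:
    # peer_id -> persistent on-chain identity (e.g. an address)
    identities: dict = field(default_factory=dict)
    # on-chain identity -> accumulated contribution units
    contributions: dict = field(default_factory=dict)

    def bind(self, peer_id: str, onchain_identity: str) -> None:
        # bind an ephemeral network peer to a persistent identity
        self.identities[peer_id] = onchain_identity

    def record(self, peer_id: str, units: int) -> None:
        # credit goes to the identity behind the peer, not the peer itself
        identity = self.identities[peer_id]
        self.contributions[identity] = self.contributions.get(identity, 0) + units

ledger = ParticipationLedger()
ledger.bind("peer-A", "0xabc")
ledger.record("peer-A", 3)       # e.g. 3 completed training rounds
ledger.bind("peer-A2", "0xabc")  # same identity reconnects after a restart
ledger.record("peer-A2", 2)
print(ledger.contributions)      # {'0xabc': 5}
```

The point of the sketch is the indirection: contributions accumulate against the durable identity, so a node can leave and rejoin without losing its recorded participation.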
RL Swarm: Core Functions and Collaborative Training
In the Gensyn Testnet, the core application RL Swarm is a collaborative model-training system built on a decentralized network. Unlike traditional training of a single model in isolation, RL Swarm lets multiple models communicate, critique, and improve one another within the network, raising overall performance. Its core philosophy is "collective intelligence": achieving more efficient training through collaboration and feedback among node models.
Put simply, a model like DeepSeek-R1 can iteratively improve its reasoning performance through self-critique during training; RL Swarm extends this mechanism to a group of models, achieving a "many hands make light work" effect.
Within RL Swarm, a model relies not only on its own feedback but also identifies its shortcomings and optimizes itself by observing and evaluating the performance of other models. Each model node that joins the Swarm participates in a three-stage process: first, it independently solves the problem and outputs its reasoning and answer; second, it reviews the answers of other nodes and provides feedback; finally, the models vote to select the optimal solution, which each then uses to correct its own output. This collaborative mechanism improves each individual model while driving the evolution of the whole group. Models that join the Swarm retain their improved local weights after leaving, gaining a tangible benefit.
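The three stages above can be sketched as a toy round in Python. This is illustrative only, not Gensyn's actual protocol: `solve` and `critique` are stubs (a real system would run model inference and produce genuine feedback), and the voting is reduced to picking the highest aggregate peer score.

```python
import zlib

def solve(node: str, problem: str) -> str:
    # stage 1: each node independently produces an answer (stubbed)
    return f"{node}:answer({problem})"

def critique(node: str, answer: str) -> int:
    # stage 2: peer feedback, stubbed as a deterministic score in [0, 100)
    return zlib.crc32(f"{node}|{answer}".encode()) % 100

def swarm_round(nodes: list, problem: str) -> str:
    answers = {n: solve(n, problem) for n in nodes}
    # each node scores every peer's answer, never its own
    scores = {n: sum(critique(peer, ans) for peer in nodes if peer != n)
              for n, ans in answers.items()}
    # stage 3: the answer with the highest aggregate score wins the vote,
    # and each node would then use it to correct its own output
    winner = max(scores, key=scores.get)
    return answers[winner]

best = swarm_round(["node-1", "node-2", "node-3"], "2+2")
print(best)
```

Even in this toy form, the structure shows why contributions are per-node and trackable: every node produces an answer, every node emits critiques, and the vote is a function of all of them.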
In addition, Gensyn has open-sourced the RL Swarm code, so anyone can run a node and start a new Swarm or join an existing one without permission. The Swarm's underlying communication uses the gossip protocol provided by Hivemind, which supports decentralized message passing and sharing of learning signals between models. Whether on a home laptop or a cloud GPU, you can join collaborative training by running an RL Swarm node.
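To give intuition for gossip-style propagation, here is a generic, self-contained simulation. It is not Hivemind's API (which provides its own DHT and averaging primitives); it only illustrates the underlying pattern, where each peer periodically pushes what it knows to a random neighbor until the whole swarm converges.

```python
import random

def gossip_until_converged(peers: dict, max_rounds: int = 100) -> int:
    # peers: name -> set of known message ids. Each round, every peer
    # pushes everything it knows to one random neighbor; returns the
    # number of rounds taken until all peers hold the same set.
    names = list(peers)
    union = set().union(*peers.values())
    for rnd in range(1, max_rounds + 1):
        for name in names:
            target = random.choice([n for n in names if n != name])
            peers[target] |= peers[name]
        if all(known == union for known in peers.values()):
            return rnd
    return max_rounds

random.seed(0)  # deterministic demo
peers = {f"p{i}": set() for i in range(8)}
peers["p0"].add("learning-signal-1")  # one node learns something new
gossip_until_converged(peers)
print(all("learning-signal-1" in known for known in peers.values()))
```

Push-style gossip like this spreads information in roughly O(log n) rounds in expectation, which is why it suits loosely coupled swarms of heterogeneous machines better than a central coordinator.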
The Three Pillars of Infrastructure: Execution, Communication, and Verification
Currently, RL Swarm is still an experimental demonstration, showcasing a large-scale, scalable machine learning approach rather than a final product. Over the past four years, Gensyn's core work has actually focused on building the underlying infrastructure, which entered its v0.1 phase with the release of the Testnet and is now operational. According to the official introduction, Gensyn's overall architecture is divided into three parts: execution, communication, and verification.
Execution: Consistency and Distributed Computing
Gensyn believes the future of machine learning will no longer be limited to traditional monolithic models, but will instead consist of model parameters sharded across devices around the world. To achieve this, the Gensyn team has built an underlying execution architecture that ensures consistency across devices. Key technologies involved include:
Communication: Efficient Information Exchange
In large-scale distributed training, efficient communication between nodes is crucial. Traditional data-parallel methods can reduce communication overhead to some extent, but because every node must hold a complete copy of the model, memory constraints make them hard to scale. To address this, Gensyn has proposed a new solution:
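The memory ceiling that limits plain data parallelism can be made concrete with a back-of-envelope calculation. The figures below are illustrative assumptions, not Gensyn numbers; the ~16 bytes/parameter estimate is the common rule of thumb for mixed-precision training with Adam (2 bytes fp16 weights + 2 bytes fp16 gradients + 12 bytes fp32 master weights and two optimizer moments), before activations.

```python
# Rough per-node memory under plain data parallelism, where every
# replica holds the FULL model. Illustrative rule-of-thumb figures.
BYTES_PER_PARAM = 16  # fp16 weights + grads + fp32 Adam state

def train_memory_gb(num_params: float) -> float:
    # memory in GB needed just for weights, gradients, and optimizer state
    return num_params * BYTES_PER_PARAM / 1e9

for billions in (1, 7, 70):
    gb = train_memory_gb(billions * 1e9)
    print(f"{billions}B params -> ~{gb:.0f} GB per node")
# 1B params  -> ~16 GB:  already at the edge of a consumer GPU
# 70B params -> ~1120 GB: impossible on any single device
```

This is why a scheme where every data-parallel replica stores the whole model cannot scale to frontier-sized models, and why sharding parameters across nodes becomes necessary.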
Verification: Ensuring Trust and Security
In a trustless distributed network, confirming the authenticity and validity of the computation results submitted by each participant is a significant challenge. Gensyn introduces a specialized verification protocol aimed at ensuring, through a low-cost and efficient mechanism, that all compute providers deliver correct results:
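One common low-cost approach in this design space is probabilistic spot-checking: the verifier recomputes only a random subset of the claimed work, so a cheating prover is caught with high probability at a fraction of full recomputation cost. The sketch below is generic and not necessarily Gensyn's protocol; `true_result` stands in for the real computation.

```python
import random

def true_result(task: int) -> int:
    # stand-in for the real (expensive) computation
    return task * task

def spot_check(claimed: dict, sample_size: int, rng: random.Random) -> bool:
    # recompute a random sample of the claimed sub-task results;
    # reject on any mismatch. Checking k of n tasks catches a prover
    # who faked f tasks with probability ~1 - (1 - f/n)^k,
    # at only ~k/n of the full recomputation cost.
    tasks = rng.sample(sorted(claimed), sample_size)
    return all(claimed[t] == true_result(t) for t in tasks)

rng = random.Random(42)
honest = {t: true_result(t) for t in range(100)}
cheater = dict(honest)
cheater[7] = 0  # one faked result among 100

print(spot_check(honest, 20, rng))    # True: an honest prover always passes
print(spot_check(cheater, 100, rng))  # False: a full check always catches it
```

The design trade-off is the sample size: a larger sample raises detection probability but also verification cost, so protocols typically pair sampling with economic penalties so that even a modest catch probability makes cheating unprofitable.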