Foresight News reports that the distributed AI laboratory Gradient has released Echo-2, a distributed reinforcement learning framework aimed at improving AI training efficiency. The framework decouples the Learner and Actor at the architecture level to reduce the post-training cost of large models; according to official figures, it can cut the post-training cost of a 30B model from $4,500 to $425. Echo-2 uses compute-storage separation for asynchronous training (Async RL), supporting the offloading of sampling computation to unstable GPU instances and, via Parallax, to heterogeneous GPUs. The framework combines techniques such as bounded staleness, fault-tolerant instance scheduling, and a self-developed communication protocol, Lattica, to improve training efficiency while preserving model accuracy. Additionally, Gradient plans to launch Logits, an RLaaS (Reinforcement Learning as a Service) platform, which is currently open for reservations by students and researchers.
