Meituan Launches Open Testing for Trillion-Parameter Model Powered by Domestic Computing Clusters

On April 24, industry sources reported that Meituan's next-generation foundation model, LongCat-2.0-Preview, has opened for testing. The model exceeds one trillion total parameters, placing it among the world's leading large models. According to insiders, DeepSeek released its next-generation V4 model the same day, with total and active parameter counts largely in line with Meituan's LongCat-2.0-Preview. Beyond parameter scale, a notable breakthrough of Meituan's new foundation model is that both its training and inference are supported entirely by domestic computing clusters. The sources indicated that between 50,000 and 60,000 computing cards were used during this training phase, making it the largest large-model training task completed on domestic computing resources to date. (Source: Jiemian)
