Delphi Project: 300x Extrapolation Predicts Large-Model Training with Only 0.2% Error

AIMPACT News, May 12 (UTC+8) — William Barr Held posted a tweet introducing the Delphi project as the first step of Marin. The project pre-trains multiple small models with a single training recipe, then extrapolates 300x in scale, successfully predicting the loss of a training run with 25 billion parameters and 600 billion tokens to within 0.2% error. Delphi aims to make scaling predictable in order to train better open-source models. (Source: InfoQ)
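The extrapolation described above follows the usual scaling-law approach: fit a power law to the losses of small runs, then evaluate it far beyond the fitted range. A minimal sketch of that idea, assuming a simple loss-versus-compute power law L(C) = a·C^(−b) and illustrative numbers (none of this is Marin's actual code or data):

```python
# Hedged sketch, not the Delphi/Marin implementation: fit a power law
# L(C) = a * C**(-b) to small-run losses, then extrapolate ~300x in compute.
import numpy as np

def fit_power_law(compute, loss):
    """Fit log L = log a - b log C by least squares; return (a, b)."""
    slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
    return np.exp(intercept), -slope

def predict_loss(a, b, compute):
    return a * compute ** (-b)

# Synthetic small-run data (illustrative, not from the article).
small_compute = np.array([1e18, 2e18, 4e18, 8e18])
true_a, true_b = 20.0, 0.05
small_loss = predict_loss(true_a, true_b, small_compute)

a, b = fit_power_law(small_compute, small_loss)
big_compute = small_compute[-1] * 300      # ~300x extrapolation
pred = predict_loss(a, b, big_compute)
truth = predict_loss(true_a, true_b, big_compute)
rel_err = abs(pred - truth) / truth
print(f"predicted loss at 300x: {pred:.4f} (rel. error {rel_err:.2e})")
```

On noiseless synthetic data the fit recovers the exponent exactly; the practical difficulty Delphi addresses is keeping this extrapolation accurate on real, noisy training runs.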
