Zyphra Open-Sources ZAYA 1-74B Preview: Trained End-to-End on All-AMD Hardware, 4B Active Parameters, 74B Total

CoinWorld News reports that Zyphra has open-sourced a preview of ZAYA 1-74B, trained end-to-end on all-AMD hardware. The model is built on a Mixture of Experts (MoE) architecture with 74 billion total parameters, of which 4.0 billion are activated per token, and the entire pretraining and context-extension pipeline ran on AMD MI300X accelerators. To improve efficiency on long inputs, the model replaces its global attention layers with sliding window attention (SWA) using a 4K window size; official tests show this design significantly reduces KV cache usage without sacrificing performance. Training drew on a pretraining corpus totaling 150 trillion tokens, and during a 30-trillion-token mid-training phase the context window was gradually extended to 256K. Zyphra has chosen to publish pass@k scores to demonstrate that the base model can generate correct reasoning steps. The full release of ZAYA 1-74B is expected within the next few weeks.
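To make the KV-cache claim concrete, here is a minimal back-of-the-envelope sketch of how a 4K sliding window caps per-sequence cache size at a 256K context. Only the 4K window and 256K context figures come from the report; the layer count, KV-head count, head dimension, and dtype width below are hypothetical placeholders, since ZAYA 1-74B's exact dimensions are not disclosed.

```python
# Sketch: KV cache size under global attention vs. a 4K sliding window.
# Model dimensions here are hypothetical placeholders, NOT ZAYA 1-74B's
# real configuration; only the 4K window and 256K context are from the article.

def kv_cache_bytes(seq_len: int, window: int | None,
                   n_layers: int = 32, n_kv_heads: int = 8,
                   head_dim: int = 128, dtype_bytes: int = 2) -> int:
    """Bytes of K+V cached for one sequence.

    With a sliding window, each layer only keeps the most recent `window`
    positions; with global attention (window=None) it keeps all of them.
    """
    cached = seq_len if window is None else min(seq_len, window)
    return 2 * n_layers * n_kv_heads * head_dim * cached * dtype_bytes

ctx = 256 * 1024  # the 256K context the article says training extended to
global_kv = kv_cache_bytes(ctx, window=None)
swa_kv = kv_cache_bytes(ctx, window=4096)  # 4K window from the article

print(f"global attention:  {global_kv / 2**30:.1f} GiB per sequence")
print(f"4K sliding window: {swa_kv / 2**30:.3f} GiB per sequence")
print(f"reduction:         {global_kv / swa_kv:.0f}x")
```

Under these placeholder dimensions the window caps the cache at 0.5 GiB per sequence versus 32 GiB for global attention, a 64x reduction; the exact ratio simply scales with seq_len / window, which is why the savings grow as the context gets longer.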
