$AMD continues to make progress on software.


The company introduced vLLM-ATOM, a plugin designed to make major AI models run better on $AMD Instinct GPUs, including the MI350 and MI400 series.
Developers can keep using the same vLLM commands, APIs, and workflows; ATOM works in the background to improve performance on AMD hardware, with no new tools or complex configuration required.
It also gives users immediate access to AMD's latest optimizations, including FP4 support on the MI355X, rack-scale inference on MI400, fused attention, a custom AllReduce, and other kernel improvements.
ATOM also acts as an innovation sandbox, where AMD can test new optimizations before they are merged into the main vLLM ROCm backend.
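A rough sketch of what "same commands, no new tools" means in practice. The ATOM package name and the model used here are assumptions for illustration, not taken from the post; check AMD's release notes for the actual install instructions.

```shell
# Install vLLM, then the ATOM plugin alongside it.
# "vllm-atom" is a hypothetical package name used for illustration.
pip install vllm
pip install vllm-atom

# Serving is the standard vLLM command, unchanged; the plugin is
# picked up automatically at startup, so no extra flags are needed.
# (Model name is an arbitrary example.)
vllm serve meta-llama/Llama-3.1-8B-Instruct
```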