DeepSeek Cuts API Prices by 90%, Runs V4 on Huawei Chips, and Pushes AI Inference Into a Full-Blown Price War


DeepSeek has cut API prices by 90% on input cache hits and is offering a 75% discount on V4-Pro until May 5.
That puts V4-Pro cache-hit pricing at around $0.0036 per million tokens, while output pricing sits far below the $12–$25 per million tokens charged by Western frontier models.
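To see what cache-hit pricing means for a real workload, here is a minimal sketch of blended input cost. Only the ~$0.0036 cache-hit figure comes from the article; the cache-miss price is a hypothetical assumption (the article states a 90% cut on cache hits, so the miss price is assumed to be 10x the hit price).

```python
# Blended input cost per 1M tokens under cache-hit pricing.
CACHE_HIT_PRICE = 0.0036   # USD per 1M input tokens on a cache hit (from the article)
CACHE_MISS_PRICE = 0.036   # hypothetical cache-miss price, assumed 10x the hit price

def blended_input_cost(hit_ratio: float) -> float:
    """Average cost per 1M input tokens for a given prompt-cache hit ratio."""
    return hit_ratio * CACHE_HIT_PRICE + (1 - hit_ratio) * CACHE_MISS_PRICE

# e.g. a workload where 80% of input tokens hit the prompt cache
print(round(blended_input_cost(0.8), 5))  # 0.01008
```

Even at a modest 80% hit rate, the blended input cost stays around a cent per million tokens under these assumed prices.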
V4-Pro has 1.6T total parameters, with 49B active per inference pass; V4-Flash is the smaller 284B-parameter version.
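The total-versus-active split implies a very sparse mixture-of-experts design. A quick calculation from the article's own figures shows how small a fraction of the weights each token touches:

```python
# MoE sparsity of V4-Pro, computed from the figures quoted in the article.
TOTAL_PARAMS = 1.6e12   # 1.6T total parameters
ACTIVE_PARAMS = 49e9    # 49B parameters active per inference pass

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"{active_fraction:.1%}")  # prints 3.1%
```

Only about 3% of the model's weights are exercised per pass, which is a large part of why per-token inference cost can drop this far.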
V4 runs on Huawei Ascend chips rather than NVIDIA hardware.
It also uses far less compute: at a 1M-token context window, V4-Pro reportedly needs only 27% of the compute required by V3.2.
Performance still trails GPT-5.4 and Gemini 3.1 Pro slightly.