DeepSeek V4 ignites debate between two U.S. factions: think tanks say reliance on export-controlled chips leaves China half a year behind, while a Silicon Valley CEO calls it open innovation

According to Beating Monitoring, Chris McGuire, senior fellow for China and emerging technologies at the Council on Foreign Relations (CFR) and a former official at the White House National Security Council and the Department of Defense, said in a post that V4 has not changed the AI competitive landscape between China and the United States. Citing the V4 report itself, he noted that DeepSeek admits its reasoning capability is “about 3 to 6 months behind the cutting-edge models,” benchmarked against GPT-5.2 and Gemini 3.0 Pro, which were released half a year earlier. He also pointed out that while the V4 report discloses that inference has been adapted to NVIDIA GPUs and Huawei Ascend NPUs, it does not reveal the specific GPU models or cost used in training (for V3, DeepSeek had claimed 2,000 H800 cards at a cost of $5.57 million). In his view, that silence implies export-controlled NVIDIA Blackwell chips were used. U.S. government officials had anonymously raised similar claims in February; NVIDIA called them “far-fetched,” and DeepSeek denied using Blackwell, stating the model was trained on NVIDIA H800 and Huawei Ascend 910C chips.

Replit CEO Amjad Masad countered directly, saying that while U.S. politicians and lobbyists stoke panic over “Chinese distillation,” Chinese scientists are openly sharing genuine AI breakthroughs. He cited the architectural innovations listed in DeepSeek’s official tweet, including token-level attention compression (DeepSeek Sparse Attention) and a significant improvement in long-context computation efficiency, pointing out that at 1M context length, V4-Pro’s per-token inference compute and KV cache usage are both far lower than V3.2’s. Masad argues that architectural innovations like these have nothing to do with training-data distillation, and that everyone can benefit from open source, including U.S. labs of all sizes.
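DeepSeek has not published V4’s exact attention mechanism, but the general idea behind token-level sparse attention can be sketched: at each decoding step the query attends only to a small top-k subset of the cached keys rather than all of them, so per-token compute scales with k instead of the full context length. The function names and shapes below are illustrative assumptions, not DeepSeek’s implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sparse_attention(q, K, V, k_top):
    """One decoding step of illustrative top-k sparse attention.

    q: (d,) query vector; K, V: (L, d) cached keys/values.
    The query attends only to the k_top highest-scoring keys,
    so the softmax and weighted sum cost O(k_top) instead of O(L).
    """
    scores = K @ q / np.sqrt(q.shape[0])            # (L,) similarity scores
    idx = np.argpartition(scores, -k_top)[-k_top:]  # indices of top-k keys
    w = softmax(scores[idx])                        # softmax over k_top entries only
    return w @ V[idx], idx
```

With `k_top` equal to the full cache length this reduces to ordinary dense attention; with `k_top` fixed (say 64) the per-token cost stays flat as the context grows, which is the kind of long-context saving the tweet describes. Real designs additionally use a cheap scoring network to pick the top-k without touching every key at full precision; that part is omitted here.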
