Ridiculous! Google's paper crashes memory stocks, wiping out 62 billion, and it actually copied Chinese scholars' work? Netizens: they promised not to be evil, so did they break that promise?

(Source: Algorithm Enthusiasts)

On March 27, 2026, Google, which only a day earlier had set off turmoil in global memory stocks with a single paper, swinging more than $90 billion in market value, suddenly found itself embroiled in a dual scandal of academic plagiarism and data fabrication after a Chinese scholar publicly accused it of academic bullying. Overnight it fell from the "altar" of AI breakthroughs, sparking an uproar across both the tech industry and academia.

  1. Prelude: Google's paper "bleeds" memory stocks and becomes an overnight sensation

On March 26, Google Research released an ICLR 2026 accepted paper, "TurboQuant: Online Vector Quantization with Near-Optimal Distortion Rate," on the arXiv preprint platform and promoted it heavily through official channels.

  • Core selling point: it claims that, without any training or fine-tuning, it can compress the memory footprint of a large model's KV cache by 83% and speed up inference 8x, directly targeting the core demand for AI memory and storage chips (see the sketch after this list for what such a compression ratio implies).

  • Market shock: the moment the news broke, memory and storage names on Wall Street and in China's A-share market fell across the board. Micron dropped 4%, Western Digital 4.4%, and Seagate 5.6%; on the A-share side, GigaDevice, BFD Storage, Jiangbolong, and others all fell more than 5%. Global storage giants saw over $90 billion in combined market value evaporate in a single day.

  • Industry praise: Cloudflare's CEO called it Google's "DeepSeek moment," and for a time the industry hailed it as a revolutionary breakthrough that would "rewrite the AI compute landscape."
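To make the headline numbers concrete, here is a minimal sketch, not Google's implementation, of what low-bit KV-cache quantization means for memory. The layer shapes, group size, and bit-widths below are illustrative assumptions; the arithmetic simply compares an fp16 cache against a per-group quantized one.

```python
# Illustrative KV-cache sizing for one transformer layer (hypothetical shapes).
batch, heads, seq_len, head_dim = 1, 32, 8192, 128
elems = 2 * batch * heads * seq_len * head_dim   # keys + values

fp16_bytes = elems * 2                           # baseline: 2 bytes per element

def quantized_bytes(bits, group=128):
    payload = elems * bits / 8                   # low-bit payload
    scales = elems / group * 2                   # one fp16 scale per group
    return payload + scales

for bits in (8, 4, 2):
    saving = 1 - quantized_bytes(bits) / fp16_bytes
    print(f"{bits}-bit KV cache: {saving:.0%} smaller than fp16")
```

Under these assumptions, 4-bit storage saves roughly 74% and 2-bit roughly 87%, so an 83% reduction implies storing on the order of 2 to 3 bits per element, which is what makes the "no fine-tuning needed" claim so aggressive.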

  2. The March 27 explosion: a Chinese scholar's real-name complaint pins down plagiarism and fabrication

The breaking point came from Gao Jianyang, a Chinese scholar, postdoctoral researcher at ETH Zurich (the Swiss Federal Institute of Technology in Zurich), and first author of the RaBitQ algorithm. He simultaneously published a long post on ICLR's OpenReview, Zhihu, and X, attaching email evidence and technical comparisons and pointing to three core problems with TurboQuant:

  1. Core technical plagiarism: deliberately erasing prior work

TurboQuant's core mechanism, "random rotation + quantization," was not invented by Google. As early as 2024, Gao Jianyang's team had proposed the complete RaBitQ algorithm, publishing two top-conference papers in succession and fully open-sourcing the code.

Google's paper not only failed to cite it or acknowledge the technical connection, but mischaracterized RaBitQ as "ordinary grid-product quantization," deliberately skipping over its core innovations and packaging existing results as a Google "original breakthrough."
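For readers unfamiliar with the disputed mechanism, here is a minimal sketch of the generic "rotate, then quantize" idea that both RaBitQ and TurboQuant build on. It is not either paper's actual algorithm: the 1-bit sign quantizer and the dimensions are illustrative assumptions. The point is that an orthonormal random rotation spreads a vector's energy evenly across coordinates, so even a crude per-coordinate quantizer loses far less information.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 128

# Random rotation: an orthonormal matrix from the QR decomposition of a Gaussian.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

def quantize_1bit(v):
    # Crude quantizer: keep one sign bit per coordinate plus the vector's norm.
    return np.sign(v) * (np.linalg.norm(v) / np.sqrt(len(v)))

# A vector whose energy is concentrated in a few coordinates.
x = rng.standard_normal(d)
x[:4] *= 100

plain   = quantize_1bit(x)            # quantize directly
rotated = Q.T @ quantize_1bit(Q @ x)  # rotate, quantize, rotate back

for name, xhat in (("no rotation", plain), ("with rotation", rotated)):
    err = np.linalg.norm(x - xhat) / np.linalg.norm(x)
    print(f"{name}: relative error {err:.2f}")
```

On this skewed input the unrotated reconstruction error is roughly twice the rotated one; the rotation is what makes aggressive per-coordinate quantization viable at all, which is why crediting its combination with quantization is the heart of the dispute.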

  2. Fabricated theoretical claims: maliciously disparaging prior work
  • Without any basis, Google's paper characterized RaBitQ's theoretical results as "suboptimal," talking down its performance to make TurboQuant's "superiority" stand out.

  • Key evidence: in May 2025, Gao Jianyang's team corresponded by email with TurboQuant's second author, Majid Daliri, clarifying the technical errors point by point. The other side explicitly admitted the mistakes and said all authors had been informed, yet from submission through acceptance to public promotion, the paper never corrected a single error.

  3. Unfair experimental conditions: rigging the comparison

When benchmarking performance, the paper applied a double standard:

  • Testing RaBitQ: a non-official Python implementation, restricted to a single CPU core with multi-threading disabled;

  • Testing TurboQuant: accelerated on an NVIDIA A100 GPU.

These deliberately manufactured, unequal conditions produced seriously distorted experimental data, misleading both the industry and the market about how the two methods actually compare; the toy benchmark below shows how much the CPU thread limit alone can skew a result.
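The effect of the single-core restriction is easy to reproduce. This toy benchmark is not either paper's workload; the matrix multiply merely stands in for any CPU-bound kernel, and it assumes the real threadpoolctl package is installed.

```python
import time

import numpy as np
from threadpoolctl import threadpool_limits  # pip install threadpoolctl

# A toy matmul standing in for any CPU-bound kernel; sizes are arbitrary.
a = np.random.standard_normal((4096, 4096)).astype(np.float32)
b = np.random.standard_normal((4096, 4096)).astype(np.float32)

def bench(label):
    t0 = time.perf_counter()
    _ = a @ b
    print(f"{label}: {(time.perf_counter() - t0) * 1e3:.0f} ms")

bench("all cores")                 # BLAS free to use every core
with threadpool_limits(limits=1):  # artificially pin BLAS to one thread
    bench("single core")           # identical code, several times slower
```

On a typical multi-core machine the second number is several times the first, and comparing that single-thread figure against one measured on an A100 says nothing about the algorithms, only about the hardware budget each side was given.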

  3. Google's response: arrogant deflection that escalated the conflict

Faced with these well-documented accusations, TurboQuant's first author at Google, Amir Zandieh, replied only briefly:

“Random rotation and JL transformations are already standard techniques in the field, so it’s impossible to cite every related method.”

The team promised only to correct some experimental details; it refused to acknowledge the technical plagiarism, refused to add the core citations to RaBitQ, and insisted that corrections wait until after the ICLR 2026 conference ends, an apparent attempt to delay and bury the scandal.
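The "standard techniques" in Zandieh's reply refer to Johnson-Lindenstrauss (JL) random projections. For readers unfamiliar with the term, the sketch below is a textbook Gaussian JL map, not TurboQuant's transform, and the dimensions are illustrative; it demonstrates the standard property that pairwise distances survive a large dimensionality reduction up to small distortion.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n = 1024, 128, 100     # original dim, reduced dim, number of points

# Classic Johnson-Lindenstrauss map: a scaled Gaussian random projection.
P = rng.standard_normal((k, d)) / np.sqrt(k)

X = rng.standard_normal((n, d))   # random point cloud
Y = X @ P.T                       # project 1024-d points down to 128-d

# Pairwise distances are preserved up to a small multiplicative distortion.
orig = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
proj = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
mask = ~np.eye(n, dtype=bool)
ratios = proj[mask] / orig[mask]
print(f"distance ratio: mean {ratios.mean():.3f}, std {ratios.std():.3f}")
```

That the transform is standard is, of course, beside the complaint: the dispute is over whether a specific prior combination of it with quantization, RaBitQ, was credited.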

  4. The essence of the incident: big tech's academic hegemony, and a Chinese scholar's tears-and-blood fight for his rights

In his accusation, Gao Jianyang put it plainly: this is naked bullying of an academic "small fry" by a big technology company.

  • The original work of Chinese scholars (two years of effort, two top-conference papers, fully open-sourced code) was packaged as Google's own breakthrough through a "take and use" approach;

  • Even knowing there were errors, they refused to correct them, using the big-tech halo to monopolize the discourse and mislead the market and the industry;

  • Private communication went nowhere and there was no effective way to complain to the conference organizing committee, leaving public exposure as the only option: a rights defense in the form of a "tears-and-blood complaint."

In just 24 hours, Google went from "AI technology disruptor" to "academic plagiarist." This storm over a single paper not only punctured a tech giant's "innovation myth" but also laid bare the brutal reality of power imbalance in the academic world.
