Outrageous! A single Google paper crashes memory stocks, wiping out 62 billion, and it allegedly copied Chinese scholars' work? Netizens: they promised not to do evil, so did they break that promise?
(Source: Algorithm Enthusiasts)
On March 27, 2026, Google, which had just ignited global memory-related stocks with a single paper and triggered a market-value swing of over $90 billion, suddenly found itself embroiled in a dual scandal of alleged academic plagiarism and data fabrication, publicly accused of academic bullying by Chinese scholars. Overnight, it fell from the "altar" of AI breakthroughs, sparking an uproar across both the tech industry and academia.
On March 26, Google Research had released a paper accepted to ICLR 2026, titled "TurboQuant: Online Vector Quantization with Near-Optimal Distortion Rate," on the arXiv preprint platform, and promoted it heavily through official channels.
Core selling point: it claims that, without any training or fine-tuning, TurboQuant can compress the memory footprint of a large model's KV cache by 83% and speed up inference by 8x, directly targeting the core demand driving AI memory and storage chips.
Market shock: the moment the news broke, storage stocks on Wall Street and China's A-share market fell across the board. Micron dropped 4%, Western Digital 4.4%, and Seagate 5.6%; on the A-share side, GigaDevice, BFD Storage, Jiangbo Long, and others all fell by more than 5%. Global storage giants saw more than $90 billion in combined market value evaporate in a single day.
Industry praise: Cloudflare's CEO called it Google's "DeepSeek moment," and for a time the industry hailed it as a revolutionary breakthrough that would "rewrite the AI compute landscape."
The breaking point came from Gao Jianyang, a Chinese scholar, postdoctoral researcher at ETH Zurich (the Swiss Federal Institute of Technology in Zurich), and first author of the RaBitQ algorithm. He simultaneously published a long post on ICLR's OpenReview, Zhihu, and X, attaching email evidence and technical comparisons, and pointed to three core problems with TurboQuant:
1. TurboQuant's core mechanism, "random rotation + quantization transform," was not invented by Google. As early as 2024, Gao Jianyang's team had proposed the complete RaBitQ algorithm, publishing two top-conference papers in succession with the code fully open-sourced.
2. Google's paper not only failed to cite RaBitQ or objectively acknowledge the technical connection; it categorized RaBitQ as "ordinary grid-product quantization," deliberately skipping over its core innovations and packaging existing results as Google's own "original breakthrough."
3. Without any supporting analysis, the paper characterized RaBitQ's theoretical results as "suboptimal," talking down its performance to highlight TurboQuant's "superiority."
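For readers unfamiliar with the technique at the center of the dispute, here is a minimal illustrative sketch of the "random rotation + quantization" idea described above: rotate vectors by a random orthonormal matrix, then store only one bit (the sign) per coordinate plus each vector's norm. All function names and parameters are hypothetical for illustration and are not taken from either RaBitQ's or TurboQuant's actual code.

```python
import numpy as np

def random_rotation(dim, seed=0):
    # Draw a random orthonormal matrix via QR decomposition of a Gaussian matrix.
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q

def quantize(vectors, rotation):
    # Rotate, then keep only the sign of each coordinate (1 bit per dimension),
    # plus the per-vector norm so magnitude can be restored later.
    rotated = vectors @ rotation.T
    norms = np.linalg.norm(rotated, axis=1, keepdims=True)
    codes = np.sign(rotated).astype(np.int8)
    return codes, norms

def dequantize(codes, norms, rotation):
    # Approximate reconstruction: rescale the sign vector to unit norm,
    # restore the stored magnitude, then undo the rotation (Q^-1 = Q^T).
    dim = codes.shape[1]
    approx = codes / np.sqrt(dim) * norms
    return approx @ rotation
```

The random rotation spreads each vector's energy roughly evenly across coordinates, which is what makes such an aggressive per-coordinate quantizer tolerable; real systems use more bits and tighter error bounds, but the storage saving (1 bit per dimension versus 32-bit floats) illustrates why a result like this would rattle the memory-chip market.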
Key evidence: in May 2025, Gao Jianyang's team corresponded by email with TurboQuant's second author, Majid Daliri, clarifying the technical errors point by point. Daliri explicitly admitted the mistakes and said he had informed all the authors, yet from submission to acceptance to public promotion, the paper never corrected a single error.
The paper also applied double standards in its performance benchmarks:
For RaBitQ: a non-official Python implementation, restricted to a single CPU core with multi-threading disabled;
For TurboQuant: acceleration on an NVIDIA A100 GPU.
These deliberately unfair conditions produced seriously distorted experimental data, misleading both the industry and the market about how the two methods actually compare.
Faced with these well-documented accusations, TurboQuant's first author, Amir Zandieh, replied only briefly: he promised to correct some experimental details, refused to acknowledge the technical plagiarism, refused to add the core citations to RaBitQ, and insisted that any corrections wait until after the ICLR 2026 conference ends, in an apparent attempt to delay and bury the scandal.
In his accusation, Gao Jianyang put it plainly: this is naked bullying of academic "small fry" by a big technology company.
The original achievements of Chinese scholars, two years of work, two top-conference papers, and open-sourced code, were repackaged by Google's team as its own breakthrough in a "take-and-use" fashion;
Even knowing the paper contained errors, the authors refused to correct them, using the halo of big tech to monopolize the discourse and mislead the market and the industry;
Private communication went nowhere and there was no effective channel for complaining to the conference organizing committee, so in the end the team could only go public, defending its rights in the form of a "complaint written in blood and tears."
In just 24 hours, Google went from "AI technology disruptor" to alleged "academic plagiarizer." The storm set off by a single paper not only punctured a tech giant's "innovation myth," but also exposed the brutal reality of power imbalance in the academic world.
(This article is optimized by AI)