Storage chips rise against the trend! Institutions: a buying opportunity

A Google paper on a new algorithm has left storage chip concept stocks “badly hurt”!

On Friday, even as major U.S. stock indices fell sharply, U.S. storage chip stocks rose against the trend. During the session, SanDisk gained more than 5% and Micron Technology more than 3%. By the close, SanDisk was up 2.10%, Micron Technology 0.50%, Seagate Technology 0.34%, and Western Digital 0.73%. These stocks had already suffered a broad sell-off the day before: at Thursday's close, SanDisk had plunged over 11%, Seagate Technology had fallen over 8%, Western Digital over 7%, and Micron Technology nearly 7%.

Analysts indicated that the significant drop in storage chip stocks on Thursday might have been due to a market misinterpretation. The ultra-efficient AI memory compression algorithm TurboQuant mentioned in the Google paper only applies to the key-value cache during the inference stage and does not affect the high-bandwidth memory (HBM) occupied by model weights, nor is it related to AI training tasks.

Another analyst said that advanced compression technology merely eases bottlenecks and will not destroy demand for DRAM and flash memory. Investors may have taken profits on the Google news, but market demand for memory remains very strong. The short-term pullback in memory stocks is seen as an "entry opportunity" rather than a turning point.

Storage chip stocks hit by Google’s new algorithm

The AI market's latest "ghost story" is back: Google has publicly promoted research on a new algorithm that can significantly reduce memory usage, triggering a steep recent decline in storage chip stocks.

On Thursday, SanDisk plummeted over 11%, Micron Technology fell nearly 7%, SK Hynix dropped over 6%, Samsung Electronics fell nearly 5%, and Kioxia dropped nearly 6%. It was estimated that the market capitalization of major global memory giants evaporated by over $90 billion in a single day on Thursday. On Friday, in the U.S. stock market, storage chip concept stocks rose against the trend, with SanDisk up over 2% and Micron Technology up 0.50%.

In recent months, storage chip companies have performed strongly, as a surge in investment in artificial intelligence infrastructure has led to supply shortages, sending chip prices and profits soaring. As of Wednesday this week, shares of SK Hynix and Samsung Electronics had surged more than 50% this year, while Kioxia's stock had more than doubled.

The trigger for this decline was "TurboQuant," a paper that Google Research is set to officially present at the International Conference on Learning Representations (ICLR 2026). The Google team claims that two techniques, PolarQuant (polar-coordinate quantization) and QJL (a quantized Johnson–Lindenstrauss transform), compress the KV cache to 3-bit precision with what they describe as zero accuracy loss, cutting memory usage by at least six times. The algorithm also delivered up to an 8-fold speedup on H100 GPU accelerators compared with unquantized key-value caches.
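The scale of the claimed savings can be sanity-checked with a back-of-the-envelope KV-cache footprint calculation. The sketch below uses hypothetical model dimensions and ignores the scale factors and other metadata a real quantizer must store; it illustrates the bit-width arithmetic only, not Google's actual implementation:

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bits_per_value):
    """Approximate KV-cache size: keys and values for every layer,
    ignoring quantization metadata such as per-group scale factors."""
    values = 2 * layers * kv_heads * head_dim * seq_len  # 2 = keys + values
    return values * bits_per_value / 8  # bits -> bytes

# Hypothetical 70B-class model serving a 32k-token context.
layers, kv_heads, head_dim, seq_len = 80, 8, 128, 32_768

fp16 = kv_cache_bytes(layers, kv_heads, head_dim, seq_len, 16)
q3 = kv_cache_bytes(layers, kv_heads, head_dim, seq_len, 3)

print(f"fp16 KV cache:  {fp16 / 2**30:.1f} GiB")  # 10.0 GiB
print(f"3-bit KV cache: {q3 / 2**30:.1f} GiB")    # 1.9 GiB
print(f"compression:    {fp16 / q3:.1f}x")        # 16/3 ≈ 5.3x
```

Note that the raw 16-to-3-bit ratio is about 5.3x; the paper's "at least six times" figure presumably reflects a different baseline or additional savings in the full scheme.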

Google promoted this research on the X platform this week, even though it was initially released last year. Investors may be concerned that this will reduce the demand for memory from hyperscale data center operators, thus driving down the prices of components that are also used in smartphones and consumer electronics.

Institutions: Market may have misinterpreted

Morgan Stanley stated in a recent research report that the market may have misread the situation. This technology only applies to the key-value cache during the inference stage and does not affect the high-bandwidth memory (HBM) occupied by model weights, nor is it related to AI training tasks. Analysts emphasized that the so-called “6-fold compression” does not indicate a reduction in total storage demand but rather an increase in throughput per GPU through efficiency improvements.

Morgan Stanley analyst Shawn Kim pointed out that the impact of Google’s research on the industry should be seen as more positive, as it addresses a key bottleneck. This technology enhances the efficiency of the key-value cache used for inference (i.e., running AI models). He wrote: “If models can operate with significantly reduced memory requirements without losing performance, the service cost for each query will decrease significantly, making AI deployment more profitable.” Kim stated that considering the investment return opportunities, TurboQuant is beneficial for hyperscale enterprises. In the long run, this may also be advantageous for memory manufacturers, as “lower single token costs can lead to higher product adoption demand.”

Morgan Stanley cited the “Jevons Paradox” in economics to explain the long-term impact: while technological efficiency improvements reduce unit costs, they often lead to overall demand expansion due to lowered usage thresholds.

KC Rajkumar, an analyst at Lynx Equity Strategies, pointed out that some media reports contain exaggerations. Current inference models have widely adopted 4-bit quantized data, and Google’s so-called “8-fold performance improvement” is based on comparisons with outdated 32-bit models. “However, due to extreme supply constraints, this will hardly reduce the demand for memory and flash memory in the next 3 to 5 years,” Rajkumar wrote, noting that advanced compression technology merely reduces bottlenecks and will not destroy the demand for DRAM/flash memory.
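Rajkumar's baseline objection reduces to bit-width arithmetic. The sketch below is illustrative only (real quantizers also store scale factors and zero points that shrink the effective gain); it shows why a speedup quoted against a 32-bit baseline overstates the improvement over systems that already run 4-bit:

```python
def raw_compression(baseline_bits: int, target_bits: int) -> float:
    """Bit-width ratio: an upper bound on raw memory savings,
    ignoring quantization metadata such as scales and zero points."""
    return baseline_bits / target_bits

# The headline "8x" framing compares against full-precision fp32:
print(f"{raw_compression(32, 4):.1f}x")  # 8.0x -- 4-bit vs fp32
# Against the 4-bit quantization already common in inference,
# a move to 3-bit buys far less:
print(f"{raw_compression(4, 3):.2f}x")   # 1.33x
```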

Wells Fargo analyst Andrew Rocha stated that the existence of compression algorithms has never fundamentally changed the overall scale of hardware procurement. By significantly reducing the service cost per query, such technologies enable models that could only run on expensive cloud clusters to be migrated to local environments, effectively lowering the barriers to AI scaling deployment.

Four hyperscale enterprises, led by Amazon and Google, plan to invest about $650 billion this year in building data centers, purchasing NVIDIA’s AI accelerators and related storage chips. SK Group Chairman Chey Tae-won recently stated that the tight supply of storage chips will continue until 2030.

From a supply chain perspective, DRAM demand for servers is expected to grow by 39% in 2026, and HBM demand is expected to increase by 58% annually. The optimization effects of TurboQuant may be overshadowed by the industry’s growth wave.

Jordan Klein, an analyst at Mizuho Securities, believes the current pullback in memory stocks is more of an "entry opportunity" than a turning point. Klein wrote in a report that after strong gains in 2025 and early 2026, bulls in memory stocks began to waver. Although the memory industry is known for dramatic cyclical swings, he emphasized that the recent sell-off fits a familiar pattern.

Mizuho noted that such sell-offs occur every few months and are neither signals of a peak nor reasons to sell. In fact, buying the dips has tended to be profitable.

Proofread by: Yao Yuan

(Editor in charge: Dong Pingping)

