Storage chips rise against the trend! Institutions: a buying opportunity

A Google paper about a new algorithm has caused storage-chip concept stocks to take a major hit!

On Friday, amid broad declines across major U.S. stock indexes, U.S. storage-chip concept stocks rose against the trend. During the day, SanDisk was up more than 5% at one point, while Micron Technology was up more than 3%. By the close, SanDisk was up 2.10%, Micron Technology was up 0.50%, Seagate Technology was up 0.34%, and Western Digital was up 0.73%. Just a day earlier, the same stocks had been hit by a large-scale selloff: at Thursday's close, SanDisk had plunged more than 11%, Seagate Technology had fallen more than 8%, Western Digital more than 7%, and Micron Technology nearly 7%.

Some analysts said the sharp drop in storage-chip stocks on Thursday may have been due to a misunderstanding in the market. The TurboQuant ultra-efficient AI memory compression algorithm mentioned in the Google paper applies only to key-value cache during the inference phase, does not affect the high-bandwidth memory (HBM) used for model weights, and is unrelated to AI training tasks.

Other analysts said that advanced compression technology only reduces bottlenecks and does not destroy demand for DRAM/flash. Investors may have taken profits from Google’s news, but demand for memory remains very strong. A near-term pullback in memory stocks is a “chance to get on board,” not a turning point in the stock price.

Storage-chip stocks take a hit from Google’s new algorithm

The “AI horror story” is back. Google has publicly released research on a new algorithm that can greatly reduce memory usage, and storage-chip-related shares have sold off sharply as a result.

On Thursday, SanDisk fell more than 11%, Micron Technology fell nearly 7%, SK hynix fell nearly 6%, Samsung Electronics fell nearly 5%, and Kioxia fell nearly 6%. According to estimates, the market value of the world’s major memory giants evaporated by more than $90 billion in a single day on Thursday. On Friday, in the U.S. stock market, storage-chip concept stocks rose against the trend; SanDisk rose more than 2%, and Micron Technology rose 0.50%.

In the past few months, storage-chip companies performed strongly because a surge in investment in artificial intelligence infrastructure led to supply shortages, driving chip prices higher and boosting profits. As of this Wednesday, SK hynix and Samsung Electronics shares had risen more than 50% this year, while Kioxia's stock price had more than doubled.

The trigger for this downturn was the paper “TurboQuant,” which Google Research is set to officially present at the International Conference on Learning Representations (ICLR 2026). Google's team said that two new techniques, PolarQuant (polar-coordinate quantization) and QJL (quantized Johnson-Lindenstrauss transform), compress the KV cache to 3-bit precision with what the authors describe as zero loss, shrinking its memory usage by at least a factor of six. On H100 GPU accelerators, the algorithm delivered up to an 8x performance improvement over an unquantized key-value cache.
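The article does not spell out how PolarQuant or QJL work internally. As a rough illustration of the storage arithmetic only (generic low-bit quantization, not Google's actual method; all shapes and numbers below are made up), here is a minimal numpy sketch of 3-bit uniform quantization of a KV-cache-shaped tensor. Simply storing 16-bit values as 3-bit codes already gives 16/3 ≈ 5.3x, the same ballpark as the reported “at least 6x”:

```python
import numpy as np

def quantize_uniform(x: np.ndarray, bits: int = 3):
    """Plain per-tensor uniform quantization. Illustrative only: NOT
    Google's PolarQuant/QJL, just the generic low-bit storage idea."""
    levels = 2**bits - 1                       # 3 bits -> codes 0..7
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((x - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def dequantize(codes, scale, lo):
    return codes.astype(np.float32) * scale + lo

# Toy KV cache for one layer: (heads, tokens, head_dim), fp16 baseline
kv = np.random.randn(8, 128, 64).astype(np.float16)
codes, scale, lo = quantize_uniform(kv.astype(np.float32))

fp16_bits  = kv.size * 16
three_bits = kv.size * 3       # ignores the tiny per-tensor scale/offset
print(f"compression: {fp16_bits / three_bits:.1f}x")  # 16/3, about 5.3x
```

The real techniques claim this compression with no accuracy loss; naive uniform quantization like the sketch above introduces an error of up to half a quantization step per value, which is exactly the gap the paper's methods are designed to close.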

This week, Google promoted the research on the X platform, even though it was originally released last year. Investors may worry that this could reduce demand from hyperscale data center operators for memory, thereby lowering the prices of the same components used in smartphones and consumer electronics.

Institutions: The market may be misreading it

In its latest research note, Morgan Stanley said the market may be misreading the situation. The technology applies only to key-value cache during the inference phase and does not affect the high-bandwidth memory (HBM) used for model weights, nor is it related to AI training tasks. The analyst emphasized that the so-called “6x compression” is not a reduction in total storage demand; rather, it increases throughput per GPU through efficiency improvements.

Morgan Stanley analyst Shawn Kim said the industry impact of Google's research should be seen as positive, because it addresses a key bottleneck: the efficiency of the key-value cache used for inference (i.e., running AI models). He wrote, “If the model can run with a major reduction in memory demand without losing performance, the service cost per query would decline significantly, making AI deployment more profitable.” Kim said that, given the opportunity for investment returns, TurboQuant is positive for hyperscalers. In the long run, it may also benefit memory manufacturers, because “lower per-token costs can drive higher product adoption demand.”

Morgan Stanley cited the “Jevons paradox” from economics to explain the long-term impact: although technical efficiency improves and lowers unit costs, overall demand often expands due to a decline in the usage threshold.

Lynx Equity Strategies analyst KC Rajkumar said that some media reports contain exaggeration. Current inference models already widely adopt 4-bit quantized data. Google’s so-called “8x performance improvement” is based on comparisons with outdated 32-bit models. “However, due to extremely tight supply, this almost certainly will not reduce future demand for memory and flash over the next 3–5 years.” Rajkumar wrote that advanced compression technology merely reduces bottlenecks and does not destroy demand for DRAM/flash.

Wells Fargo analyst Andrew Rocha said that the existence of a compression algorithm has never fundamentally changed the overall scale of hardware procurement. By sharply lowering the service cost per query, such technologies allow models that would otherwise only run on expensive cloud clusters to be migrated locally, effectively lowering the threshold for scaling AI deployments.

Four hyperscale companies led by Amazon and Google plan to spend about $650 billion this year to build data centers and secure NVIDIA’s AI accelerators and related storage-chip products. SK Group Chairman Choi Tae-won recently said that the tight supply situation for storage chips will continue until 2030.

From a supply-chain perspective, 2026 server DRAM demand is expected to grow by 39%, and HBM demand is expected to increase by 58% year over year. The optimization effect of TurboQuant may be overwhelmed by the industry’s growth wave.

Mizuho technology specialist Jordan Klein believes the current pullback in memory stocks is more of a “chance to get on board” than a stock-price turning point. In his report, Klein wrote that after strong rallies in 2025 and the beginning of 2026, memory-stock bulls began to waver. He emphasized that although the memory industry is known for dramatic cyclical volatility, the recent selloff fits a familiar pattern.

Mizuho said this kind of selloff occurs every few months. It is not a signal of a market peak, nor a reason to sell; in fact, buying the dips has been profitable.


Editor in charge: 韦子蓉
