AI has simultaneously caused shortages and surpluses of memory.
On March 29, memory module prices plunged simultaneously in Huaqiangbei and on the U.S. retail market. Corsair’s 32GB DDR5-6400 kit fell from $490 to $380, a 22% drop in a single day. In China, 32GB high-frequency DDR5 kits shed 800 yuan in a single week. Channel dealers panicked and dumped inventory; one distributor said, “In a single day, it dropped by over a hundred bucks.”
But put that drop on a longer timeline and the picture looks completely different: even after the decline, today’s DDR5 prices are still four times what they were in July 2025. What happened was a precise supply-demand mismatch in the AI industry chain: one and the same force first created the shortage, then created the panic over oversupply.
Roller coaster: up 540% in eight months, down 22% in one month
In July 2025, a mainstream 32GB DDR5-6000 kit on the U.S. retail market cost just $77. By January 2026, the same kit’s price had surged to $490. In eight months, a 540% increase.
The price hike wasn’t driven by consumers suddenly rushing to upgrade their PCs. According to TrendForce, DRAM contract prices rose 90%-95% quarter over quarter in the first quarter of 2026, with PC DRAM up more than 100%, the largest quarterly increase on record. Behind all of this was the AI infrastructure buildout’s intense demand for one specific type of memory.
Then, on March 25, Google released a compression algorithm called TurboQuant. Four days later, memory prices collapsed.
Where did the capacity go? HBM has eaten your memory sticks
To understand this round of price increases, you first have to understand one key technical parameter. HBM (high-bandwidth memory, the specialized memory used in NVIDIA’s AI chips) consumes three times the wafer area per GB of standard DDR5. As Tom’s Hardware reports, this means the same wafer yields only one-third the capacity as HBM that it would as DDR5.
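The wafer trade-off described above can be sketched with a simple back-of-envelope model. The numbers below are illustrative assumptions (a nominal 1,000 GB-per-wafer baseline), not actual fab figures; only the 3x area penalty and the 40% diversion share come from the article.

```python
# Back-of-envelope model of the HBM/DDR5 wafer trade-off.
# Baseline wafer yield (1000 GB if cut entirely as DDR5) is an
# assumption for illustration; the 3x area-per-GB penalty for HBM
# and the 40% capacity diversion are the figures cited in the text.

def ddr5_equivalent_gb(wafer_gb_as_ddr5: float, hbm_share: float,
                       hbm_area_penalty: float = 3.0) -> float:
    """Total GB shipped per wafer when a fraction `hbm_share` of the
    wafer area goes to HBM, which needs `hbm_area_penalty`x the area
    per GB compared with DDR5."""
    ddr5_part = (1 - hbm_share) * wafer_gb_as_ddr5
    hbm_part = hbm_share * wafer_gb_as_ddr5 / hbm_area_penalty
    return ddr5_part + hbm_part

baseline = ddr5_equivalent_gb(1000, hbm_share=0.0)  # all DDR5
shifted = ddr5_equivalent_gb(1000, hbm_share=0.4)   # 40% diverted to HBM
print(baseline, shifted)  # 1000.0 vs ~733.3 GB total per wafer
```

Under these assumptions, diverting 40% of wafer area to HBM cuts consumer DDR5 output from 1,000 GB to 600 GB per wafer, while total shipped capacity falls to roughly 733 GB: the same silicon simply produces less memory.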
Samsung, SK hynix, and Micron—the three major memory manufacturers—made a rational choice in the face of HBM’s higher profit margins: up to 40% of advanced-process wafer capacity was redirected to HBM production. According to TrendForce, by the first quarter of 2026 DDR5’s profit margin was expected to exceed HBM3e’s for the first time, a sign of just how severely consumer memory supply had been squeezed.
Micron’s choice was the most aggressive. In December 2025, the company announced it would shut down Crucial, its 29-year-old consumer brand, exit the consumer memory and storage market entirely, and shift wholly to enterprise and AI customers. According to Micron’s investor relations announcements, total revenue for fiscal 2025 was $37.38 billion, with data center and AI applications accounting for 56% of it. The consumer market isn’t worth it anymore.
SK hynix’s HBM capacity is already sold out through the end of 2026. Samsung plans to raise monthly HBM capacity from 170,000 wafers to 250,000 by the end of 2026. New fabs (Samsung’s P4L and SK hynix’s M15X) won’t reach mass production until 2027-2028 at the earliest. In other words, the consumer DRAM shortage is structural; it won’t be relieved in just one or two quarters.
A reversal of the landscape: SK hynix breaks Samsung’s 40-year dominance
This capacity shift also rewrote the power map in the memory industry. According to TrendForce data, in the second quarter of 2025, SK hynix captured 62% of the HBM market by leveraging its deep ties with NVIDIA, while Samsung had just 17% and Micron 21%.
An even bigger milestone is the revenue-side reversal. According to TrendForce’s Q3 2025 report, SK hynix topped the DRAM revenue chart for the first time with $13.75 billion for the quarter, with Samsung close behind at $13.50 billion. The gap is only $250 million, but it marks the first time in nearly 40 years that Samsung has lost the #1 spot in memory revenue. According to CNBC, SK hynix’s full-year operating profit in 2025 also surpassed Samsung’s for the first time.
HBM’s first-mover advantage has given SK hynix real leverage, but the race is far from over. Samsung is pushing hard to catch up on HBM4’s mass-production timeline, while Micron, even as it abandons the consumer market, is posting the fastest enterprise and AI revenue growth of the three (up 53.2% quarter over quarter in Q3).
How can one algorithm shake the logic behind the price surge?
On March 25, Google presented the TurboQuant algorithm at ICLR 2026. The algorithm does one thing: it compresses the KV cache used during large language model inference (the key-value cache, the largest consumer of memory during inference) from FP16 precision down to 3-bit. Memory usage drops by at least 6x, while attention computation speeds up by as much as 8x on H100 GPUs. According to Google’s research blog, it shows zero loss of accuracy across five long-context benchmarks such as Needle-in-a-Haystack.
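To see why the market reacted, it helps to size a KV cache. The sketch below uses hypothetical model dimensions (loosely 70B-class: 80 layers, 8 KV heads, 128-dim heads), which are assumptions for illustration, not figures from the TurboQuant paper; note also that the raw bit-width ratio alone gives 16/3 ≈ 5.3x, so the article’s “at least 6x” presumably includes savings beyond bit width that are not modeled here.

```python
# Rough KV-cache sizing, to show why FP16 -> 3-bit compression matters.
# Model dimensions are hypothetical (loosely 70B-class); they are
# assumptions for illustration, not figures from the TurboQuant paper.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, batch, bits):
    # 2x for keys and values; bits / 8 converts bits to bytes
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bits / 8

args = dict(layers=80, kv_heads=8, head_dim=128, seq_len=128_000, batch=16)
fp16 = kv_cache_bytes(**args, bits=16)
q3 = kv_cache_bytes(**args, bits=3)  # ignores quantization metadata overhead
print(f"FP16: {fp16/2**30:.0f} GiB, 3-bit: {q3/2**30:.0f} GiB, "
      f"ratio: {fp16/q3:.1f}x")
# prints "FP16: 625 GiB, 3-bit: 117 GiB, ratio: 5.3x"
```

At this scale, a 128K-context batch of 16 requests needs hundreds of gigabytes of KV cache at FP16; cutting that by a factor of five or six directly shrinks the DRAM that inference fleets must provision.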
The market quickly did the math. If TurboQuant, or algorithms like it, were widely adopted by mainstream AI companies, incremental DRAM demand for AI inference would shrink dramatically. And the core narrative behind the past half year of memory price hikes had been precisely that AI infrastructure consumes too much memory capacity.
Four days later, channel confidence collapsed.
It’s worth pointing out that TurboQuant targets the KV cache on the inference side, not HBM demand on the training side; an inference optimization won’t change HBM’s supply-demand balance in the short term. But the market doesn’t always make that distinction. According to Sina Finance, in the run-up to the plunge, domestic channels were flooded with speculative stockpilers from outside the industry, drawn in by the rising prices. High prices had already cut retail sales by more than 60%, and as stockpilers’ funding chains tightened, forced selling set off a chain reaction that amplified the decline.
A single AI industry chain manufactured both the memory shortage and the oversupply panic. HBM’s squeeze on physical production capacity created the shortfall in consumer memory supply, while TurboQuant’s algorithmic efficiency breakthrough slashed expectations for AI memory demand. The force that manufactured the surge and the force that manufactured the crash are one and the same.