Beyond the Middle East conflict, Nvidia also played a part in the South Korean stock market's crash.


Over the past two days, South Korea’s benchmark KOSPI index has fallen more than 10% each day, marking the largest two-day decline since 2008.

The market generally believes that the global risk-averse sentiment triggered by Trump’s military actions against Iran has caused Asian stock markets, including Korea, to suffer heavy losses. However, recent analysis suggests that Nvidia has also contributed to Korea’s sharp decline.

A technical rumor about Nvidia hit Korean domestic stocks especially hard. According to analyst Jukan of Citrini7, citing independent research firm KIS, Nvidia is reportedly developing a new inference chip built on Groq's on-chip SRAM architecture, with plans to announce it at the GTC conference in March.

The news weighed on Korean domestic stocks, as investors worried that a shift to SRAM would reduce demand for main memory, including HBM.

However, the Korean stock market rebounded strongly today. Latest data shows that the KOSPI index rose by 11%, with tech giant Samsung Electronics surging 13% and SK Hynix soaring 15%.

SRAM inference chips' impact on HBM and DRAM? Possibly a misjudgment

However, the market may have misjudged the impact of SRAM inference chips.

KIS clearly states: “The claim that the emergence of ‘low-cost’ SRAM inference chips will reduce the use of existing main memories like HBM reflects a poor understanding of memory technology.”

From a physical standpoint, SRAM cells are larger and less dense than DRAM cells: a standard SRAM cell uses six transistors per bit, while a DRAM cell needs only one transistor and one capacitor. The result is a significantly higher cost per bit; for the same capacity, SRAM typically requires 5 to 10 times the die area of DRAM. Historically, SRAM has therefore been used for caches and on-chip buffers that demand extremely low latency, not as main memory for storing large amounts of data.
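The cost gap follows directly from the die-area gap. The sketch below makes that arithmetic explicit; the DRAM cell area and silicon cost figures are assumed for illustration (not vendor data), and only the 5-10x area ratio comes from the KIS analysis above.

```python
# Illustrative back-of-the-envelope comparison of SRAM vs. DRAM cost per bit.
# DRAM_CELL_AREA_UM2 and COST_PER_MM2 are assumed placeholder values;
# only the ~5-10x area ratio is taken from the KIS claim in the article.

def cost_per_gib(cell_area_um2: float, cost_per_mm2_usd: float) -> float:
    """Cost of one GiB of memory given per-bit cell area and silicon cost."""
    bits = 8 * 1024**3                            # bits in one GiB
    area_mm2 = bits * cell_area_um2 / 1e6         # um^2 -> mm^2
    return area_mm2 * cost_per_mm2_usd

DRAM_CELL_AREA_UM2 = 0.002   # assumed DRAM cell area, um^2 per bit
SRAM_AREA_RATIO = 7          # midpoint of the 5-10x range cited by KIS
COST_PER_MM2 = 0.10          # assumed processed-silicon cost, USD per mm^2

dram = cost_per_gib(DRAM_CELL_AREA_UM2, COST_PER_MM2)
sram = cost_per_gib(DRAM_CELL_AREA_UM2 * SRAM_AREA_RATIO, COST_PER_MM2)
print(f"DRAM: ${dram:.2f}/GiB, SRAM: ${sram:.2f}/GiB ({sram / dram:.0f}x)")
```

Because cost scales linearly with area, the per-bit cost ratio simply equals the area ratio, which is why SRAM cannot economically substitute for bulk main memory regardless of the exact figures chosen.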

SRAM may drive diversification of memory hierarchy

SRAM architecture is not a replacement for DRAM but an independent option. Compared to DRAM, SRAM-based architectures offer much lower access latency and minimal data movement.

KIS analysis states that Nvidia’s plan to utilize Groq architecture is aimed at optimizing specific inference workloads that are difficult for GPUs to handle. Adopting SRAM architecture should be understood as a specialized choice for certain data center workloads requiring ultra-low latency, as well as real-time physical AI edge applications such as robotics and autonomous driving. In fact, OpenAI has already deployed Cerebras’ SRAM chips in its data centers, and inference services built on these chips charge higher API fees than standard GPU inference services.

As the AI industry advances, the adoption of Groq-style SRAM architectures will further diversify memory tiers within AI infrastructure. HBM and DRAM will continue to serve as the main memory for large-scale model training and general inference servers. KIS concludes: "Memory hierarchies encompassing SRAM, HBM, and DRAM will become increasingly multi-layered, ultimately expanding the total addressable market (TAM) for the entire memory industry."
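The complementary-tiers argument above can be sketched as a toy workload-to-tier selector. The tier characteristics and thresholds are rough, assumed orders of magnitude for illustration only, not sourced figures; the point is that workload requirements, not raw cost per bit, determine which tier is used.

```python
# Hypothetical sketch of the memory tiers described in the article.
# All latency/capacity figures and thresholds are illustrative assumptions.
MEMORY_TIERS = {
    "on-chip SRAM": "ultra-low-latency inference (Groq/Cerebras-style)",
    "HBM": "large-scale model training and general GPU inference",
    "DRAM": "host main memory for very large working sets",
}

def pick_tier(needs_ultra_low_latency: bool, working_set_gb: float) -> str:
    """Toy selector mirroring the article's argument: each tier serves a
    distinct workload, so SRAM complements rather than replaces HBM/DRAM."""
    if needs_ultra_low_latency and working_set_gb < 10:   # assumed threshold
        return "on-chip SRAM"
    return "HBM" if working_set_gb < 200 else "DRAM"      # assumed threshold

print(pick_tier(True, 1))     # a robotics/edge-style real-time workload
print(pick_tier(False, 500))  # a large training-scale working set
```

Under this framing, adding an SRAM tier expands the hierarchy rather than shrinking demand for the tiers below it, which is the basis of KIS's TAM-expansion conclusion.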

Risk Warning and Disclaimer

Market risks exist; investments should be made cautiously. This article does not constitute personal investment advice and does not consider individual users’ specific investment goals, financial situations, or needs. Users should consider whether any opinions, viewpoints, or conclusions herein are suitable for their particular circumstances. Investment is at your own risk.
