Beyond the turmoil in the Middle East, Nvidia has also been blamed for a crash in the South Korean stock market.
Over the past two days, South Korea’s benchmark KOSPI index has fallen more than 10% each day, marking the largest two-day decline since 2008.
The market generally believes that the global risk-averse sentiment triggered by Trump’s military actions against Iran has caused Asian stock markets, including Korea, to suffer heavy losses. However, recent analysis suggests that Nvidia has also contributed to Korea’s sharp decline.
A technical rumor about Nvidia hit Korean domestic stocks directly. According to analyst Jukan of Citrini7, citing independent research firm KIS, Nvidia is reportedly developing a new inference chip based on Groq’s on-chip SRAM architecture, with plans to announce it at the GTC conference in March.
This news caused Korean domestic stocks to weaken, as investors worry that the use of SRAM will reduce demand for main memory, including HBM.
However, the Korean stock market rebounded strongly today: the latest data show the KOSPI index up 11%, with tech giant Samsung Electronics surging 13% and SK Hynix soaring 15%.
SRAM inference chips’ impact on HBM and DRAM? Possibly a misjudgment
However, the market may have misjudged the impact of SRAM inference chips.
KIS clearly states: “The claim that the emergence of ‘low-cost’ SRAM inference chips will reduce the use of existing main memories like HBM reflects a poor understanding of memory technology.”
From a physical perspective, SRAM cells are larger and less dense than DRAM, resulting in significantly higher cost per bit. For the same capacity, SRAM typically requires 5 to 10 times the die area of DRAM. Historically, SRAM has been used for caches or on-chip buffers requiring extremely low latency, rather than as the main memory for storing large amounts of data.
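The density gap above translates directly into a cost gap. A minimal sketch of the arithmetic, assuming equal wafer cost per unit area for both technologies (all figures are illustrative placeholders, not vendor data):

```python
# Rough cost-per-bit comparison of SRAM vs. DRAM.
# All numbers are illustrative assumptions, not vendor data.

def cost_per_bit(area_per_bit: float, cost_per_area: float) -> float:
    """Cost per bit = die area consumed per bit x wafer cost per unit area."""
    return area_per_bit * cost_per_area

# Normalize DRAM to 1 unit of die area per bit; the article cites SRAM
# needing roughly 5-10x the die area for the same capacity.
dram_area = 1.0
sram_area_low, sram_area_high = 5.0, 10.0
cost_per_area = 1.0  # same process cost assumed for both, for simplicity

dram_cost = cost_per_bit(dram_area, cost_per_area)
sram_low = cost_per_bit(sram_area_low, cost_per_area)
sram_high = cost_per_bit(sram_area_high, cost_per_area)

# Under equal process cost, the cost-per-bit ratio simply tracks the
# die-area ratio, so SRAM lands at 5x to 10x the cost of DRAM per bit.
print(f"SRAM cost per bit: {sram_low / dram_cost:.0f}x to "
      f"{sram_high / dram_cost:.0f}x that of DRAM")
```

This is why SRAM is economical only in small capacities (caches, on-chip buffers) and cannot displace DRAM or HBM as bulk main memory.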
SRAM may drive diversification of memory hierarchy
SRAM architecture is not a replacement for DRAM but an independent option. Compared to DRAM, SRAM-based architectures offer much lower access latency and minimal data movement.
KIS analysis states that Nvidia’s plan to utilize Groq architecture is aimed at optimizing specific inference workloads that are difficult for GPUs to handle. Adopting SRAM architecture should be understood as a specialized choice for certain data center workloads requiring ultra-low latency, as well as real-time physical AI edge applications such as robotics and autonomous driving. In fact, OpenAI has already deployed Cerebras’ SRAM chips in its data centers, and inference services built on these chips charge higher API fees than standard GPU inference services.
As the AI industry advances, the adoption of Groq-based SRAM architectures will further diversify memory tiers within AI infrastructure, while HBM and DRAM continue to serve as the main memory for large-scale model training and general inference servers. KIS concludes: “Memory hierarchies encompassing SRAM, HBM, and DRAM will become increasingly multi-layered, ultimately expanding the total addressable market (TAM) for the entire memory industry.”
Risk Warning and Disclaimer
Market risks exist; investments should be made cautiously. This article does not constitute personal investment advice and does not consider individual users’ specific investment goals, financial situations, or needs. Users should consider whether any opinions, viewpoints, or conclusions herein are suitable for their particular circumstances. Investment is at your own risk.