AI-driven storage keeps evolving: Samsung's PIM technology nears mass production, potentially bypassing CPUs and GPUs to compute directly in memory
AI is reshaping the supply-and-demand landscape of the storage market with unprecedented force, and it is also giving rise to a wave of new technologies. Following cutting-edge approaches such as HBF and H³, a new direction is now emerging in the storage field.
According to media reports, Samsung Electronics plans to apply PIM technology to LPDDR5X memory. Samsung is currently developing LPDDR5X PIM with major clients, with samples expected in the second half of this year. The two sides are also exploring standards for applying PIM to the next-generation LPDDR6 specification.
PIM, short for Processing in Memory, integrates processing units (ALUs) directly into memory modules. Whereas conventional architectures must transfer data to a CPU or GPU for processing, PIM performs computations inside the memory itself, which is expected to break through the "memory wall."
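To illustrate why computing inside memory matters, the toy sketch below counts the bytes that cross the memory bus with and without in-memory compute for a simple vector operation. The vector size, element width, and traffic model are hypothetical assumptions for illustration only, not Samsung's design.

```python
# Toy comparison (hypothetical numbers): bytes moved over the memory bus
# for summing two large vectors, conventional vs. PIM-style architecture.

VECTOR_ELEMS = 1_000_000   # elements per operand (assumed)
BYTES_PER_ELEM = 4         # 32-bit values (assumed)

def bytes_moved_conventional(n, width):
    # A CPU/GPU must read both operands from DRAM and write the result
    # back: two reads plus one write cross the memory bus.
    return 3 * n * width

def bytes_moved_pim(n, width):
    # With ALUs inside the memory, operands never cross the bus; only
    # the result vector is read back by the host once.
    return n * width

conv = bytes_moved_conventional(VECTOR_ELEMS, BYTES_PER_ELEM)
pim = bytes_moved_pim(VECTOR_ELEMS, BYTES_PER_ELEM)
print(f"conventional: {conv} bytes, PIM: {pim} bytes, saving {1 - pim / conv:.0%}")
```

Under these assumptions, bus traffic drops by two thirds; the real gain depends on the workload's ratio of computation to data movement.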
In a recent keynote at SEMICON Korea 2026, Samsung Electronics DRAM design team leader Sun Gyo-min stressed the necessity of PIM technology: "Currently, due to insufficient memory bandwidth, AI cannot fully leverage GPU performance." In his view, PIM can not only significantly increase bandwidth but also greatly improve energy efficiency.
Samsung has completed proof-of-concept (PoC) testing for HBM-PIM and related products and is moving into the commercialization phase in preparation for mass production. The core product line for the technology is the LPDDR series, optimized for smartphones and on-device AI hardware.
SK Hynix is also actively developing PIM. At CES 2026 in the United States, it showcased several innovative products and technologies, including AiMX, which is built on a PIM architecture.

Shanghai Securities pointed out that, as AI deployment accelerates and data traffic grows, memory chips have evolved from commodity components into core value products of the AI industry; through technological breakthroughs and ecosystem collaboration, memory makers aim to build core competitiveness in AI storage.
China Post Securities states that compute-in-memory (integrated storage and computing) is a new computing architecture whose core idea is to fully merge storage and computation: computing capability is layered into the memory itself, and efficient new architectures perform two- and three-dimensional matrix calculations in place. Combined with advanced packaging and new memory devices in the post-Moore era, this approach can overcome the von Neumann bottleneck and deliver an order-of-magnitude improvement in computational efficiency. PIM embeds processing units into memory chips, giving memory a degree of computing capability; it suits data-intensive tasks and significantly improves both data-processing efficiency and energy efficiency.
CITIC Securities notes that in today's memory architectures, DRAM performance, in both bandwidth and capacity, is the biggest bottleneck for AI computing. The larger the model, the more memory capacity training requires; the more concurrent users, the more bandwidth inference demands (training is limited chiefly by capacity, inference by bandwidth), so an upgrade is urgently needed. In the AI era, the required storage upgrade, compute-in-memory, is an inevitable long-term trend, and processing near memory (PNM) is the currently practical path.
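The capacity-versus-bandwidth split can be made concrete with a back-of-envelope sketch. The model size, byte counts, and token rate below are hypothetical assumptions for illustration, not figures from the article.

```python
# Back-of-envelope sketch of why training is capacity-bound and inference
# is bandwidth-bound. All numbers are hypothetical illustrations.

PARAMS = 70e9        # parameters in a hypothetical large model
BYTES_FP16 = 2       # bytes per parameter at 16-bit precision

# Training: a common rule of thumb for Adam-style training is roughly
# 16 bytes per parameter (weights, gradients, and optimizer state).
train_capacity_gb = PARAMS * 16 / 1e9

# Inference (batch size 1): generating each token streams all weights
# from memory once, so bandwidth demand scales with tokens per second.
tok_per_sec = 20
infer_bandwidth_gbs = PARAMS * BYTES_FP16 * tok_per_sec / 1e9

print(f"training memory footprint ~ {train_capacity_gb:.0f} GB")
print(f"inference bandwidth at {tok_per_sec} tok/s ~ {infer_bandwidth_gbs:.0f} GB/s")
```

Scaling the model up inflates the first figure (capacity), while serving more tokens per second inflates the second (bandwidth), mirroring the training/inference distinction above.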
(Article source: Caixin)