Edge AI is entering a period of rapid growth, with JiangboLong's "integrated storage" leading innovation in AI PC, AI phone, and wearable storage.

(Source: Elecfans)

From DeepSeek's AI reasoning large models to OpenClaw, the AI agent that took off this year, the on-device AI market has been fully ignited. If training AI large models drove the breakthrough of HBM high-bandwidth memory, then on-device AI inference will inevitably redefine on-device AI storage as well. Here, the "integrated storage" from domestic storage leader JiangboLong clearly stands out.

At the CFM|MemoryS2026 Flash Memory Summit held recently, JiangboLong’s chairman and general manager, Cai Huabo, delivered a keynote speech, focusing on integrated storage and exploring on-device AI. During the summit, JiangboLong’s two executives also accepted media interviews, providing a detailed explanation of JiangboLong’s on-device AI storage capabilities. Meanwhile, the company’s release of a series of storage new products and technologies around on-device AI has drawn significant attention from the outside world.

In his speech, Cai Huabo pointed out that as AI inference applications accelerate, the core difference in storage services between cloud and on-device AI is becoming clearly defined through AI tiered storage. Among them, cloud AI focuses on specialized storage services for GPUs, while on-device AI is built around three core needs: high-performance capacity, SiP system-level integrated packaging, and customized services. Its requirements for storage are fundamentally different from those of the traditional standardized storage ecosystem.

Just as consumer-grade GPUs and AI-dedicated GPUs belong to entirely different systems—where the former relies on a general-purpose chip ecosystem and the latter is built for complete AI system products—on-device AI also requires deeply integrated customized storage solutions rather than generic standardized storage products. Based on this precise positioning, JiangboLong focuses on an on-device AI integrated storage solution, precisely matching diverse scenarios such as AI phones, AI assisted driving, AI wearables, AI PCs, and humanoid robots, anchoring clear scenario-driven storage innovation for on-device AI, forming complementary advantages with cloud AI storage.

SPU + iSA enable deep storage optimization for AI PCs: down-tiering warm/cold data to SSD significantly reduces DRAM capacity needs

Yan Shuyin, JiangboLong’s vice president and general manager of the enterprise storage business unit, said that as AI technology develops, the traditional data-tiering model is changing. Data used to be divided mostly into hot and cold tiers; now warm-data scenarios are increasingly common. In response, JiangboLong introduced the SPU (Storage Processing Unit), the iSA (Intelligence Storage Agent), and HLC advanced cache technology to deliver intelligent scheduling, and works jointly with OEM/host manufacturers on software-hardware co-optimization.

Different from conventional SSD controller chips, the SPU is a dedicated processing unit built for an intelligent storage architecture. The chip is manufactured on an advanced 5nm process and supports up to 128TB per drive, whereas mainstream cSSDs currently top out at 8TB. Since large-capacity eSSD solutions are costly, the SPU effectively balances capacity against cost: it can efficiently replace HDDs, offers customers new options for eSSD solutions, and is expected to significantly reduce total cost of ownership.

The SPU's core comprises two key capabilities: in-storage lossless compression and HLC (High Level Cache) advanced caching. In-storage lossless compression achieves an average 2:1 ratio; in real-world testing across text, code, databases, and other data types, it greatly saves SSD capacity and cost. HLC can additionally down-tier warm/cold data to SSD, cutting DRAM capacity requirements by nearly 40%.
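To make the capacity arithmetic concrete, here is a back-of-envelope sketch. The 2:1 ratio is the average cited above; the helper function and drive sizes are illustrative assumptions, not the SPU's actual interface or a guaranteed ratio for all workloads:

```python
# Back-of-envelope model of how transparent in-storage lossless
# compression changes the effective capacity a host sees.
# Assumption: compression happens inside the drive, so the host can
# store (raw capacity x average ratio) bytes of compressible data.

def effective_capacity_tb(raw_tb: float, compression_ratio: float) -> float:
    """Logical capacity usable by the host when the drive transparently
    compresses data at the given average ratio (2.0 means 2:1)."""
    return raw_tb * compression_ratio

# A 128TB drive with an average 2:1 ratio holds compressible data
# (text, code, databases) as if it were a 256TB drive.
print(effective_capacity_tb(128, 2.0))  # → 256.0
```

The same arithmetic explains the cost angle: if compressible data dominates, a 2:1 ratio halves the raw NAND needed for a given logical capacity.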

At the exhibition, the author saw live test data from an agent host jointly tuned by JiangboLong and AMD, based on the Ryzen AI Max+ 395 processor. The device achieves local deployment of a 397B-parameter model and, in a 256K ultra-long-context (122B) scenario, runs smoothly with 128GB of memory, reducing DRAM usage by nearly 40%. This offers a practical path to efficient local deployment and large-scale application of ultra-large models.

Yan Shuyin noted that the warm data of a KV cache is typically kept on local SSD, and balancing capacity against access speed is an important direction for on-device AI storage. On AI PCs, HLC technology relies on the SPU to implement a tiered design: the performance tier builds an AI-dedicated high-speed cache area for large-model expert/key-value offloading, while the storage tier holds the operating system and generic data. Through high-priority read/write and low-priority I/O scheduling, it optimizes the AI experience while reducing the terminal's DRAM capacity demand and cost. In short, HLC intelligently schedules heterogeneous SSD media, assigning SLC or QLC to different scenarios for a better balance of performance and cost.
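The tiering-plus-priority idea can be sketched in a few lines. The tier names, priority values, and request shape below are illustrative assumptions; the real HLC scheduler inside the SPU is not publicly documented:

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

# Sketch of priority-based I/O scheduling between an AI "performance
# tier" (SLC-like cache area) and a generic "storage tier" (QLC-like).
# AI cache traffic gets priority 0; generic I/O gets priority 1.

@dataclass(order=True)
class IORequest:
    priority: int                       # 0 = AI cache traffic, 1 = generic I/O
    seq: int                            # FIFO tie-breaker within a priority
    op: str = field(compare=False)
    target: str = field(compare=False)

class HLCScheduler:
    def __init__(self):
        self._queue = []
        self._seq = count()

    def submit(self, op: str, ai_traffic: bool):
        # Expert-weight / key-value traffic lands on the fast tier with
        # high priority; everything else goes to the capacity tier.
        tier = "performance(SLC)" if ai_traffic else "storage(QLC)"
        prio = 0 if ai_traffic else 1
        heapq.heappush(self._queue, IORequest(prio, next(self._seq), op, tier))

    def dispatch(self) -> IORequest:
        return heapq.heappop(self._queue)

sched = HLCScheduler()
sched.submit("read os page", ai_traffic=False)
sched.submit("read expert weights", ai_traffic=True)
first = sched.dispatch()
print(first.op, "->", first.target)  # AI request is dispatched first
```

The point of the sketch is the ordering guarantee: AI cache traffic always preempts generic I/O, while the `seq` counter keeps same-priority requests in submission order.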

Beyond the SPU storage processing unit at the hardware layer, JiangboLong builds the iSA storage intelligent agent at the software layer; together they close the technical loop through software-hardware coordination. As the SPU's "brain," the iSA is an intelligent scheduling engine for on-device AI inference. To address problems such as the massive parameter counts of MoE large models, rapidly expanding KV caches, and I/O latency that disrupts inference smoothness, it solves the storage-scheduling challenges of on-device AI inference through MoE expert offloading, intelligent KV-cache management, and intelligent prefetching algorithms. In the joint tuning with AMD agent hosts, installing the iSA on the host lets it coordinate with the SSD and improve the system's overall on-device AI performance.
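The offloading pattern described above can be illustrated with a small cache model: keep the hottest MoE experts or KV-cache blocks within a DRAM budget and demote the least-recently-used ones to SSD, fetching them back on access. The class, block names, and LRU policy are assumptions for illustration, not the iSA's documented behavior:

```python
from collections import OrderedDict

# Sketch of expert/KV-cache offloading: a fixed number of DRAM slots,
# with least-recently-used blocks demoted to SSD and fetched back
# (a stand-in for "intelligent prefetching") when accessed again.

class OffloadCache:
    def __init__(self, dram_slots: int):
        self.slots = dram_slots
        self.dram = OrderedDict()   # block id -> data, most recent last
        self.ssd = {}               # offloaded (cold) blocks

    def access(self, block_id: str, data=None):
        if block_id in self.dram:
            self.dram.move_to_end(block_id)          # refresh recency
        else:
            data = self.ssd.pop(block_id, data)      # fetch back, or admit new
            self.dram[block_id] = data
            if len(self.dram) > self.slots:
                victim, vdata = self.dram.popitem(last=False)
                self.ssd[victim] = vdata             # offload LRU block to SSD
        return self.dram[block_id]

cache = OffloadCache(dram_slots=2)
cache.access("expert-0", b"w0")
cache.access("expert-1", b"w1")
cache.access("expert-2", b"w2")   # DRAM full: expert-0 is offloaded to SSD
print(sorted(cache.dram), sorted(cache.ssd))  # → ['expert-1', 'expert-2'] ['expert-0']
```

This is why the SSD tier's capacity and latency matter so much for on-device inference: every eviction and fetch-back in the sketch corresponds to real SSD traffic on the device.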

AI phone storage: HLC technology coordinated with UFS, while extreme-size packaging enters multiple mainstream AI glasses

In the embedded domain, HLC advanced cache technology integrates deeply with UFS for embedded on-device scenarios. According to test data on the Unisoc (Spreadtrum) chip platform, using a solution jointly developed by JiangboLong and Unisoc: with 4GB DDR paired with HLC technology, the startup response time for 20 apps is only 851ms, approaching the level of a normal 6GB/8GB DDR configuration. In addition, JiangboLong's UFS 2.2 product, built on the 14nm WM7200 controller, supports sequential read/write up to 1070MB/s and 1000MB/s and random read/write up to 240K and 210K IOPS respectively, exceeding mainstream industry levels. While preserving a smooth experience and device lifetime, it effectively reduces terminal DRAM capacity requirements and optimizes BOM cost.

Huang Qiang, JiangboLong’s vice president and general manager of the embedded storage business unit, said that from the perspective of embedded products, future on-device AI storage demand will concentrate in three directions: high performance and large capacity, SiP system-level integration, and customized services.

JiangboLong’s wearable product line follows this trend precisely, with continuous innovation and rapid progress in adoption. As introduced, JiangboLong is one of the few storage vendors in China with full-process system-in-package design capability, able to integrate multiple chip types (SoC, eMMC/UFS, LPDDR, WiFi, Bluetooth, NFC) within a single package. The 5.8mm × 6.3mm eMMC unveiled this time is the smallest eMMC product currently publicly available in the industry. Through an extreme packaging design, JiangboLong tightly integrates flash memory chips with its self-developed controller; compared with its previous-generation 7.2mm × 7.2mm ultra-small eMMC, the new part reduces occupied motherboard area by about 30%, freeing precious space in wearable structures such as smart glasses and smartwatches.

Another latest product, ePOP5x, stacks high-performance eMMC and LPDDR5x DRAM vertically inside a single package. The transmission rate of LPDDR5x is 8533Mbps, with a thickness of only 0.5mm. It represents another breakthrough in current embedded storage packaging process technology. Thanks to its ultra-thin characteristic, the product can perfectly fit into the end section of the temples of ultra-light AI glasses, while enabling high-speed data access and high-frequency memory operation. This provides key hardware support for delivering a “no-feel wearing” experience for terminal products.

In terms of power consumption, both new products use JiangboLong's next-generation self-developed eMMC controller chips, HuiYiWeiXin. Through deep optimization of read/write strategies and flash space management, the company states that static power consumption is improved by about 250% versus the previous generation, i.e. the previous generation drew roughly 2.5× the static power. This eases the "charge once a day" battery anxiety of smart wearables and lays a solid hardware foundation for always-on scenarios such as round-the-clock AI voice wake-up and health monitoring.

At the packaging and testing level, Yuan Cheng Technology under JiangboLong, as the company’s high-end packaging and testing manufacturing base, provides full-process assurance for wearable storage products—from wafer-level packaging to system-level testing. Yuan Cheng Technology has an ESAT dedicated line customized for wearable chip characteristics, capable of completing ultra-thin, high-precision stacked packaging and heterogeneous stacking manufacturing. It enables stringent reliability testing across a wide temperature range from -40°C to 125°C. Every storage chip shipped from the factory undergoes complete verification of electrical performance, aging, thermal cycling, drop tests, and more, ensuring long-lasting stable operation even in harsh usage environments such as outdoor glasses and sports watches.

Huang Qiang said JiangboLong has been developing embedded storage for 15 years. Amid today's AI wave, embedded storage is shifting from standalone standardized parts to system integration. Through "self-developed controllers + firmware algorithms + advanced packaging and testing," JiangboLong builds an end-to-end closed loop from chip definition to finished-product delivery. This end-to-end customized AI storage service model is the core competitiveness that distinguishes JiangboLong from traditional storage vendors: it lets the company collaborate deeply with customers from the product-definition stage, performing end-to-end customized design from wafer to finished product to meet the compute, power, and structural constraints of specific wearable platforms, turning "generic storage" into "scenario-defined storage."

As introduced, JiangboLong’s wearable storage has already entered the supply chains of multiple mainstream AI glasses manufacturers. It is expected that, with the rapid growth of this breakout product category, it will become a potential growth driver for the company.

Summary

Whether it is AI phones, AI PCs, or the "loud lobster box," on-device AI may feel close at hand, yet it has not truly arrived. Storage system-level solutions that can support local execution of AI large models and intelligent agents are exactly what the industry is working toward. JiangboLong's integrated storage, backed by innovative technology and strong performance, will continue to empower the on-device AI wave.

