Astera Labs publicly releases the “Scorpio X Series”… reducing AI data center connection bottlenecks
One of the biggest challenges for AI data centers is connectivity: when data transfer between chips lags, expensive AI accelerators sit idle waiting. To address this, Astera Labs has launched a new series of switch products designed to reduce that latency.
Network chip company Astera Labs announced the release of its latest “Scorpio X Series” intelligent fabric switches. The company claims the product is the industry’s largest open “memory-semantic” fabric switch, aimed at helping hyperscale data center operators scale out computing resources while keeping latency down.
Alongside this release, the existing “Scorpio P Series” PCIe fabric switch line has also been expanded. The new P Series offers configurations from 32 to 320 lanes, giving data center designers more options for moving large volumes of data efficiently between AI processor clusters.
In the era of hyperscale AI, the bottleneck is not the GPU but data transfer
Astera Labs argues that as AI systems scale, the core constraint is no longer chip performance alone but the efficiency of inter-chip connections. Recent large language models have grown to trillions of parameters, far too large to fit in a single server rack; ultimately, hundreds or thousands of GPUs must be combined into one huge cluster.
The problem is that in such a cluster, data constantly shuttles back and forth between chips, and congestion worsens. While required data is still in transit from elsewhere in the cluster, the GPU sits idle. Given that an AI cluster can cost thousands of dollars per hour to operate, this waiting time seriously erodes data center efficiency and profitability.
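To give a sense of scale, here is a back-of-the-envelope calculation of what idle GPUs cost a cluster. All figures (cluster size, hourly rate, idle fraction) are illustrative assumptions, not Astera Labs data.

```python
# Back-of-the-envelope sketch of what GPU idle time costs a cluster.
# All figures below are illustrative assumptions, not Astera Labs data.

def idle_cost_per_day(gpus: int, hourly_cost_per_gpu: float, idle_fraction: float) -> float:
    """Dollars per day effectively wasted while GPUs wait on data transfers."""
    return gpus * hourly_cost_per_gpu * idle_fraction * 24

# Example: a 1,000-GPU cluster at an assumed $3/GPU-hour, idle 30% of the
# time while waiting on inter-chip communication.
wasted = idle_cost_per_day(gpus=1000, hourly_cost_per_gpu=3.0, idle_fraction=0.30)
print(f"${wasted:,.0f} per day")  # -> $21,600 per day
```

Even a modest reduction in the idle fraction translates directly into recovered compute budget, which is why interconnect latency attracts so much attention.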
Accessing the fabric like local memory… lower latency, higher processing efficiency
The flagship product, the “Scorpio X Series” 320-lane intelligent fabric switch, redesigns how switches and chips interact. Built on a “memory-semantic” architecture, it lets GPUs and other AI accelerators reach resources distributed across the fabric through simple load/store operations. In short, remote resources become accessible as if they were local memory.
As a result, the entire fabric operates like a unified memory pool, which should cut the overhead of traditional packet translation and thereby lower latency. For AI data centers, that means potentially handling more work with the same computing resources.
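To make the contrast concrete, here is a toy latency model of the two access styles: a packet path that serializes a request, forwards it, and deserializes it at the target (and again for the reply), versus a memory-semantic path where the per-message translation steps drop away. The step costs are made-up constants for illustration; this is not Astera Labs’ protocol.

```python
# Toy latency model contrasting packet-based access with memory-semantic
# load/store access. Step costs are made-up illustrative constants.

SERIALIZE, HOP, DESERIALIZE, LOAD = 1.0, 0.5, 1.0, 0.5  # microseconds (assumed)

def packet_access(hops: int) -> float:
    # Request: serialize -> traverse hops -> deserialize at the target;
    # the reply pays the same costs on the way back.
    one_way = SERIALIZE + hops * HOP + DESERIALIZE
    return 2 * one_way

def memory_semantic_access(hops: int) -> float:
    # A load/store crosses the fabric without per-message packet
    # translation; only the hop traversal and the load itself remain.
    return 2 * hops * HOP + LOAD

for hops in (1, 2, 3):
    print(hops, packet_access(hops), memory_semantic_access(hops))
```

The absolute numbers are arbitrary, but the structural point holds: the packet path pays a fixed translation tax on every round trip, which the memory-semantic path avoids.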
Additionally, the product incorporates Astera Labs’ proprietary “Hypercast” and “In-Network Compute” technologies, which let the switch fabric not only move data but also perform some processing directly. Specifically, it can handle aggregation operations, such as merging or distributing data, at the network level; the company says these operations can run twice as fast as before. That could feed directly into the token economics of AI workloads, i.e., processing efficiency per unit cost.
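The idea behind in-network compute can be sketched with a toy reduction: instead of every worker sending its partial result to a root GPU that sums them all, the switch aggregates values as they pass through and forwards far fewer messages. This is a schematic illustration of the general technique, not Astera Labs’ actual implementation.

```python
# Toy illustration of in-network compute: the switch sums partial results
# in flight, so far fewer messages reach the root GPU. Schematic sketch
# only; not Astera Labs' actual Hypercast / In-Network Compute design.

def host_side_reduce(partials: list[float]) -> tuple[float, int]:
    """Root receives every partial and sums them: N messages arrive at root."""
    messages_at_root = len(partials)
    return sum(partials), messages_at_root

def in_network_reduce(partials: list[float], ports_per_switch: int = 4) -> tuple[float, int]:
    """Switch aggregates each group of ports, forwarding one sum per group."""
    group_sums = [sum(partials[i:i + ports_per_switch])
                  for i in range(0, len(partials), ports_per_switch)]
    messages_at_root = len(group_sums)  # far fewer messages reach the root
    return sum(group_sums), messages_at_root

partials = [float(i) for i in range(16)]
print(host_side_reduce(partials))   # (120.0, 16)
print(in_network_reduce(partials))  # (120.0, 4)
```

Both paths produce the same sum, but the in-network version cuts the traffic converging on the root, which is where the claimed speedup for aggregation-heavy workloads would come from.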
High-density 320-lane design replaces multiple traditional switches… and supports open standards
Another major advantage of the Scorpio X Series is its high-radix design: 320 PCIe 6 lanes on a single chip, enough to replace several traditional data center switches. This simplifies the network architecture, shortens the physical distance data must travel, and reduces overall system complexity.
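Why radix matters can be seen with a simple calculation: the number of switch tiers needed to connect N endpoints grows roughly with the logarithm base the switch radix. This is an illustrative sketch only; real fabric topologies (fat trees, dragonflies, and so on) are more involved.

```python
import math

# Illustrative sketch of how switch radix affects fabric depth.
# Real data center topologies are considerably more involved.

def tiers_needed(endpoints: int, radix: int) -> int:
    """Minimum switch tiers so that radix**tiers >= endpoints."""
    return max(1, math.ceil(math.log(endpoints, radix)))

# A 320-port switch reaches 320 endpoints in a single tier;
# a 64-port switch needs two tiers for the same job.
print(tiers_needed(320, 320))  # 1
print(tiers_needed(320, 64))   # 2
```

Every tier removed means one less hop (and one less serialization point) on the data path, which is the latency argument for high-radix single-chip switches.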
The expanded Scorpio P Series complements the X Series; Astera Labs states the two lines are designed to work together in building front-end networks and AI computing systems. The company emphasizes support not only for open standards such as UALink but also for platform-specific protocols such as NVIDIA’s NVLink Fusion, enabling a fabric that can be applied in practice across a wide range of AI processors.
CEO Jitendra Mohan said, “Leading today’s most demanding AI applications requires a connectivity infrastructure that matches the performance of their accelerators.” In other words, removing chip-to-chip connection bottlenecks is essential for the AI industry to keep advancing.
Astera Labs’ release signals that the focus of AI competition is shifting rapidly from raw semiconductor performance to the connection architecture of entire systems. Going forward, how efficiently ever-faster chips are integrated may become a core competitive advantage for AI data centers.
TP AI Notice: This article was summarized using the TokenPost.ai language model. Key content may be omitted or factually inaccurate.