Google releases its eighth-generation TPU, splitting training and inference into two independent chips for the first time.

ME News Report, April 22 (UTC+8): according to BlockBeats monitoring, Google CEO Sundar Pichai announced the eighth-generation TPU at Cloud Next 2026, the first time training and inference have been split across two separate chips.

TPU 8t is designed for training. A single super node connects up to 9,600 TPUs, delivering 121 ExaFlops of compute and 2 PB of shared high-bandwidth memory, with three times the processing performance of the previous-generation Ironwood and up to twice the energy efficiency. Inter-chip interconnect bandwidth has doubled, and combined with the newly launched Virgo network topology, up to one million chips can be joined into a single logical cluster with near-linear scaling. Google says the goal is to cut the development cycle of frontier models from several months to a few weeks.

TPU 8i is designed for inference. A single pod connects 1,152 TPUs, equipped with 288 GB of high-bandwidth memory and 384 MB of on-chip SRAM, the latter three times that of Ironwood, to keep active model data on the chip as much as possible. The new Boardfly network topology significantly reduces latency, and Google claims it can serve nearly twice as many customers at the same cost, with the aim of supporting millions of agents running simultaneously.

Both chips are hosted by Google's in-house Arm-based Axion CPUs and paired with fourth-generation liquid cooling. They are planned for official launch later in 2026 on the Google Cloud AI Hypercomputer platform, alongside NVIDIA GPU instances. (Source: BlockBeats)
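For context, a quick back-of-envelope calculation shows what the reported super-node figures would imply per chip. The sketch below assumes the 121 ExaFlops and 2 PB numbers are aggregates across the 9,600-chip TPU 8t super node; the report does not give per-chip figures or state the numeric precision behind the FLOPs claim.

```python
# Back-of-envelope per-chip figures implied by the reported TPU 8t
# super-node specs. Assumption: 121 ExaFlops and 2 PB are node-wide
# aggregates (not stated per chip in the report).

CHIPS_PER_SUPERNODE = 9_600
NODE_EXAFLOPS = 121       # reported aggregate compute
NODE_HBM_PB = 2           # reported shared high-bandwidth memory

# 1 ExaFlop = 1,000 PFLOPs; 1 PB = 1,000,000 GB (decimal units)
flops_per_chip_pflops = NODE_EXAFLOPS * 1_000 / CHIPS_PER_SUPERNODE
hbm_per_chip_gb = NODE_HBM_PB * 1_000_000 / CHIPS_PER_SUPERNODE

print(f"Implied compute per chip: ~{flops_per_chip_pflops:.1f} PFLOPs")
print(f"Implied HBM per chip:     ~{hbm_per_chip_gb:.0f} GB")
# Implied compute per chip: ~12.6 PFLOPs
# Implied HBM per chip:     ~208 GB
```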
