I recently noticed that Nvidia has made a very strategic move in the inference market. Last December, it acquired Groq’s inference chip division for $20 billion; Groq’s founder Jonathan Ross and his team joined Nvidia, while Groq continues as an independent company with its remaining business.

What’s interesting here is that Jensen Huang has just explained the real reason behind this decision. The motivation isn’t just to acquire technology, but to re-segment the inference market entirely. Previously, all effort went into a single dimension: raising throughput. But the situation has changed radically.

Now, different users are ready to pay entirely different prices based on response speed. If I’m a software engineer and want tokens generated faster so I can work more efficiently, I’m willing to pay a premium for that. This market didn’t exist a few years ago, but it has now emerged strongly.

At the GTC conference in March, Nvidia released its first post-acquisition chip: the Groq 3 LPU, built on Samsung’s 4-nanometer process. The performance is genuinely impressive: on inference throughput per megawatt for trillion-parameter models, it reaches 35 times that of Blackwell NVL72.

What Nvidia is doing is adding a completely new segment to the market map: low latency at a high price. Groq’s LPU architecture is known for its low and predictable latency, which perfectly complements Nvidia’s existing high-throughput line. Even though throughput may be lower, the higher price per token easily makes up for it. The Groq acquisition has truly filled the missing gap in Nvidia’s inference lineup.