Cerebras launches IPO roadshow, targeting $115-$125 per share
Cerebras Systems will begin pitching its stock to investors on Monday, with plans to sell shares at between $115 and $125 each, according to a person familiar with the plans who spoke to Reuters.
The artificial intelligence chip maker is trying to go public for the second time. The company pulled its first attempt in October last year.
Cerebras reported stronger financial results for the year ended December 31. The company brought in $510 million in revenue, up from $290.3 million the year before, and earned $1.38 per share, compared with a loss of $9.90 per share the previous year.
Morgan Stanley, Citigroup, Barclays and UBS are handling the stock sale.
The industry is shifting toward inference
Cerebras’ strategy is not random. The AI industry is shifting from training new AI models to running them in production. This shift is a golden chance for smaller companies competing with Nvidia’s (NASDAQ: NVDA) dominance. As reported by Cryptopolitan, even OpenAI isn’t convinced by Nvidia’s inference hardware.
This is because running AI models, known as inference, requires different capabilities than training them. This creates openings for specialized chip makers to find their spot in the market. Processing large batches of information needs a different balance of computing power, memory and data transfer speeds than running an AI chatbot or coding assistant.
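The difference comes down to arithmetic intensity: how many operations a chip performs per byte it pulls from memory. A rough back-of-envelope sketch (with illustrative numbers, not figures from the article) shows why batched prompt processing is compute-bound while token-by-token generation is memory-bandwidth-bound:

```python
# Back-of-envelope: arithmetic intensity (FLOPs per byte of weights read)
# for one d x d weight matrix processing n tokens at once.
# All numbers here are illustrative, not from the article.

def arithmetic_intensity(n_tokens: int, d: int, bytes_per_weight: int = 2) -> float:
    flops = 2 * n_tokens * d * d            # one matmul costs ~2*n*d^2 FLOPs
    bytes_moved = d * d * bytes_per_weight  # weights streamed from memory once
    return flops / bytes_moved

# Prefill: a whole 2048-token prompt goes through in one batch.
prefill = arithmetic_intensity(n_tokens=2048, d=4096)  # -> 2048.0 FLOPs/byte
# Decode: tokens are generated one at a time.
decode = arithmetic_intensity(n_tokens=1, d=4096)      # -> 1.0 FLOPs/byte

print(f"prefill intensity: {prefill:.0f} FLOPs/byte (compute-bound)")
print(f"decode intensity:  {decode:.0f} FLOPs/byte (bandwidth-bound)")
```

At one FLOP per byte, decode hardware sits idle waiting on memory, which is why fast-but-small SRAM designs shine there, while prefill rewards raw compute.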
This variety in requirements has made the inference market more diverse. Some tasks work better on traditional graphics chips, while others need more advanced equipment.
Nvidia’s purchase of Groq last December for $20 billion shows how this is playing out. Groq built chips packed with fast SRAM memory that could process AI responses faster than standard graphics chips. But the company struggled to scale up because its chips had limited computing power and were built on older technology.
Nvidia solved this problem by splitting the work. It uses its regular graphics chips for the heavy computing part of generating AI responses, called prefill, while using Groq’s chips for the faster decode step that requires less computing but needs quick data access.
Other big companies are doing something similar. Amazon Web Services announced its own split system shortly after a major tech conference. It combines its custom Trainium chips for prefill work with Cerebras’ wafer-sized chips for decode operations.
Intel joined in too, revealing plans to pair graphics chips with processors from another startup called SambaNova. The graphics chips will handle prefill while SambaNova’s chips tackle decode.
Most of the smaller chip companies have found success with decode work. SRAM doesn’t hold much data, but it’s extremely fast. With enough chips, or one very large chip like the wafer-scale processors Cerebras makes, these systems excel at decode tasks. But companies aren’t stopping there.
New technologies challenge split-chip approach
Lumai, another startup, announced this week it built a chip that uses light instead of electricity for the math operations at the core of AI work. This approach uses much less power than traditional chips.
The company expects its upcoming Iris Tetra systems to deliver an exaOPS of AI performance while using just 10 kilowatts of power by 2029.
The chips mix light-based and electrical components, but light handles most of the work during inference. Lumai plans to use these chips first as standalone replacements for graphics chips in batch processing jobs. Later, the company wants to use them for prefill work too.
Not everyone thinks splitting the work between different chips makes sense. Tenstorrent rolled out its Galaxy Blackhole systems this week, and CEO Jim Keller criticized the approach.
“Every company in the industry is pairing up to build the accelerator accelerator accelerator. CPUs run code. GPUs accelerate CPUs. TPUs accelerate GPUs. LPUs accelerate TPUs. And so on. This leads to complex solutions which are unlikely to be compatible with changes in AI models and uses. At Tenstorrent, we thought something more general and simpler would work,” Keller said.