An Oracle (ORCL.US) endorsement has become a key bargaining chip: can AI chip unicorn Cerebras leverage the giant's halo to relaunch its IPO?
Bloomberg News has learned that as AI chip manufacturer Cerebras seeks an eventual IPO, the company appears to have secured a major cloud computing client: Oracle (ORCL.US).
During the analyst call after Oracle announced its quarterly earnings on Tuesday, one of the company’s two CEOs, Clay Magouyrk, stated that its infrastructure includes Cerebras chips, as well as GPUs from market leader Nvidia and competitor AMD.
Magouyrk said, “Our infrastructure is flexible and versatile, capable of supporting a wide range of workloads from the smallest to the largest. We continuously deliver the latest accelerators, from Nvidia and AMD’s newest products to emerging designs from Cerebras and Positron (another AI hardware startup).”
Cerebras offers cloud services built on its wafer-scale WSE-3 chips. The company first filed for an IPO in 2024 but withdrew the filing last October. A few days later, it announced a $1.1 billion funding round that valued it at $8.1 billion; CEO Andrew Feldman said Cerebras still plans to go public.
One of the most notable concerns for potential investors in Cerebras' original prospectus was its reliance on a single Middle Eastern customer: G42, backed by Microsoft and headquartered in Abu Dhabi, United Arab Emirates, contributed 87% of Cerebras' revenue in the first half of 2024.
Expansion of Partnerships
Having a name like Oracle to bolster its customer list could be a significant boost for Cerebras, especially after a major announcement earlier this year. In January, Cerebras announced it had secured a $10 billion commitment from OpenAI and other companies, with OpenAI relying on cloud services from Oracle and others. The following month, OpenAI announced a collaboration with Cerebras to launch a research preview of Codex-Spark for ChatGPT Pro customers, a fast-response AI model designed for software development.
Oracle’s earnings call took place after the company reported better-than-expected results. It raised its guidance for fiscal 2027 and stated that its remaining performance obligations (RPO) increased more than threefold from a year earlier, reaching $553 billion.
After mentioning Cerebras and other chip manufacturers, Magouyrk said, “Overall, we are confident that our investments in data centers, computing power, and customer relationships will only become more valuable over time.”
Market Competition and Reasoning Technologies
Although Cerebras is a newcomer competing against the world's most valuable company, the market for AI model development has a nearly insatiable appetite for computing power, as developers continually scale up to respond quickly to user needs.
Nvidia is leveraging its massive cash reserves to expand into new product areas. In December last year, the company acquired core assets of AI chip startup Groq for about $20 billion.
Nvidia plans to unveil a new architecture inspired by Groq’s technology at the GTC developer conference in California next week.
Magouyrk mentioned during the call that GTC will feature some “key releases.” He also said that responding to requests requires not only strategic data center deployment but also innovative technology.
“It depends on the type of hardware deployed, which is why you see so much innovation around these AI accelerators,” he said. “If you look at what Groq, Cerebras, or Positron are doing, all these different types of customers are asking: how can we not only reduce inference costs but also significantly lower inference latency?”