Elon Musk predicts severe shortages of AI chips and memory: Terafab to be built at Giga Texas, spending about $30 billion on Intel 14A, with the scaling phase handed to SpaceX
Elon Musk, CEO of Tesla, warned during the Q1 2026 earnings call that both logic chips and memory chips will face severe shortages, and that if Tesla does not manufacture them itself, it will inevitably run into capacity bottlenecks—precisely the reason Terafab was created. On the call, he disclosed for the first time that the research facility will sit on the Giga Texas campus, with a budget of about $30 billion USD, using the Intel 14A process and producing thousands of wafers per month. He also revealed the core logic of the integrated architecture: mask making, logic, memory, and packaging all go into the same building, while the “scaling phase”—expanding from the research facility to full-scale mass production—will be handled by SpaceX.
(Background: What is Terafab? Musk says global chip capacity meets only about 2% of AI demand; how do you build a factory “bigger than TSMC”?)
(Background update: Musk responds from afar to Wei Zhejia: Terafab will always be TSMC’s largest customer, not a competitor)
“In terms of industry growth speed, logic chips—and even more so memory chips—we expect that if we don’t manufacture chips ourselves, we will encounter bottlenecks. This is the reason Terafab was created.” On April 22, during Tesla’s Q1 2026 earnings call, Musk’s words gave the plainest possible characterization of the Terafab plan: it is not a negotiating chip to pressure suppliers, but a recognition that amid the explosive growth of AI computing power, Tesla sees no viable path relying on external supply—building in-house is the only solution.
Research facility in Giga Texas: $30 billion, Intel 14A, thousands of wafers per month
During the earnings call, Musk disclosed for the first time the specific outline of the research facility: the location has been confirmed as the Giga Texas campus. The budget is about $30 billion USD, with monthly production capacity of thousands of wafers, using the Intel 14A process. Musk explained that Terafab plans to start from the research facility and then gradually scale up to mass production. When Terafab enters the “initial scaling phase,” the Intel 14A process is expected to be in a state ready for mature mass production—the timing lines up.
It is worth noting that the “scaling phase” from the research facility to full mass production will not be handled by Tesla alone—Musk stated clearly that this part will be taken over and executed by SpaceX. The mass-production expansion and systems-integration experience SpaceX has accumulated in rocket manufacturing is viewed as key leverage for Terafab to make the leap from lab scale to industrial scale.
Nowhere else in the world: memory, logic, masks, and packaging all under one roof
The most disruptive design of the research facility is the ultimate in vertical integration: lithography mask creation, logic chips, memory chips, and packaging—all concentrated within the same building. Musk said plainly during the earnings call: “There is no other place in the world that puts these four things under one roof.”
Behind this integrated architecture is a clear engineering logic—a quick iteration loop. In traditional chip development, mask making, logic design, memory integration, and packaging testing are distributed across different suppliers and locations. Every back-and-forth between each step incurs time costs measured in weeks or months. Terafab compresses the entire chain into a single building, aiming to allow Tesla to iterate AI chip designs at several times the industry’s speed, supporting the rapid evolution of multiple product lines such as Optimus robots, FSD autonomous driving, and xAI computing power.
Intel joins, Tesla recruits in Taiwan, Musk states a position on TSMC: Terafab has begun mobilizing
The new material from the Q1 earnings call is the latest checkpoint in a recent series of Terafab moves. Earlier, Intel officially announced that it is joining the Terafab program, working with SpaceX and Tesla to build terawatt-class AI computing infrastructure. On the talent front, Tesla has been actively recruiting in Taiwan, offering triple the salary to attract engineers familiar with the 2nm process and advanced CoWoS packaging, directly targeting TSMC’s core talent pool.
In response to external doubts about a competitive relationship with TSMC, Musk made his stance clear: “Terafab will always be TSMC’s largest customer, not a competitor.” This statement echoes Terafab’s positioning logic: the goal is to fill the absolute gap in AI computing power, not to replace the existing wafer-foundry ecosystem. He had earlier estimated that global annual AI computing capacity is about 20 GW, enough to satisfy only about 2% of his projected long-term demand—a gap on a scale the current supply chain is in no position to close.
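Musk’s two figures imply a concrete number for the total demand he has in mind. A back-of-envelope check (using only his own stated estimates of ~20 GW capacity and ~2% coverage, which are claims rather than measurements) shows why the project is described as “terawatt-class”:

```python
# Back-of-envelope check of Musk's figures:
# ~20 GW of global annual AI compute capacity covers only ~2% of
# his estimated long-term demand. Both inputs are his claims.
current_capacity_gw = 20      # Musk's estimate of global AI compute build-out
coverage_fraction = 0.02      # share of long-term demand this satisfies, per Musk

implied_demand_gw = current_capacity_gw / coverage_fraction
shortfall_gw = implied_demand_gw - current_capacity_gw

print(f"Implied long-term demand: {implied_demand_gw:.0f} GW "
      f"(~{implied_demand_gw / 1000:.0f} TW)")
print(f"Implied shortfall: {shortfall_gw:.0f} GW")
```

The implied demand works out to about 1,000 GW, i.e. roughly 1 terawatt—consistent with the “terawatt-class AI computing infrastructure” framing of the Intel partnership.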
The infrastructure battlefield is being rewritten
Judging from the numbers on the earnings call, Terafab’s $30 billion USD research-facility budget is only the starting point. External estimates suggest that truly closing the computing-power shortfall Musk describes could cost as much as $5 trillion USD in total construction; Tesla’s official total budget range is $250 to $400 billion USD. The first-phase project is planned to begin production in the second half of 2027, move into mass production in 2028, and be fully completed in 2030.
When one of the most high-profile tech entrepreneurs in the world decides to personally step in and build a wafer fab, the signal conveyed by this move itself goes beyond any earnings figure: the AI computing race is no longer just a contest of software, models, or cloud computing. Whoever controls the physical infrastructure for chip manufacturing will determine the rules of the game in the next round.