From inference to training: Meta (META.US) upgrades its in-house chip strategy; CFO calls custom chips a "core pillar"
Bloomberg News has learned that, despite recently striking major deals with leading chipmakers, Meta Platforms Inc. (META.US) remains committed to expanding the role of its custom silicon, CFO Susan Li said on Wednesday. Because some of Meta's workloads are highly specialized, she noted, in-house chips can better serve the company's internal algorithms. Meta has already deployed custom chips at scale in its core ranking and recommendation systems, and its next strategic focus is to gradually extend that capability to the training of artificial intelligence models.
Although not a traditional cloud service provider, Meta is one of the world's largest operators of data centers for training and running AI models. In recent weeks, the company has signed multiple large-scale agreements with industry leader NVIDIA (NVDA.US) and its rival AMD (AMD.US) to purchase chips and equipment for AI workloads. Meanwhile, the social media parent continues to push forward with its internal AI processor development.
Susan Li emphasized that Meta matches different chip types to its diverse workloads. "Based on our current understanding and practical needs, we are systematically evaluating the most suitable chip solution for each application scenario," she said. "Custom chips have always been a core pillar of this strategy."
This statement marks a new phase for Meta's in-house chip program, MTIA (Meta Training and Inference Accelerator). Since MTIA was publicly announced in 2023, Meta's R&D has focused mainly on inference, aiming to improve the computational efficiency of Facebook and Instagram's recommendation systems and to reduce dependence on NVIDIA's general-purpose GPUs.
With the explosive growth of generative AI, Meta's demand for computing power has surged, and focusing on inference alone can no longer support its large-model strategy. Susan Li's latest remarks send a clear signal to the market: despite doubts about the engineering barriers to building top-tier AI training chips, Meta still treats in-house training silicon as the ultimate goal of its infrastructure transformation.
The path to compute self-sufficiency has not been smooth, however. Recent market reports suggest that Meta has hit technical bottlenecks in developing cutting-edge training chips, and some of its high-performance projects are rumored to face schedule delays. To bridge the immediate high-performance computing gap while pursuing its long-term in-house goals, Meta is adopting a flexible, diversified supply strategy.
On one hand, Meta has reportedly reached an agreement with Google to rent TPU capacity to accelerate large-model development; on the other, it maintains a deep procurement relationship with NVIDIA. Susan Li's emphasis on "gradually expanding over time" suggests a measured transition: first achieving breakthroughs on specific custom workloads, then tackling the compute needed for general large-model training.
From an industry perspective, Meta's chip effort reflects a logic now common among hyperscale operators in the AI era: full-stack in-house development. By tightly coupling chip architecture with proprietary models such as Llama, Meta aims to significantly cut hardware procurement and energy costs over the long run while insulating itself from supply-chain fluctuations.
Although moving from recommendation-system inference to training complex models poses significant architectural challenges, Meta, backed by vast application scenarios and abundant cash flow, is attempting to redefine the balance of power between internet giants and hardware suppliers.