From inference to training: Meta(META.US) announces upgrade of in-house chip strategy, CFO states custom chips are a "core pillar"


Bloomberg News has learned that despite Meta Platforms Inc. (META.US) recently striking significant deals with top chip manufacturers, CFO Susan Li said on Wednesday that the company is committed to expanding the use of its custom chips. She noted that because some of Meta’s workloads are highly specialized, in-house chips can better serve the company’s internal algorithms. Meta has already deployed custom chips at scale in its core ranking and recommendation systems, and its strategic focus is now on gradually extending that capability to the training of artificial intelligence models.

Although not a traditional cloud service provider, Meta is one of the world’s largest data center operators for training and running AI models. In recent weeks, the company has reached multiple large-scale agreements with industry leaders NVIDIA (NVDA.US) and competitor AMD (AMD.US) to purchase chips and equipment to support AI workloads. Meanwhile, the social media parent company continues to push forward with its internal AI processor development.

Susan Li emphasized that Meta is adapting to diverse task requirements by procuring different types of chips. “Based on current understanding and practical needs, we are systematically evaluating the most suitable chip solutions for each application scenario,” she said. “Custom chips have always been a core pillar of this strategic layout.”

This statement marks a critical stage in the advancement of Meta’s in-house chip project, MTIA. Since MTIA was publicly announced in 2023, Meta’s initial R&D focus has been on inference, aiming to improve the computational efficiency of the Facebook and Instagram recommendation systems and reduce dependence on NVIDIA’s general-purpose GPUs.

With the explosive growth of generative AI, Meta’s demand for computing power has increased exponentially. Merely focusing on inference is no longer sufficient to support its large model strategy. Susan Li’s latest remarks send a clear signal to the market: although there are doubts about the R&D barriers for top-tier AI training chips, Meta remains firmly committed to “self-developing training chips” as the ultimate goal of its infrastructure transformation.

However, the path to computing-power independence has not been smooth. Recent market reports suggest that Meta has run into technical bottlenecks in developing cutting-edge training chips, and there are rumors that some of its high-performance projects may be delayed. To balance its immediate high-performance computing gap against its long-term in-house goals, Meta is currently pursuing a flexible, diversified supply strategy.

On one hand, Meta is reported to have reached an agreement with Google to rent TPU resources to accelerate the development of its large models; on the other, the company maintains a deep procurement relationship with NVIDIA. Susan Li’s emphasis on “gradually expanding over time” suggests that Meta will take a measured transition path: first achieving breakthroughs on specific custom tasks, then ultimately taking on the compute demands of general large-model training.

From an industry perspective, Meta’s chip development reflects a logic common among hyperscalers in the AI era: full-stack in-house development. By coupling chip architecture tightly with proprietary models like Llama, Meta aims to significantly reduce hardware procurement and energy costs over the long run, while insulating itself from supply-chain fluctuations.

Although transitioning from inference in recommendation systems to training complex models presents significant architectural challenges, Meta, with its vast application scenarios and abundant cash flow, is attempting to redefine the power balance between internet giants and hardware suppliers.
