Li Auto CTO Xie Yan: To become a leading company, AI chips are a must
It is three days before the official release of the all-new Ideal L9.
This is not just a routine iteration of a flagship model; it is also Li Auto's first vehicle equipped with the company's self-developed chip, the Mach 100.
On May 12th, Li Xiang, CEO of Li Auto, posted on social media, directly responding to external doubts about automakers developing their own chips.
He stated clearly that self-developed chips are not trend-chasing cash burning; they are aimed at enabling AI to truly run in the physical world and at solving problems that current suppliers' technology cannot overcome.
“Why can Apple achieve the best experience? Not because any single technology is the strongest, but because self-developed chips, operating systems, hardware, and cloud services give it full-chain autonomous design and responsibility, with no weak points,” Li Xiang said. “In the AI era, the competition is about systemic capability. Li Auto is developing self-developed chips, operating systems, and large models in parallel, pursuing comprehensive joint design for the AI age, and thereby becoming ‘the champion of user experience.’”
In fact, as early as the end of March, Li Xiang revealed that a paper on the self-developed Mach 100 chip had been officially accepted by the Industry Track of the 2026 International Symposium on Computer Architecture (ISCA).
That made Li Auto the first automotive company ever selected in the history of this top conference's industry division.
The Mach 100 adopts Li Auto's original native dataflow architecture, is manufactured on a 5nm process, and delivers peak compute of 1,280 TOPS per chip.
Before the official release of the all-new Ideal L9, CTO Xie Yan had a dialogue with media including Jiemian News.
Consistent with Li Xiang’s view, Xie Yan used the evolution of consumer electronics to reveal the underlying logic of automakers developing their own chips:
“Apple’s chips give its operating system differentiated capabilities. This vertical integration is a value that general solutions cannot provide.”
He pointed out that future automotive competition will trend toward differentiation, and leading automakers will inevitably pursue bottom-layer self-research. “If AI is the core goal, making AI chips is a must. To become a top-tier company, you definitely have to do this.”
The executives' “must do” is also supported by macro industry trends.
A McKinsey report states that driven by AI and edge computing, global semiconductor industry revenue will reach $1.6 trillion by 2030.
In this explosive growth of computing power, cars are accelerating to become the most important edge AI devices, forcing automakers to move toward underlying silicon integration.
The other side of heavy investment in underlying technology is increasingly fierce commercial competition in the automotive industry.
In 2025, due to intense market competition and product cycle shifts, Li Auto’s annual revenue was 112.3 billion yuan, with a decline in net profit.
Meanwhile, R&D investment hit a record 11.3 billion yuan, with about 50% directly invested in AI-related fields.
In 2026, with capacity bottlenecks resolved, Li Auto delivered 34,085 new vehicles in April.
According to official data, by April 30, 2026, Li Auto’s total cumulative deliveries reached 1,669,442 units.
Xie Yan said in an interview that when cars have autonomous physical action capabilities, their products increasingly resemble “embodied intelligent devices.”
At this point, underlying computing power is not only a cost center but also determines whether the company can secure a ticket to the embodied intelligence era in the elimination race.
In interviews with media including Jiemian News, Xie Yan also detailed for the first time the decision-making background, technological breakthroughs, and organizational innovations behind the self-developed Mach 100 chip.
Image source: Li Auto
Below is the interview transcript, lightly edited and organized by Jiemian News:
Media: When did Li Auto start considering self-developed chips? What were the main considerations and constraints?
Xie Yan: I joined Li Auto in 2022, but the idea of making chips had already emerged in 2021.
At that time, “self-developed chips” was becoming an increasingly common direction in the industry, but we kept asking a deeper question: Why did Tesla initially use Nvidia, then choose to self-develop? What is the underlying logic?
This question was rarely discussed in depth back then, but we believed that only by truly understanding “why” could we decide “how to do it.”
Our choice to self-develop was mainly based on long-term technological evolution judgments.
First, the exponential growth in computing power demand. In 2022, the Scaling Law of large language models was not widely recognized, but we vaguely sensed that bigger compute would bring higher performance and better experience.
If AI capabilities keep growing, and fully replacing human drivers at the L4 level is still a long way off, then the demand for computing power ahead is enormous.
Faced with increasing compute needs, relying on external vendors’ iteration speeds would be relatively passive.
Second, the bottleneck of underlying computing architecture.
After 2020, traditional von Neumann architectures became a limiting factor for AI development.
CPUs and GPUs are both optimizations built on this architecture, but we believed it was possible to design a native AI architecture: an entirely new computing architecture optimized for AI, with vast room for innovation from software to hardware.
Looking back at the history of human and computer development, breakthroughs in computer architecture often arise from needs unmet by previous generations of technology.
Intel once thought graphics computing didn’t need a dedicated architecture, just a CPU would suffice, but Nvidia launched GPUs specifically for graphics, and now their market values have reversed.
Similarly, today using GPUs or GPGPU for AI computation is possible, but inefficient.
If AI computing is the fastest-growing form of computation, then dedicated architectures for AI services are necessary.
To be a top company, making AI chips is a threshold you must cross.
This vertical integration capability is a value that supply vendor models cannot provide.
Media: What issues have you encountered in practical scenarios with self-developed chips? Why does Mach 100 adopt a data flow architecture instead of the popular Chiplet technology?
Xie Yan: The most direct issue is the cost of compute.
As VLA large models and world models evolve, the compute demand for edge AI inference continues to grow.
When designing the chip, we must plan for future needs, not just current ones.
If a supplier's solution could deliver three times the performance at half the price, we might not need to develop our own; in reality, that is hard to achieve. Suppliers must serve all customers, which makes highly customized solutions for a single client difficult.
In architecture choice, Mach 100 is a large SoC without using Chiplet technology.
For AI inference chips, memory bandwidth is critical.
We designed very large distributed on-chip SRAM, which means we do not need to move large amounts of data off-chip through DDR; doing so would degrade performance.
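The point about on-chip SRAM can be illustrated with a simple roofline-style calculation: achievable throughput is capped by either peak compute or memory bandwidth times arithmetic intensity, whichever is lower. The sketch below uses the article's 1,280 TOPS peak figure, but the bandwidth and intensity numbers are invented for illustration and are not Mach 100 specifications.

```python
# Back-of-envelope roofline sketch of why large on-chip SRAM matters for
# AI inference. Bandwidth and intensity values are assumptions, not
# Mach 100 specifications.

def achievable_tops(bandwidth_gb_s, ops_per_byte, peak_tops):
    """Attainable throughput is min(peak compute, bandwidth * intensity)."""
    memory_bound_tops = bandwidth_gb_s * ops_per_byte / 1000.0  # GOPS -> TOPS
    return min(peak_tops, memory_bound_tops)

PEAK = 1280      # TOPS per chip, per the article
INTENSITY = 100  # assumed ops per byte for a transformer-style workload

# Assumed bandwidths: ~100 GB/s off-chip DDR vs ~20 TB/s aggregate on-chip SRAM.
ddr = achievable_tops(100, INTENSITY, PEAK)      # memory-bound: 10.0 TOPS
sram = achievable_tops(20_000, INTENSITY, PEAK)  # compute-bound: full 1280 TOPS

print(f"DDR-bound:  {ddr} TOPS of {PEAK}")
print(f"SRAM-bound: {sram} TOPS of {PEAK}")
```

Under these assumed numbers, a chip fed only from DDR would realize less than 1% of its peak compute, which is why keeping data on-chip is decisive for inference performance.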
Media: From deciding to self-develop chips in 2021 to now having Mach 100 ready to be installed in the all-new Ideal L9, is the development pace in line with expectations?
Xie Yan: It took about three and a half years.
Basically on schedule, even faster at some points.
From project initiation in November 2022 to tape-out in 2024 and mass production in 2026, the entire cycle was just over three years, which is quite fast for an automotive-grade chip with a completely new architecture.
Moreover, a successful tape-out on a 5nm process is relatively rare in the history of complex chip development.
Media: During the process, what is the core reason your team could achieve mass production speed within 3.5 years? How will you balance high R&D costs in the future?
Xie Yan: The key is integrated software and hardware design.
The most time-consuming part of chip design isn’t the physical implementation but understanding and analyzing requirements.
A complex SoC with a new architecture usually takes 4 to 6 years; we did it in just over three.
The secret is the joint design approach: the chip, model, and autonomous driving teams worked together from day one.
It’s not about designing the chip first and then adapting software, but defining architecture while running models and verifying performance simultaneously.
For example, with the rise of large models in 2024, we quickly optimized the core for Transformer models within a month.
External suppliers or outsourcing firms wouldn’t be able to adapt so rapidly to such technical shifts.
This close, cross-departmental collaboration is the fundamental reason for our speed advantage.
Regarding costs, the industry often talks in terms of price "per chip," but that ignores differences in die size.
The correct calculation is volume × chip area.
As the AI compute demand per vehicle multiplies, only with tens of thousands or more units can self-developed chips significantly reduce high costs.
We’ve estimated that once vehicle production reaches a certain scale, the total AI silicon area needed will surpass that of smartphones, making self-developed chips economically very viable for top automakers.
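The economics described above amount to a build-versus-buy calculation: a self-developed chip carries a large fixed engineering cost (NRE) that must be amortized over volume, while a supplier chip carries no NRE but a higher unit price. The sketch below illustrates the crossover; every figure in it is an invented assumption for illustration, not a Li Auto number.

```python
# Hypothetical build-vs-buy sketch of the interview's cost argument.
# NRE, unit costs, and the supplier price are all invented assumptions.

def build_cost_per_vehicle(nre_usd, unit_cost_usd, chips_per_vehicle, vehicles):
    """Self-developed: fixed NRE amortized over volume, plus per-unit silicon."""
    return nre_usd / vehicles + unit_cost_usd * chips_per_vehicle

def buy_cost_per_vehicle(chip_price_usd, chips_per_vehicle):
    """Supplier chip: pay the list price per chip, with no NRE."""
    return chip_price_usd * chips_per_vehicle

NRE = 500e6                # assumed one-off cost of a 5nm SoC program
UNIT, SUPPLIER = 300, 900  # assumed per-chip cost, self-built vs supplier
CHIPS = 2                  # e.g. a dual-chip high-end configuration

for vehicles in (50_000, 200_000, 1_000_000):
    build = build_cost_per_vehicle(NRE, UNIT, CHIPS, vehicles)
    buy = buy_cost_per_vehicle(SUPPLIER, CHIPS)
    print(f"{vehicles:>9,} vehicles: build ${build:,.0f}/car vs buy ${buy:,.0f}/car")
```

Under these made-up figures, buying wins at 50,000 or 200,000 vehicles, while building wins at 1,000,000; the crossover point is exactly the scale effect the interview describes.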
Image source: Li Auto
Media: After deploying Mach 100 chips, what tangible benefits will users experience?
Xie Yan: Larger chip compute power combined with more efficient inference will make the car “more human-like” in operation, reflected in several aspects.
First, seeing farther and more accurately, enabling autonomous driving to better understand the 3D world at longer ranges and with greater precision.
Second, smoother decision-making and control, supported by larger models, with compute power as the foundation for more human-like, less abrupt driving behavior.
Third, faster response times: from visual sensor input to inference, and from the final output to the by-wire chassis, the Mach 100's dataflow architecture greatly reduces intermediate latency and processes sensor signals at higher frame rates.
Long-term, we aim to provide a sense of reassurance, making the “driver’s” cognition match that of most human drivers.
Additionally, Mach 100, as a general-purpose chip, is not limited to autonomous driving. It’s more like a universal AI platform that can be continuously upgraded via software.
Our logic aligns with Tesla’s—besides autonomous driving, this chip can run AI inference algorithms for robots, and in the future, expand new capabilities like a smartphone.
Media: How is Mach 100 deployed on the all-new Ideal L9? Will different versions with varying compute power be launched for different price segments?
Xie Yan: On the all-new Ideal L9, we use virtualization at the bottom layer so that one Mach 100 chip serves both the autonomous driving (AD) domain and the central domain controller (XCU), eliminating the need for a separate XCU controller.
In terms of versions, we will only offer one; there will be no high- or low-performance variants.
Our core differentiation is strong AI capability.
As long as our self-developed chip can deliver higher compute and lower BOM costs, with excellent cost-performance, we want every vehicle to use it.
For high-end models like the all-new Ideal L9 Livis, we will equip two chips to provide ample, top-tier compute power.
Media: After mass production of self-developed chips, will hardware-software collaboration accelerate technological iteration? How will future hardware update cycles be planned to support L4 autonomous driving?
Xie Yan: Once in mass production, hardware and software will be even more tightly integrated.
On the software side, optimization can significantly impact performance with the same hardware.
On the hardware side, we will jointly plan the next-generation chips.
While we can’t disclose the iteration pace now, we believe AI will continue to grow, necessitating ongoing iteration.
As for L4, there’s no universally accepted timeline yet, but the compute foundation must always keep advancing.
Media: What is the current chip capacity? With more companies developing their own AI chips, will foundry capacity face shortages in the future?
Xie Yan: Foundry capacity is tight right now, and substrate and packaging capacity are also very constrained, but our supply is secured.
The explosive growth of AI applications will lead to superlinear increases in compute demand, making foundry capacity more scarce.
However, the evaluation metrics in the industry are very simple—cost and performance.
Many companies claim they need large capacity, but effective capacity is limited, and only products that win on those metrics will truly establish a foothold in the market.
Media: You mentioned that to be a top automaker, you need to develop chips like Apple. Will the competition pattern resemble the smartphone industry if all carmakers develop their own chips? Will suppliers provide chips to non-top-tier automakers?
Xie Yan: That’s a good analogy.
Only top-tier automakers with sufficient scale and recognition can sustain the high costs of self-developing chips.
Conversely, self-research helps these leading companies strengthen their differentiation, like Apple and Huawei in smartphones.
For the large mid- and low-tier market, they will still rely on third-party suppliers for general-purpose chips at different price points.
Media: Recently, Li Auto made major organizational adjustments, shifting from a focus on vehicle functions to building a “digital human” logic. What is the core motivation behind this change?
Xie Yan: The fundamental logic is that organizational structure must match the business direction.
We believe cars are becoming more like physical-world robots.
A vehicle equipped with high-resolution cameras and lidar as its eyes, with Mach 100 chips, will have AI compute power surpassing the total of personal computers and smartphones.
More importantly, the vehicle will have autonomous action capabilities in 3D physical space.
In recent years, the rapid development of intelligent agent technology has made us focus on making products more proactive.
Previously, cars were passive tools; in the future, they will actively think about task completion paths.
Autonomous driving is the first closed-loop task in 3D space that can be actively completed.
Since the product has essentially become an embodied intelligent device, our R&D organization must be restructured.
Media: After the popularity of lobster, everyone is looking forward to the arrival of Agents. What advantages do vehicles as physical-world intelligent agents have?
Xie Yan: Digital-world agents mainly operate in mobile electronics, but physical-world agents must be able to move atoms.
Cars are naturally excellent embodied intelligent products—they come with wheels, power systems, sensors, and massive compute bases, giving them action ability.
This is much easier than building a robot from scratch.
Moreover, the huge scale of the automotive industry can support rapid iteration of sensors, compute, and by-wire chassis.
Once this system is highly optimized and scaled in cars, migrating to other embodied intelligent bodies will be straightforward.
Just like the smartphone industry’s maturity drove the rise of smart devices, the scale and intelligence of the automotive industry are prerequisites for further evolution into embodied intelligence.
This article is sourced from: Polyhedron InterfaceX
Risk warning and disclaimer:
Market risks exist; investments should be cautious.
This article does not constitute personal investment advice, nor does it take into account individual users' specific investment goals, financial situations, or needs.
Users should consider whether any opinions, views, or conclusions herein are suitable for their circumstances, and invest at their own risk.