The Achilles' heel of AI is not NVIDIA

Writing: Alan Walker in Silicon Valley

Silicon Valley is burning through 700 billion dollars a year, betting not on whose model is smarter but on whether a handful of narrow places on Earth can produce the parts on time. Anyone who doesn't understand this layer will spend the next three years building castles on quicksand.

At 11:30 p.m., down a small alley off University Ave, there's a tiny restaurant only old-timers know. The owner has run it in the Bay Area for twenty years; the menu has never been on Yelp, and behind the bar hangs a 1998 group photo of Sun Microsystems employees. Alan Walker leans back in the corner booth, three whiskeys in, his tone getting colder: "Tonight I'll tell you some real stuff. This round of AI, everyone's looking in the wrong direction."

You think AI is a software problem. Wrong — it’s a problem with a few machines.

"Look at the news and all you see is the bubble layer." Alan pushes his glass aside. "Model benchmarks, who raised a hundred billion, which startup's valuation doubled: all of it churns over every six weeks and has little to do with AI's real fate. What actually determines the next decade is how hyperscalers' 700 billion dollars of annual capex turns into wafers, fiber, concrete, and electricity. Physical-world stuff."

“Follow this line down, layer by layer, and it gets colder — everyone’s arguing around the same five companies, ignoring the layers below. But the real constraints are underneath. Not in the cloud, not in the models, not in keynote slides. In a few buildings, in small towns, in thousands of engineers around the world you’ve never heard of. To put it plainly — AI isn’t a software problem at all; it’s a problem with a few machines you’ve never heard of. If you don’t understand this layer, any investment judgment is just guesswork.”

A machine in a small Dutch town determines the ceiling of all AI.

"Let me tell you about one machine." Alan lights a cigarette (the owner nods his approval). "There's a town in the Netherlands called Veldhoven, home to a company called ASML. One machine sells for around four hundred million dollars, is the size of a bus, and is the most complex commercial object ever made in human history, bar none. What does it do? Inside a vacuum chamber it fires fifty thousand droplets of molten tin per second and hits each one with a laser twice: a first pulse flattens the tin into a pancake, a second vaporizes it into a plasma hotter than the surface of the sun, which emits extreme ultraviolet light at 13.5 nanometers."

"This light is absorbed by air, even by glass, so lenses are useless; only mirrors work. Who polishes the mirrors? Zeiss, in Germany. How smooth? Scaled up to the size of Germany, the tallest bump would be less than a millimeter. The light bounces between the mirrors and finally strikes a silicon wafer moving at several meters per second, held to an alignment of a few atoms by magnetic levitation, printing circuits thinner than a virus." He pauses. "Five thousand suppliers worldwide go into assembling this machine. The light source comes from a California company; ASML had to buy the whole firm back then because no one else could make it stable. No country on Earth can build it alone. China has spent hundreds of billions trying to route around it and still can't. This isn't just industrial equipment; it's a civilizational artifact. The thin software shell of AI is built on top of things like this."

The deeper you go, the fewer players there are.

Marcus chimes in: "How many of these narrow places are there?" Alan laughs. "More than you'd think. Break down the AI supply chain: frontier large models rely on GPUs; GPUs rely on CoWoS packaging; CoWoS depends on a few TSMC lines in the Southern Taiwan Science Park; HBM memory depends on European companies that make hybrid bonders; 800G-plus optical modules depend on indium phosphide (InP) substrates; InP depends on two furnace suppliers whose crystals take two weeks to grow a batch; and all of it rides on transformers, gas turbines, and a power grid never designed for AI. Each layer down, the players halve. At the bottom there are often only two or three companies left, sometimes just one."
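The fan-in Alan describes can be sketched as a toy dependency walk. The layer names below follow his list, but every supplier count is an illustrative assumption, not a sourced figure:

```python
# Toy sketch of the supply-chain fan-in Alan describes. Each layer lists
# an ASSUMED number of non-substitutable suppliers, for illustration only.
SUPPLY_CHAIN = [
    ("frontier large models", 5),
    ("GPUs", 2),
    ("CoWoS advanced packaging", 4),
    ("hybrid bonders", 2),
    ("InP substrates (non-Chinese)", 2),
    ("InP crystal-growth furnaces", 2),
    ("large gas turbines", 3),
]

def bottleneck(chain):
    """Return the layer with the fewest suppliers: the load-bearing pillar."""
    return min(chain, key=lambda layer: layer[1])

if __name__ == "__main__":
    layer, count = bottleneck(SUPPLY_CHAIN)
    print(f"Narrowest layer: {layer} ({count} suppliers)")
```

The point of the sketch is structural: whichever layer has the smallest count sets the throughput of everything above it, regardless of how well-funded the top of the stack is.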

He takes a sip of his drink. "From a distance, civilization looks like a smooth sheet of water; up close, it rests on a few load-bearing pillars. Break one pillar and the whole system stops. An Icelandic volcano erupts, a ship wedges into the Suez Canal, an export-control notice drops on a Tuesday afternoon: you never see these pillars in normal times, and by the time you do, it's too late. This game is about who can draw the pillar map earliest. Most financial media stare at the lightbulb on top of the pillar with no idea what the rebar underneath looks like."

The first bottleneck: indium phosphide. Two companies determine the speed of AI.

"I'll tell you about the first bottleneck: InP, indium phosphide." Alan's voice drops. "All 1.6T coherent optical modules rely on this substrate. Guess how many non-Chinese suppliers produce 4-inch or 6-inch polished InP substrates? Two. One is a division of Sumitomo Chemical; the other is a small American company whose 10-K most investors have never opened. Its backlog is at a record high and its capacity is doubling, yet the market still prices it like a cyclical telecom-materials stock, same as eighteen months ago."

He taps the table. "Do the math: a million-GPU training cluster on a fat-tree topology needs at least one optical module per GPU, often more. That's millions of modules, each containing a small piece of InP wafer. Global non-Chinese InP capacity, even after expansion, is a single line in a spreadsheet. The math doesn't balance: prices rise, or allocation tightens, or both. This isn't a cyclical stock; it's a structural shortage. If you don't understand this layer, you'll pay tuition for the next three years."
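Alan's back-of-envelope can be written down explicitly. The cluster size and "at least one module per GPU" come from his claim; the dies-per-wafer and yield figures are illustrative assumptions to show the shape of the calculation, not industry data:

```python
# Back-of-envelope version of Alan's InP math. All capitalized constants
# below are ASSUMPTIONS for illustration, not sourced figures.
GPUS = 1_000_000          # "a million-GPU training cluster"
MODULES_PER_GPU = 1.5     # "at least one optical module, often more"
DIES_PER_WAFER = 1_000    # assumed usable photonic dies per InP wafer
YIELD = 0.5               # assumed end-to-end yield

modules = GPUS * MODULES_PER_GPU
wafers_needed = modules / (DIES_PER_WAFER * YIELD)
print(f"{modules:,.0f} optical modules -> ~{wafers_needed:,.0f} InP wafers")
```

Even with generous assumptions, one cluster consumes thousands of polished InP wafers, which is why a two-supplier substrate market becomes the binding constraint.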

The second bottleneck: advanced packaging. TSMC is also waiting in line.

"The second bottleneck: advanced packaging." Alan orders another drink. "Plenty of companies can fabricate a cutting-edge wafer; few can do advanced packaging. CoWoS, ABF substrates, hybrid bonders: these set the ceiling for every product after Blackwell and Rubin. Aligning two dies to nanometer precision and bonding them into one working logic chip is something only about four companies in the world can do."

"You won't believe it: one European company trades on the U.S. pink sheets because American brokers still can't handle secondary listings on European exchanges. The mispricing isn't subtle; it's structural undervaluation in broad daylight." He leans back. "This packaging layer is the real bottleneck. Even TSMC waits in line for hybrid bonders. In the AI arms race, the question isn't who can buy GPUs, because everyone has money; it's who can lock in packaging capacity. That capacity caps compute growth in 2027 and 2028. Nobody reports on it because nobody can interview the engineers in those factories."

The third bottleneck: electricity. Three companies control it, orders booked until 2030.

Kai leans over: "What about power? Silicon Valley keeps shouting about data center power shortages." Alan smiles bitterly. "The power shortage is the symptom; the root cause is gas turbines. A gigawatt-scale data center park needs eight to twelve large industrial gas turbines, plus as many backup units. Only three companies in the world make them: GE Vernova, Siemens Energy, Mitsubishi Heavy Industries. Order books run to 2030; pay today and you take delivery in 2029. Backup generators, switchgear, medium-voltage transformers: all in the same state, with lead times measured in years."

He shakes his head. "Hyperscalers' 2026 capex guidance starts with a seven: 700 billion dollars. That money cannot become electricity any faster than turbines can be cast and shipped. The bottleneck was never capital; this round, capital is abundant. The bottleneck is the people who make the parts. The core blades of a gas turbine take over ten months to cast, heat-treat, inspect, and assemble. The ceiling of AI is ultimately set by the capacity of a few foundries. TV news won't cover this; it isn't sexy. But physics is physics, and no amount of money shortens a casting cycle."
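The turbine arithmetic follows the same pattern. The turbines-per-gigawatt and backup ratio come from Alan's figures; the planned buildout and annual supply numbers are illustrative assumptions to show why order books stretch for years:

```python
# Sketch of why capex cannot outrun turbine production. PARK_GW and
# ANNUAL_SUPPLY are ASSUMPTIONS for illustration, not sourced figures.
from math import ceil

PARK_GW = 5          # assumed planned buildout across one hyperscaler, in GW
TURBINES_PER_GW = 10  # midpoint of "eight to twelve" per GW-scale park
BACKUP_RATIO = 1.0    # "plus the same amount of backup units"
ANNUAL_SUPPLY = 60    # assumed units/year the three makers can allocate

primary = PARK_GW * TURBINES_PER_GW
total = ceil(primary * (1 + BACKUP_RATIO))
years_of_supply = total / ANNUAL_SUPPLY
print(f"{total} turbines needed -> ~{years_of_supply:.1f} years of assumed supply")
```

One mid-sized buildout under these assumptions consumes on the order of a year or two of the entire industry's output, before anyone else's order is filled, which is how a 2026 order becomes a 2029 delivery.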

The fourth bottleneck: November 27, 2026. The biggest catalyst of the decade.

Alan checks his watch. “The fourth bottleneck, the hottest — critical minerals. By the end of 2025, China paused exports of gallium, germanium, and antimony. Listen carefully — it’s a ‘pause,’ not a ‘cancel.’ When does the pause end? November 27, 2026.”

He spells it out. "That day is the biggest known catalyst on the entire financial calendar between now and the end of 2030. Turn on CNBC, Bloomberg, any mainstream financial outlet: nobody is paying attention." He sneers slightly. "If the
