This article helped me understand AI: The application layer is the hottest, the foundation layer makes the most money
Written by DeepThink Circle
Most people think AI is just a chatbot. You open ChatGPT and ask it to help you edit an email. It does, and it feels like magic. You happily close the page, thinking you understand what AI is. But that’s like swiping a credit card at a restaurant and then believing you understand how Visa makes money—you used the product, but you didn’t see the system.
Investor Anish Moonka recently published an in-depth article systematically analyzing the value chain of the AI industry. It took him nearly a year, he admits, to truly understand how money flows within AI, and he took some wrong turns along the way: he focused on the visible products, ChatGPT, Claude, and Gemini, while $700 billion quietly flowed into infrastructure companies whose names most people can't even recall. Chips you've never heard of, packaging technologies that sound fabricated, cooling systems, power plants. Concrete being poured in Texas, Iowa, and Hyderabad.
This article gave me great insight. It made me realize that our understanding of AI might be misaligned from the start. We’re seeing only the tip of the iceberg, while the real wealth creation is happening quietly beneath the surface.
Five Layers of the Cake: Why No One Talks About the Bottom Four Layers
Nvidia CEO Jensen Huang, at the Davos Forum in January 2026, described AI as a five-layer system: energy, chips, cloud computing, models, and applications. He called the entire system “the largest infrastructure build in human history.” Anish Moonka refers to this framework as the AI Stack, noting that each layer supports the one above and that capital flows bidirectionally between these layers.
This five-layer structure is quite understandable. The energy layer supplies power; AI data centers consume staggering amounts of electricity—one large training run consumes as much power as a small town in a year. The chip layer provides specialized processors for massive mathematical calculations—far beyond what regular laptops have. The cloud computing layer is a vast warehouse filled with these chips, connected via ultra-fast networks. The model layer is the actual AI software, the “brain” that learns patterns from data. The application layer is what people actually use—ChatGPT, Google Search, bank fraud detection systems, and so on.
I found an interesting phenomenon: almost all discussions about AI focus on the fifth layer—the application layer. Because that’s what we can see, touch, and use. But Anish points out a key fact: focusing only on the fifth layer ignores 80% of the full picture. For investors, entrepreneurs, or anyone trying to understand the direction of the world, what’s truly important is understanding how money flows between these layers—how it concentrates, compounds, and accumulates—and right now, it’s flowing into places most people aren’t paying attention to.
Think about the meaning of “infrastructure.” Roads, power grids, water systems—these are what keep civilization running, and no one thinks about them until they break down. AI is becoming that kind of infrastructure—invisible, essential, and extremely costly to build. This also explains why no one discusses data center cooling systems or grid capacity at cocktail parties, but precisely this “lack of discussion” signals where real money is flowing.
Where Is the Money Going? A Counterintuitive Truth
Anish reveals some astonishing numbers in his article. By 2026, the four major cloud providers (Amazon, Microsoft, Google, and Meta) are expected to spend between $650 billion and $700 billion on capital expenditure (capex). For scale, that is roughly Switzerland's annual GDP. Of that, roughly 75%, or about $450 billion, will go directly into AI infrastructure: not chatbots or applications, but buildings, chips, cables, and cooling systems.
This number made me rethink the entire AI industry logic. Before anyone uses ChatGPT, someone must build a data center the size of a shopping mall, fill it with tens of thousands of dedicated processors, connect them with network equipment worth more than most companies’ total value, and supply enough electricity to power a small city every day. That’s what happens in layers one through three—these invisible layers are where serious capital is being deployed at scale.
But there’s a deeper contradiction. Everyone thinks companies like OpenAI are making big money, and they are. OpenAI reached $20 billion in annual recurring revenue (ARR) by the end of 2025, skyrocketing from $6 billion a year earlier and $2 billion two years prior. A tenfold growth in two years—no company in history has expanded so rapidly from that base.
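The ARR trajectory quoted above can be sanity-checked with a line of arithmetic. A minimal sketch, using the article's dollar figures rather than official filings; the compound-growth formula is standard:

```python
# Sanity-check the OpenAI ARR trajectory cited above.
# Values are the article's figures in billions of USD, not official filings.
arr = {2023: 2.0, 2024: 6.0, 2025: 20.0}  # year-end annual recurring revenue, $B

multiple = arr[2025] / arr[2023]           # growth over the two-year span
cagr = (arr[2025] / arr[2023]) ** 0.5 - 1  # implied compound annual growth rate

print(f"Two-year multiple: {multiple:.0f}x")  # 10x, the "tenfold" in the text
print(f"Implied CAGR: {cagr:.0%}")            # roughly 216% per year
```

A tenfold multiple over two years implies the revenue base more than tripled each year, which is the sense in which no company has expanded so rapidly from that base.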
Yet, Anish reveals a critical fact: in 2025, OpenAI burned through about $9 billion in cash, and in 2026, cash burn is expected to reach $17 billion. Their inference costs (the actual cost of running AI when you ask questions) hit $8.4 billion in 2025 and are projected to reach $14.1 billion in 2026. They estimate it will take until 2029 or 2030 to turn cash flow positive.
Where does all this cash go? Anish provides the answer: it flows down the entire tech stack. It goes to Microsoft Azure (which, by 2032, will take 20% of OpenAI's total revenue), to Nvidia for chips, to the companies building and equipping data centers, and to the power companies generating electricity. There's an almost circular pattern: Microsoft invests in OpenAI, OpenAI spends money on Azure, Azure uses the revenue to buy more Nvidia chips, Nvidia reports record profits, and everyone celebrates. Cash keeps flowing downward.
I believe this reveals a fundamental misconception: most users sit at the top of the tech stack, while most profits sit at the bottom. This disconnect is at the core of the entire investment logic. As Anish puts it, it is the first lesson of the AI value chain: revenue flows upward, capital flows downward. As investors or observers, we are often drawn to revenue growth while overlooking that capital accumulation is the real moat.
History Repeats: Lessons from the Power Revolution
Anish makes a brilliant historical analogy in his article. If you want to understand what AI is doing, study the power revolution from 1880 to 1920. When Thomas Edison built the first commercial power plant on Pearl Street in Manhattan in 1882, people thought electricity was just a novelty, a fancy way to light rooms. Why would anyone need it when gas lamps worked fine?
But within just 40 years, electricity transformed every industry on Earth—manufacturing, transportation, communication, healthcare, entertainment. The companies that won weren’t the inventors of the lightbulb but those who built power plants, laid copper wires, and manufactured generators: General Electric, Westinghouse, utilities, copper miners, construction firms.
The same pattern is playing out in AI, only compressed from decades into a few years. Anish calls this phenomenon "Infrastructure Gravity": whenever a new computing platform appears, the initial wealth creation happens in the picks and shovels, the infrastructure. Applications come later and garner the media attention, but infrastructure captures the profit margins.
The numbers make this clear. Nvidia reported $215.9 billion in revenue for fiscal year 2026 (ending January 2026), up 65% from the previous year. Its data center division alone earned $62.3 billion in the most recent quarter, a 75% increase, and now accounts for over 91% of quarterly revenue. That is a single company booking $68 billion in one quarter, with nine-tenths of it coming from a single business line.
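The "nine-tenths" claim can be checked directly from the quarterly figures quoted above. A quick sketch, again using the article's numbers rather than Nvidia's filings:

```python
# Check the data-center share of Nvidia's quarterly revenue,
# using the figures cited above ($B; the article's numbers, not filings).
quarterly_total = 68.0  # total revenue in the quarter
data_center = 62.3      # data center division revenue in the same quarter

share = data_center / quarterly_total
print(f"Data center share of the quarter: {share:.1%}")  # 91.6%, i.e. "over 91%"
```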
TSMC, which manufactures Nvidia chips and nearly all other mainstream chips, held nearly 70% of the global wafer foundry market in 2025, with sales reaching $122.5 billion. The closest competitor, Samsung, had only 7.2%. Anish comments that this dominance would make even Standard Oil uneasy.
I especially agree with Anish's point here: ask anyone what the internet revolution was about, and they'll say Google, Amazon, Facebook. But ask where the early money was made, and the answer is Cisco, Corning, and the companies that laid the fiber. Same story, different decades. Infrastructure always wins first; the question is how long that window stays open.
Investors’ Map: Layer-by-Layer Opportunity Breakdown
Anish devotes a significant part of his article to breaking down investment opportunities layer by layer. I find this especially valuable because it turns abstract concepts into actionable investment frameworks.
Layer One: Energy. AI data centers consume enormous amounts of electricity—by 2026, estimated to use about 90 terawatt-hours annually, roughly ten times 2022 levels. This creates a direct investment thesis: any company that can generate, transmit, and reliably supply power to data centers will benefit. Jensen Huang’s statement in October 2025 is telling: “Data centers may generate their own power faster than connecting to the grid.” This means tech companies are becoming their own utilities, bypassing traditional grids. This trend suggests energy infrastructure investments might be closer to tech than most realize.
Layer Two: Chips. This is the most familiar layer, thanks to Nvidia. But Anish points out that the chip layer is far more complex. It has its own sublayers: designers (Nvidia, AMD, Broadcom), manufacturers (TSMC dominates with 70% market share), equipment suppliers (ASML is the sole producer of EUV lithography machines), memory suppliers (SK Hynix, Samsung, Micron), and packaging tech providers.
The concentration here is striking. Nvidia controls about 92% of the AI data center GPU market. TSMC manufactures chips for nearly all major chip designers. ASML is the only supplier of EUV lithography machines. One company designs the chips, one manufactures them, and one builds the machines that make manufacturing possible. Anish notes that this concentration is both an investment argument and a geopolitical risk: extreme centralization means high profits, but also high exposure.
Layer Three: Cloud Computing and Data Centers. The market is dominated by three giants: Amazon Web Services (31%), Microsoft Azure (24%), and Google Cloud (11%). But the layer extends far beyond these. Foxconn now assembles about 40% of global AI servers, Arista Networks and Credo build network infrastructure, Vertiv handles liquid cooling, data center REITs own land and buildings, and some even pour concrete.
Anish mentions a shocking figure: US Bank estimates that in 2026, 90% of the cloud giants' operating cash flow will go toward capex, up from 65% in 2025. Morgan Stanley projects these companies will borrow over $400 billion this year to fund construction, more than double the $165 billion of 2025. A single year's bond issuance of $400 billion, just to build data centers: an unprecedented scale.
Layer Four: Models. The “brain” layer, including OpenAI (GPT series, over $20 billion ARR), Anthropic (Claude, reportedly around $19 billion annualized revenue in early 2026), Google DeepMind (Gemini), Meta AI (Llama), etc. Anish’s assessment: this layer is both the most hyped and the least profitable. The business model is structurally problematic—spending more on computation makes models better, but this expenditure grows faster than revenue. It’s like running a restaurant where each dish requires more expensive ingredients, but customers expect prices to stay the same. Profit margins are continually squeezed.
Layer Five: Applications. The layer we see every day—ChatGPT, Google Search, Microsoft Copilot, etc. It’s the broadest and most crowded layer, destined to be the largest total addressable market, but currently the thinnest in profit and most uncertain in competition. Anish notes that differentiation here depends on data. Companies with proprietary, exclusive data will have lasting advantages—Salesforce with enterprise CRM data, Bloomberg with financial data, Epic with medical records.
I particularly agree with Anish’s view: the best returns over the next 3–5 years will come from investing in infrastructure now and applications later. The smartest capital is already positioned accordingly. The companies that will truly win the application layer are those that own data others cannot access—many of which don’t even call themselves AI companies.
Is This a Bubble? A Necessary Question
Anish directly addresses a core skepticism: “Is this just a replay of the internet bubble? Massive infrastructure spending, no profits, everyone hyped up?” His answer is convincing.
The key difference lies in demand timing. During the internet bubble, companies built infrastructure ahead of demand. Fiber networks and servers went up while users were still dialing in with modems. The infrastructure was in place, but demand only exploded five to seven years later, and nearly everything built in between was wiped out.
By 2026, AI demand already exists and is growing rapidly. Nvidia can’t produce chips fast enough; TSMC’s advanced packaging capacity is sold out; cloud rental prices are rising instead of falling; OpenAI gained 400 million active users from March to October 2025. Models are being used, computation is being consumed, customers are paying.
But Anish honestly points out three major risks. First, misallocation of capital: if AI service revenues don't grow fast enough to justify over $650 billion in spending, some companies will face severe margin compression; even Amazon's free cash flow could turn negative this year. Second, concentration risk: TSMC's dominance, ASML's monopoly on EUV machines, and Nvidia's control of 92% of AI data center GPUs mean that any geopolitical shock or natural disaster could ripple through the entire stack. Third, the DeepSeek challenge: in January 2025, the Chinese AI lab DeepSeek achieved near-frontier performance at a fraction of the training cost, challenging the assumption that more spending equals better AI.
I believe Anish's honesty about risks makes his analysis more credible. He doesn't shy away from these issues; he lays them out clearly. Even accounting for these risks, McKinsey estimates that global data center investment could reach $6.7 trillion by 2030, and PwC projects AI could contribute $15.7 trillion to global GDP by then. Even if these numbers are off by 50%, we're still talking about the biggest technology-driven economic shift since the internet.
Anish’s words resonate: “Be skeptical of models, skeptical of timelines, but don’t be ignorant of the supply chain. These are different things. One is a healthy intellectual stance; the other can cost you money.”
Playing the Game at the Right Layer
Anish uses a gaming analogy to summarize investment strategy. Think of AI as a five-level video game, each with different difficulty and rewards. The energy layer is the tutorial—low risk, steady returns. The chip layer is the boss fight—highest profit but hardest. The cloud layer is multiplayer servers—big players take a cut. The model layer is PvP—brutal competition, most players eliminated. The application layer is an open world—limitless possibilities but no guaranteed loot.
His meta-strategy is simple: you don't have to play all five levels. Most people head straight for the fifth because it's the most visible, but the smart money is playing levels two and three, where the rewards are richest right now.
The value of this framework is that it clarifies how your position in the tech stack determines what you should focus on. If you're non-technical, you don't need to understand how GPUs work; you need to understand that someone must make them, host them, and power them, and that these someones are public companies. If you're technical, you already know models are advancing, but you may underestimate how quickly physical constraints become the bottleneck. If you're an investor, the AI value chain is five different trades, each with distinct risks and rewards; treating "AI" as a single industry is as naive as treating "technology" as a single industry in 1998.
Finally, Anish notes that this infrastructure advantage won’t last forever. At some point, infrastructure matures, applications integrate, and value shifts upward—like the internet era, where Amazon, Google, and Facebook captured more value than fiber companies and server makers. But we’re not there yet; we’re still in the infrastructure phase, the pick-and-shovel stage. And the pick-and-shovel stage is printing money.
Reading Anish’s long article, my biggest takeaway is understanding a simple but profound truth: consumers see products; investors see supply chains; the best investors see the supply chain before products hit the market. Five years from now, the winners’ names will seem obvious—they always do. The game is about seeing the structure before others catch up.
In ten years, understanding the AI tech stack will be as fundamental as understanding a balance sheet. Learn the stack, map the layers, follow the capital. That’s the game.