AI Bottleneck Investment Strategy: 14 Targets Covering Every Layer from Power to Lithography

Author: George Kikvadze

Compiled by: Deep Tide TechFlow

Introduction: George Kikvadze, Vice Chairman of the Bitfury Group, proposes a contrarian approach: the most profitable opportunities in AI are not at the model layer but in infrastructure bottlenecks such as power, cooling, memory, and networking. He outlines seven critical "choke points" in AI systems and publicly shares his 14-target portfolio, currently up about 60%. This "bottleneck investing" framework is worth every AI investor's careful review.

To understand where money can be made in AI, don’t just look at headlines—look at where the system is under pressure.

The simplest analogy: today’s AI is like a factory with unlimited orders, but power, cables, and cooling can’t keep up.

This mismatch itself is an opportunity.

After detailed due diligence, we have bet on the following “AI bottleneck” portfolio:

$CEG $GEV $VST $WMB $PWR $ETN $VRT $MU $ANET $ALAB $ASML $LRCX $CIFR $IREN

The real question to ask

Most investors ask: “Who will win AI?”—but that’s the wrong question.

The right question is: Where will the system break? Who is making money fixing it?

In the market, dependencies are leverage.

AI dependencies are not abstract—they are tangible:

  • Megawatt-level power
  • Transformer delivery timelines
  • Rack cooling capacity
  • Memory bandwidth

The economic focus is shifting toward these areas.

The only analytical framework needed

AI expansion → Infrastructure under pressure → Forced investment → Bottleneck → Pricing power → Profit upgrade

When demand is rigid and supply is constrained: prices move first, profits follow, and stock prices are revalued last.

Why now

A few numbers explain the entire issue:

Nearly 50% of U.S. data center projects are delayed, not for lack of demand or capital, but because of power shortages. Transformer lead times have stretched from about 24 months before 2020 to more than 5 years today, while a data center itself takes only about 18 months to build. That gap is the imbalance.
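The mismatch can be made concrete with a quick back-of-envelope calculation using the figures quoted above (an illustration only, not a forecast):

```python
# Back-of-envelope check of the timeline mismatch described above.
# Figures are the article's; treat this as an illustration, not a forecast.
transformer_lead_months = 5 * 12   # transformer delivery: over 5 years
datacenter_build_months = 18       # data center construction: ~18 months

# A finished data center can sit idle this long waiting for grid equipment.
idle_gap_years = (transformer_lead_months - datacenter_build_months) / 12
print(idle_gap_years)  # -> 3.5 (years of potential dead time)
```

In other words, under these figures a building started today would finish roughly three and a half years before its grid equipment arrives.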

By 2026, hyperscalers will spend nearly $700 billion on AI infrastructure, almost 6 times the 2022 figure: Amazon about $200 billion, Google $175-185 billion, Meta $115-135 billion. No one is slowing down.
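A quick sanity check of those capex numbers, taking midpoints of the quoted ranges (the figures are the article's, not independently verified):

```python
# Sum the three hyperscaler capex figures quoted above, taking midpoints
# of the ranges. All numbers are in billions of dollars and come from the
# article; this is arithmetic, not a forecast.
amazon = 200
google = (175 + 185) / 2       # midpoint of $175-185B
meta = (115 + 135) / 2         # midpoint of $115-135B
print(amazon + google + meta)  # -> 505.0, most of the ~$700B total

# "Almost 6 times 2022's figure" implies roughly this 2022 baseline:
print(round(700 / 6))          # -> 117 ($B)
```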

Semiconductors now account for 42% of the S&P 500 IT sector's total market cap, more than double their weight at the bottom of the 2022 bear market and over four times their 2013 weight. They also contribute 47% of the IT sector's forward EPS, nearly triple the 2023 share.

The market is rushing into compute capacity at an unprecedented density.

But compute is no longer the bottleneck.

Capital is flooding into chips, but the real constraints have shifted elsewhere.

This gap is the trading opportunity.

Bottleneck Map: Where is the pressure really?

Power: The foundation

AI cannot expand without power. Period.

To meet AI demand forecasts through 2030, the U.S. needs to add capacity equivalent to its entire current data center power footprint every two years. Nuclear is the only reliable, large-scale baseload source at the scale hyperscalers require, but even the fastest nuclear restarts take years.
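Read literally, "current capacity added every two years" implies the build-out sketched below. The 2024 baseline and the linear schedule are my assumptions, not the author's:

```python
# Illustrative reading of the claim above: the U.S. adds capacity equal
# to today's entire data center fleet every two years through 2030.
# Assumes a 2024 baseline and a linear build-out; both are assumptions.
base = 1.0                   # today's capacity, normalized to 1x
cycles = (2030 - 2024) // 2  # three two-year build cycles
capacity_2030 = base + cycles * base
print(capacity_2030)         # -> 4.0, i.e. 4x today's capacity by 2030
```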

Targets: $CEG $GEV $VST $WMB

These are not utility stocks—they are AI capacity providers. The market has not yet reclassified them. This mispricing is an opportunity.

Constellation Energy ($CEG) operates the largest nuclear fleet in the U.S. and is one of the few providers of large-scale, reliable, zero-carbon baseload power. Hyperscalers are racing to sign long-term power purchase agreements with nuclear suppliers, and Constellation sits directly in that demand path.

GE Vernova ($GEV) is building the generation backbone for the next energy cycle, covering gas turbines, renewables, and grid solutions. When AI demand accelerates, the ability to deploy power quickly and at scale becomes critical, and GE Vernova’s gas turbines and electrification capabilities are at the core.

Vistra Corp ($VST) has a diversified generation portfolio, including nuclear, gas, and retail power, capable of meeting both baseload and peak demands. AI workloads cause highly volatile power needs, making this flexibility especially valuable.

Williams Companies ($WMB) operates one of the largest natural gas pipelines in the U.S., providing fuel to bridge the gap between current demand and future nuclear capacity. In AI infrastructure expansion, natural gas is the fastest way to bring online incremental power. Williams is essentially an energy raw material supplier for AI growth.

Power grid and electrification: Constraints behind the power

Power generation is one thing; transmission is harder.

The U.S. grid interconnection queue now extends beyond 2030. Over the next decade, more than $50 billion in transmission investments will be needed just to meet existing commitments, not including new AI data centers.

Target: $PWR $ETN

The timeline is slipping here, and profit margins are expanding. Companies solving the “last mile” delivery will have lasting, long-cycle pricing power.

Quanta Services ($PWR) is a leading contractor building and upgrading transmission infrastructure, connecting power generation to consumption. When grid congestion becomes the main bottleneck for AI expansion, Quanta is directly on the long-term, non-discretionary capital expenditure path. Its backlog is a forward indicator of grid stress.

Eaton Corporation ($ETN) provides power distribution, switchgear, and power management tech, enabling large-scale, safe, and efficient power delivery. As data centers push toward higher power densities and more complex energy flows, Eaton’s components shift from standardized hardware to critical infrastructure.

Cooling: The silent ceiling

Heat kills performance. Thermodynamics has no software patch.

Next-generation AI facilities aim for 250 kW per rack, compared to 10-15 kW in standard enterprise data centers a decade ago. Liquid cooling is no longer optional but essential infrastructure. Every GPU sold requires corresponding cooling capacity, and this ratio will not change.
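To put the density jump in perspective, here is the multiple implied by the figures above (the numbers are the article's; the calculation is just a ratio):

```python
# Ratio of next-gen AI rack power to the decade-old enterprise baseline
# quoted above. Figures are from the text; this is illustrative only.
ai_rack_kw = 250
enterprise_low_kw, enterprise_high_kw = 10, 15  # old range per rack

print(ai_rack_kw / enterprise_high_kw)  # roughly 16.7x the old high end
print(ai_rack_kw / enterprise_low_kw)   # 25x the old low end
```

Every watt of that extra density has to leave the rack as heat, which is why the text treats liquid cooling as mandatory rather than optional.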

Target: $VRT

Vertiv is close to a monopoly in large-scale data center cooling. It’s one of the most underestimated links in the entire AI stack because no one cares about cooling until a cluster crashes.

Vertiv Holdings ($VRT) designs and deploys thermal management systems to keep high-density AI clusters operational under extreme power loads. As racks shift from air cooling to liquid cooling, Vertiv is at the heart of this structural upgrade cycle, expanding in tandem with AI compute deployment. This is not optional spending but a prerequisite for normal operation.

Memory: The next bottleneck

AI is shifting from compute-limited to memory-limited.

As models grow larger and inference volumes explode, memory bandwidth and capacity become constraints, not raw processing power. High-bandwidth memory (HBM) supply is tight. The top three AI memory suppliers control over 90% of global HBM output. Micron is the main Western beneficiary.

Core target: $MU

This is the next wave of profit upgrades. Most portfolios are not yet positioned for it; by the time the market reacts, they will be.

Micron Technology ($MU) is one of the few global manufacturers capable of large-scale production of advanced HBM. HBM is critical for AI training and inference workloads. When memory becomes a system performance bottleneck, Micron shifts from cyclical supplier to a structural beneficiary of AI demand. This shift is not yet fully reflected in valuation, offering room for sustained profit upgrades and multiple expansion.

Networking: The throughput layer

The speed of AI clusters depends on the slowest link.

A single network bottleneck can halt an entire cluster of thousands of GPUs, wasting hundreds of millions of dollars of capital. As cluster sizes expand toward 100k GPUs, interconnect issues grow exponentially. One bottleneck can bring everything to a halt.

Targets: $ANET $ALAB

Quiet, critical, under-allocated. No one talks about networking until it fails.

Arista Networks ($ANET) builds high-performance network infrastructure, enabling seamless data flow in large-scale AI clusters. When workloads demand ultra-low latency and high throughput, Arista’s software-defined networking becomes essential for maintaining cluster efficiency. Downtime or inefficiency costs are high; Arista captures value by ensuring full-speed operation.

Astera Labs ($ALAB) operates within data pathways, ensuring high-speed connections between GPUs, CPUs, and memory in AI systems. As cluster density increases, bottlenecks shift from network edges to chip-to-chip communication, which is Astera’s domain. In high-performance AI environments, slow component communication slows down the entire system.

Manufacturing: Long-cycle constraints

Without chip manufacturing capacity, AI cannot scale. Without manufacturing tools, advanced chips cannot be made.

ASML’s EUV lithography machines take over a year to build, cost over $200 million each, and have no credible substitute. Every advanced chip, from NVIDIA’s H100 to Apple’s M-series, requires ASML’s equipment. Lam Research’s etching and deposition tools are embedded in every major wafer fab worldwide.

Targets: $ASML $LRCX

Long-cycle constraints, structurally more resilient than any software moat, and still discussed far less than they deserve.

ASML Holding ($ASML) is the sole supplier of EUV lithography systems—the most advanced chip manufacturing tools available, a prerequisite for producing cutting-edge semiconductors. With years of backlog and no real competition, ASML controls a critical choke point in the global chip supply chain.

Lam Research ($LRCX) supplies etching and deposition equipment that forms the backbone of semiconductor manufacturing. Its tools are embedded in all major wafer fabs, making it an indispensable partner in capacity expansion. As AI demand drives continuous capacity growth, Lam secures long-term revenue directly tied to global semiconductor manufacturing expansion.

Misclassification: The source of alpha

This is what most investors overlook and the most asymmetric opportunity on the map.

Some companies are priced as A, but their operations and financials are already B.

Take $CIFR (Cipher Mining) and $IREN (IREN Limited).

The market still sees them as Bitcoin miners.

But what they are becoming is far more valuable: AI power infrastructure and HPC data center platforms.

These companies secured low-cost power and built infrastructure before demand materialized. Today, hyperscalers are rushing to acquire exactly these assets.

Cipher Mining has begun this transformation, signing 15-year leases with top-tier hyperscale tenants for its third AI/HPC data center campus and securing a $200 million revolving credit facility from major global banks. These are not speculative moves; they are long-term revenue commitments.

IREN is executing similar strategies across multiple sites, combining energy procurement with scalable data center construction. Its advantage is speed: it has already secured land, power, and infrastructure needed for AI workloads.

The market still sees them as miners. Their balance sheets look more like infrastructure companies.

This gap will close. When it does, it won’t be slow.

Portfolio overview

This is not just a collection of stocks; it’s a system.

Each position corresponds to a specific constraint in the AI stack, and each must be addressed for the system to operate. That’s discipline.

  • Power: $CEG $GEV $VST $WMB
  • Power grid: $PWR $ETN
  • Cooling: $VRT
  • Memory: $MU
  • Networking: $ANET $ALAB
  • Manufacturing: $ASML $LRCX
  • Misclassification: $CIFR $IREN

Most investors have yet to complete this cognitive shift

We are shifting from compute scarcity to infrastructure scarcity.

This means:

  • GPUs are no longer the only narrative
  • Power, power grids, memory, and cooling are now the main profit drivers
  • Returns follow constraints, not hype

Most portfolios are still stuck in the old paradigm.

Discipline is equally important

This framework can fail under certain conditions. It’s worth being honest about them.

Hyperscaler capex slowdown. If Amazon, Google, and Meta slow infrastructure spending due to margin pressure or weaker-than-expected demand, the rigid-demand assumption weakens. Quarterly capex guidance is the leading indicator to watch.

Rapid resolution of bottlenecks. Government intervention in transformer manufacturing, accelerated nuclear approvals, or restructuring grid interconnection queues could compress the premium on constrained infrastructure. These changes are slow but real.

Regulatory friction. Power and grid infrastructure intersect with utility regulation, environmental reviews, and rate-setting agencies. When regulation turns unfavorable, it can structurally and persistently limit return potential.

The key difference: this is not a product cycle bet. Product cycles can reverse in a quarter. Industrial constraints take years to build and years to resolve. This asymmetry is the core point.

In conclusion

In every industrial era, wealth is not created by the companies building the trains.

It’s created by those who owned the rails, the coal, and the rights-of-way.

The rails of AI are measured in megawatts, transformer lead times, and rack cooling capacity.

Most investors chase AI. The real opportunity lies in owning what AI cannot do without.

In every system, headlines follow innovation; profits follow constraints. We focus on constraints, not narratives, and the portfolio is currently up about 60%. As AI infrastructure accelerates, this trade is not ending; it is still early. We believe we are only in the third inning.
