Silicon Curtain (ForkLog)
Graphics cards, RAM, SSD — what’s next?
The era of digital abundance, when any enthusiast could build a home server capable of competing with the power of a small company, is coming to an end. Owning your own advanced hardware is increasingly becoming an elite privilege amid rising memory chip prices and longer pre-order queues.
In this new ForkLog article, we explore why graphics cards have become a resource for the AI industry, why Nvidia no longer favors gamers, and why freelance designers now rent computing power from cloud data centers. But the main question we set out to answer is: how will the chip shortage affect the decentralization of blockchains, where SSDs and DRAM often play a key role?
Techno-Feudalism or Temporary Difficulties
Recently, based on statements from AI industry leaders and memory chip manufacturers, it seems that the era of owning a powerful personal computer (PC) is gradually ending.
There is active discussion of Jeff Bezos's 2024 speech, in which he compared running a PC to running your own electric generator in the era of centralized power grids. Some in the community now see him as a prophet.
The latest hardware is becoming the primary computational resource for training and serving large language models (LLMs). AI demand is pulling chipmakers' capacity toward HBM memory, leaving less fab output for the DRAM and NAND that go into consumer RAM and SSDs. As component prices rise, the market may lose an entire class of budget devices this year.
In early February, TrendForce analysts raised their forecast for chip prices. They expect a 90–95% jump in contracts for consumer DRAM in Q1 2026 due to the AI boom. The previous forecast was 55–60%.
Additionally, training LLMs requires enormous amounts of data, so the corporate sector has been buying up high-endurance SSDs of 2 TB and above. Chip manufacturers, for whom serving the AI industry is more profitable, plan to reorganize their production capacity accordingly.
At the end of 2025, Micron Technology — previously one of the most active supporters of maintaining the desktop segment — announced the shutdown of its Crucial consumer line. Production will cease in Q2 2026 after nearly 30 years of the brand’s existence.
Micron also plans to increase production of HBM microchips. The company invested $9.6 billion in new facilities in Hiroshima, Japan.
On February 12, Samsung Electronics announced the start of HBM4 shipments to unnamed clients, a move aimed at closing the gap with competitors such as SK Hynix in critical AI-accelerator components.
Samsung, the world's largest memory-chip manufacturer, is in a difficult position: it is both the main memory supplier for Nvidia and a leader in smartphones and consumer electronics. The company needs to keep its high-margin AI contracts without weakening its position in device manufacturing.
Last September, Samsung Semiconductor's management moved to balance the situation, confirming that its production lines for GDDR7, the memory used in top-tier graphics cards, can continue to serve gamers, content creators, and professional workstations.
These chips are used in Nvidia's flagship gaming card, the GeForce RTX 5090. Announced in January 2025 at $1,999, it remains the undisputed leader, but that launch price is now far from reality: at the time of writing, offers range between $4,000 and $5,000.
By 2027, new factories are slated to open in Shanghai and Wuhan, focused primarily on DRAM and NAND rather than the HBM that market leaders are chasing.
Alex Petrov, former CIO/CTO of Bitfury Group and co-founder of Hyperfusion, believes there is no point hoping prices will fall; instead, costs should be redistributed.
Why Graphics Cards?
Why did graphics cards, which powered Quake III Arena in 2000 and Fallout 4 in 2015, get commandeered first by PoW mining and then absorbed by the AI industry? The answer lies in the architecture of graphics accelerators, best explained by comparison with a CPU.
A CPU is a genius capable of solving any type of software task: writing poetry, calculating taxes, managing an operating system. But it performs actions sequentially on each core.
In contrast, a GPU is like a factory with thousands of simple workers. Each is less intelligent than a genius, but they can operate simultaneously.
To render a frame in a game, the colors of millions of pixels must be calculated: millions of identical, independent mathematical operations, repeated dozens of times per second. The graphics chip was born for exactly this kind of parallel computation.
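The difference is easy to see in code. Computing a frame is the same arithmetic repeated for every pixel, so it maps naturally onto hardware that does many operations at once. A minimal sketch with a made-up shading formula, where NumPy's vectorized operations stand in for the GPU's thousands of parallel workers:

```python
import numpy as np

# A scaled-down frame (a real 1080p frame has ~2 million pixels).
height, width = 270, 480
xs = np.linspace(0.0, 1.0, width)
ys = np.linspace(0.0, 1.0, height)

def brightness_loop():
    """CPU-style: one pixel at a time, sequentially."""
    frame = np.empty((height, width))
    for i in range(height):
        for j in range(width):
            frame[i, j] = 0.5 * xs[j] + 0.5 * ys[i]  # made-up shading formula
    return frame

def brightness_parallel():
    """GPU-style: the same formula applied to every pixel at once."""
    return 0.5 * xs[np.newaxis, :] + 0.5 * ys[:, np.newaxis]

# Identical results; the second form is what maps onto thousands of GPU cores.
assert np.allclose(brightness_loop(), brightness_parallel())
```

Both functions compute the same frame; the point is that no pixel depends on any other, so all of them can be computed simultaneously.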
A similar situation occurred with PoW mining on graphics cards. Mining is a kind of lottery in which the device tries billions of candidate numbers (nonces) per second, hoping to find a hash below the network's target. GPUs were perfect for this, fueling the first wave of shortages until Ethereum switched to PoS in 2022.
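The lottery can be sketched in a few lines of Python. This is a toy with a tiny difficulty; real miners run the same double-SHA256 loop billions of times per second:

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int, max_tries: int = 2_000_000):
    """Try nonces until the double-SHA256 hash has `difficulty_bits`
    leading zero bits -- i.e., falls below the target."""
    target = 1 << (256 - difficulty_bits)  # hash must be below this number
    for nonce in range(max_tries):
        data = block_header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None  # no winning ticket within max_tries

result = mine(b"toy-block-header", difficulty_bits=16)
print(result)
```

Each extra difficulty bit doubles the expected number of tries, which is why the hardware race never stops: whoever can run this loop fastest wins the lottery most often.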
Graphics processors became a real find for AI needs. Modern LLMs like ChatGPT or Gemini are essentially giant tables of numbers (matrices). Their training involves endless matrix multiplications to adjust “weights” (connections between neurons).
It turns out that the math creating water reflections in Cyberpunk 2077 is the same linear algebra underlying neural network training. But AI requires not only powerful computations but also colossal data transfer speeds. Ordinary gaming VRAM isn’t enough — it has been replaced by expensive, scarce HBM, which all tech giants are now fighting over.
Nvidia recognized this trend early and, starting with the Volta architecture, began adding tensor cores to its chips: units that perform matrix multiply-accumulate operations in hardware, optimized specifically for AI workloads.
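The "giant tables of numbers" picture can be made concrete with a toy training loop: a single linear layer, where both the forward and backward passes are nothing but matrix multiplications nudging a weight matrix toward the data. This is a sketch of the principle, not how a real LLM is trained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Targets generated by a hidden weight matrix the training must recover.
X = rng.normal(size=(64, 8))   # batch of 64 examples, 8 features each
true_W = rng.normal(size=(8, 4))
Y = X @ true_W

W = np.zeros((8, 4))           # start from scratch
lr = 0.1
for _ in range(1000):
    pred = X @ W                       # forward pass: a matrix multiplication
    grad = X.T @ (pred - Y) / len(X)   # backward pass: more matrix multiplications
    W -= lr * grad                     # nudge the weights

# W converges toward true_W; scale this up by many orders of magnitude
# and you get the workload that tensor cores and HBM exist to serve.
```

An LLM does the same thing with billions of weights instead of 32, which is why both raw matrix-multiply throughput and memory bandwidth become the bottleneck.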
GPU for an Hour and Offline Loss
Over the next two years, content creators, video editors, designers, gamers, programmers, AI architects, and everyone else who depends on powerful hardware will face a choice: rent computing power online, or pay significantly more to upgrade their PCs.
Given the shortage and queues for components, demand for subscriptions is growing, making cloud data centers more customer-oriented. Several companies offer flexible access to computing resources and GPUs for rent, such as Lambda Labs, Vast.ai, Hyperfusion, LeaderGPU, Hostkey, and others.
RunPod offers access to the scarce flagship RTX 5090 at $0.89/hour.
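Using the article's own figures, a quick break-even calculation shows when buying beats renting (electricity, resale value, and future price changes are ignored for simplicity):

```python
# Back-of-envelope: when does buying an RTX 5090 beat renting one?
# Figures from the article: ~$0.89/hour to rent, ~$4,000-5,000 to buy.
rent_per_hour = 0.89
card_price = 4500.0  # midpoint of the quoted street prices

break_even_hours = card_price / rent_per_hour
print(round(break_even_hours))  # ~5,056 hours of GPU time

# At 4 hours of heavy GPU use per day:
hours_per_day = 4
years_to_break_even = break_even_hours / hours_per_day / 365
print(round(years_to_break_even, 1))  # ~3.5 years
```

For occasional heavy workloads, renting wins comfortably; only sustained daily use over several years justifies owning the card at current prices.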
Petrov noted that data centers guarantee 24/7 availability, backup power, connection redundancy, and proper maintenance.
He also mentioned that many designers, video editors, producers, and artists are already being displaced by AI. At a certain level, they need to turn to specialized AI applications that “home hardware” cannot handle.
Bitcoin Back in Front
The entire IT sector depends on components, but for the blockchain industry, the microchip shortage poses a real threat to decentralization and power redistribution.
Petrov pointed out the paradox of the current situation, in which PoS networks are struggling:
In networks like Ethereum and Solana, the principle is "easy to create, but very costly to verify." With so many nodes, and with proofs taking seven to nine steps, a PoS validator is often cheap to deploy but expensive to operate.
Nodes must process every block. In high-frequency networks (Solana's 400 ms block time, versus Ethereum's 12 s), verifying signatures and executing transactions demands enormous resources. Full archival nodes have even higher requirements: an Ethereum archival node needs 128 GB of RAM and at least 12 TB of SSD storage.
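The raw block counts behind that workload are easy to estimate from the block times quoted above:

```python
# Blocks a full node must verify per day, given the article's block times.
MS_PER_DAY = 86_400_000  # milliseconds in a day

solana_blocks_per_day = MS_PER_DAY // 400       # 400 ms blocks
ethereum_blocks_per_day = MS_PER_DAY // 12_000  # 12 s blocks

print(solana_blocks_per_day)    # 216,000 blocks/day
print(ethereum_blocks_per_day)  # 7,200 blocks/day
```

A Solana node verifies roughly thirty times more blocks per day than an Ethereum node, which is why its hardware requirements (and sensitivity to component prices) are so much higher.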
The rising costs of components reduce validator profitability and create a new centralization risk. In January, the number of active Solana nodes dropped to 800 — the lowest since 2021. As support for small node operators diminishes, covering voting and infrastructure costs becomes harder without sufficient delegated stake.
At the time of writing, Solana's Nakamoto coefficient has fallen to 19 (it was 33 in 2023).
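The Nakamoto coefficient itself is simple to compute: sort validators by stake and count how many of the largest are needed to exceed the consensus-halting threshold (one third of total stake in BFT-style PoS). A minimal sketch, with a made-up stake distribution:

```python
def nakamoto_coefficient(stakes, threshold=1 / 3):
    """Smallest number of validators whose combined stake exceeds
    `threshold` of the total -- enough to halt consensus."""
    total = sum(stakes)
    count, acc = 0, 0.0
    for stake in sorted(stakes, reverse=True):
        acc += stake
        count += 1
        if acc > total * threshold:
            return count
    return count

# Toy stake distribution (invented for illustration), summing to 100:
stakes = [15, 12, 10, 9, 8, 8, 7, 7, 6, 6, 5, 4, 3]
print(nakamoto_coefficient(stakes))  # 3
```

A falling coefficient means stake is concentrating: fewer and fewer operators together control enough of the network to stop it, which is exactly the centralization risk rising hardware costs aggravate.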
The Ethereum Foundation is already discussing initiatives to lower the infrastructure barrier. In May 2025, Vitalik Buterin proposed EIP-4444, which could significantly reduce disk-space requirements: nodes would store transaction history only for the last 36 days while maintaining the current state and Merkle tree structure. This approach reduces storage needs without compromising verification of the current chain state.
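To get a feel for the savings, here is a back-of-envelope estimate using the 36-day window quoted above. The average block size is an assumed round number for illustration, not a measured figure:

```python
# History kept under a 36-day retention window versus the ~12 TB
# an archival node needs today (per the article).
BLOCK_TIME_S = 12
DAYS_KEPT = 36
AVG_BLOCK_KB = 150  # assumed round number, for illustration only

blocks_kept = DAYS_KEPT * 24 * 3600 // BLOCK_TIME_S
history_gb = blocks_kept * AVG_BLOCK_KB / 1_000_000

print(blocks_kept)          # 259,200 blocks retained
print(round(history_gb, 1)) # ~38.9 GB of history
```

Even with a generous block-size assumption, a 36-day window keeps history in the tens of gigabytes, a scale an ordinary consumer SSD handles easily.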
In the new “silicon curtain” reality, Bitcoin remains the “people’s blockchain.”
A full Bitcoin node can run on a lightweight server or desktop, and sometimes even on a Raspberry Pi with 4–8 GB of RAM. The impact of the memory shortage on PoW nodes is minimal: SSD prices are rising, but drives up to 1 TB remain affordable, the expert added.
What’s Next?
Petrov believes the era of personal hardware isn't over; there are simply different approaches and solutions for specific tasks.
The industry is racing to ease the chip crisis by developing new technologies.
The current crisis isn't the first in history, but it is the most structural; memory chips have weathered similar shocks before.
The exponential growth of AI makes it impossible to predict market behavior accurately. New production capacity launching by 2028 could ease the crisis, provided demand growth stays at current rates.
If AI agents become the backbone of the economy, demand for chips will outpace production. In such a scenario, owning a powerful PC will become as elitist as owning a collectible horse. Whatever the future holds, timely thermal paste replacement remains essential.