Source: CryptoNewsNet
Original Title: Render Network director says DePIN could ease AI bottlenecks
Original Link:
As artificial intelligence (AI) grows more powerful, the infrastructure required to run it will reach its limits, and those limits could open the door for decentralized physical infrastructure networks (DePINs), said Trevor Harries-Jones, director at the Render Network Foundation.
Harries-Jones said decentralized GPU networks are not aiming to replace traditional data centers, but rather to complement them by solving some of AI’s most pressing scaling challenges.
DePIN isn’t about replacing centralized infrastructures
In simple terms, DePIN lets people around the world contribute real-world network infrastructure in return for rewards, removing dependence on, and control by, any single centralized company.
One such project is the Render Network, a decentralized GPU rendering platform designed to democratize digital creation and reduce creators' reliance on centralized providers.
Recent examples from the centralized AI world include video generation apps where usage had to be capped due to GPU constraints.
Harries-Jones pushed back on the idea of an outright replacement:
“I don’t think it’s a question of replacing. I actually think it’s a question of utilization of both.”
Centralized GPU clusters remain critical for training large AI models, which benefit from massive memory pools and tightly integrated hardware. But training is only a fraction of the total computational workload in AI.
Harries-Jones explained that inference, the day-to-day running of AI models, accounts for almost 80% of GPU work.
That distinction is where decentralized networks like Render come into play. While early versions of AI models are resource-heavy, Harries-Jones said they quickly become more efficient as engineers optimize and compress them.
Over time, models that once required massive infrastructure can run on far simpler devices like smartphones.
“So we tend to see this on all models that come out. They start being really heavy and unrefined, and over a very short period, they get refined so that they can run on decentralized, simple devices.”
From a cost perspective, that shift makes decentralized GPU networks increasingly attractive. Instead of relying solely on expensive, high-end data centers, inference workloads can be distributed across idle GPUs around the world.
“It’s going to be cheaper to run them on decentralized idle consumer nodes than on centralized nodes.”
Harries-Jones is bullish on the DePIN sector
Harries-Jones framed DePINs as a way to relieve growing AI bottlenecks across both compute and energy infrastructure.
When centralized power systems face strain, decentralized compute offers a parallel solution by tapping underutilized resources globally.
“So I’m very bullish on the sector as a whole.”
Harries-Jones underlined that global GPU demand far outstrips supply. “There aren’t enough GPUs in the world today,” he said.
The key, then, is to put idle GPUs to work rather than compete for undersupplied high-end chips.
According to Harries-Jones, the future of AI infrastructure isn't centralized networks or DePIN alone. Instead, it's the flexible use of both to meet explosive AI demand.