Enterprise AI shifts from "buying more GPUs" to "optimal configurations that reduce inference costs"... AMD and Red Hat's solutions are gaining attention
Enterprises are crossing a new watershed in adopting artificial intelligence. Market focus is no longer limited to whether to invest in AI, but has shifted to how to deploy the right semiconductors and infrastructure for different businesses to maximize cost efficiency. With agentic AI workloads multiplying rapidly and inference costs rising, the core issue for large enterprises is no longer blindly buying the highest-performance equipment, but selecting the right computing resources for each goal, in other words, making deliberate choices.
Against this backdrop, AMD’s collaboration with Red Hat has once again drawn attention. John Hampton, Vice President of Global Enterprise Technology Sales at AMD, said at the “Red Hat Summit 2026” in Boston that enterprises want more flexible AI infrastructure across their hybrid environments. He noted that many customers recently rushed to build large-scale GPU clusters to meet AI demand, only to face cost pressures far beyond expectations once those clusters were in operation.
AI inference costs are rising sharply… Enterprises re-evaluate single large GPU strategies
According to Hampton, many companies initially focused on large-scale GPU procurement to stay ahead in the early AI race. The problem is that as service scale expands, the cost incurred by each AI query keeps accumulating, rapidly increasing budget pressure. This dynamic is referred to as “token economics”: as generative AI usage grows, token-processing costs grow with it, directly impacting corporate profitability.
He stated, “Companies initially purchased large GPU clusters en masse for AI, but they now find the costs unsustainable. AI applications are growing, yet the rapid cost inflation has become a serious concern.” In effect, the core of corporate AI strategy is shifting from “securing top-performance devices” to “task-optimized deployment.”
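The arithmetic behind "token economics" can be sketched in a few lines. The prices and query volumes below are hypothetical placeholders, not figures from AMD, Red Hat, or the article; the point is only that with per-token pricing, inference spend scales linearly with usage, so a service that grows 100x sees its bill grow 100x:

```python
# Illustrative sketch of token economics: per-token-priced inference cost
# rising with usage. All numbers are hypothetical placeholders.

def monthly_inference_cost(queries_per_day, tokens_per_query, price_per_1k_tokens):
    """Estimated monthly spend for a token-priced inference service."""
    daily_tokens = queries_per_day * tokens_per_query
    return daily_tokens / 1000 * price_per_1k_tokens * 30  # ~30 days/month

# A service growing from 10K to 1M queries/day at ~1,500 tokens per query,
# at an assumed $0.002 per 1K tokens:
for q in (10_000, 100_000, 1_000_000):
    cost = monthly_inference_cost(q, tokens_per_query=1_500, price_per_1k_tokens=0.002)
    print(f"{q:>9,} queries/day -> ${cost:,.0f}/month")
```

Under these assumptions the bill climbs from roughly $900/month at 10K queries/day to roughly $90,000/month at 1M queries/day, which is the budget pressure Hampton describes: cost tracks usage, not the one-time hardware purchase.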
AMD and Red Hat: Providing a “full spectrum” solution from CPUs to GPUs
To address this trend, AMD has launched a “full spectrum” product lineup covering CPUs, cost-effective GPUs, and high-performance accelerators. Its strategy is to pair this hardware with Red Hat’s open-source software stack, enabling enterprises to run AI workloads flexibly in hybrid cloud environments without locking into a specific vendor.
For example, the AMD Instinct MI350P is positioned as a PCIe-based GPU that can be integrated into existing servers relatively easily, with an air-cooled design to improve cost efficiency. Red Hat AI serves as an enterprise-grade platform for deploying and scaling AI agents on such hardware. In addition, combining AMD EPYC CPUs with Red Hat virtualization tools enables server consolidation, helping reduce data center space and power consumption.
The core is “open architecture”… Promoting AI budget control and infrastructure modernization simultaneously
The key message here is “openness” and “choice.” AMD and Red Hat argue that enterprises should adopt open architectures rather than closed ecosystems, so they can match each AI workload to the most suitable resource, whether a CPU, a low-power GPU, or a high-performance accelerator. Not every inference task needs to run on the most expensive equipment.
The benefits of this approach go beyond cost reduction. It lets enterprises make full use of existing infrastructure without slowing AI adoption, and the savings in budget and power can be reinvested into new AI projects. In practice, this means AI infrastructure modernization and budget control can be pursued at the same time.
Hampton predicts that the AI market’s evaluation standard is likely to shift from “what was bought” to “how it is deployed.” As enterprise AI competition enters the operational stage, some analysts believe future success will depend less on showcasing raw performance and more on the ability to balance total cost of ownership against actual results.
TP AI Notes: This article was summarized using TokenPost.ai’s language model. Key information may be omitted or may differ from the facts.