OpenAI is aggressively investing $4 billion to compete for FDEs; the endgame for software engineers turns out to be on-site work.
On May 11 local time, OpenAI announced the establishment of a new company called OpenAI Deployment Company, backed by an initial investment of over $4 billion and focused on helping enterprises build and deploy AI systems.
OpenAI invests $4 billion to establish Deployment Company
OpenAI Deployment Company is a partnership jointly established by OpenAI and 19 leading global investment firms, consulting companies, and system integrators. It is led by TPG, with Advent, Bain Capital, and Brookfield as co-founding partners, and B Capital, BBVA, Emergence Capital, Goanna, Goldman Sachs, SoftBank Group, WPP Investments, and WCAS as founding partners.
To rapidly scale up the team, OpenAI Deployment Company also acquired the AI consulting firm Tomoro, bringing in roughly 150 experienced on-site deployment engineers and experts who are ready to serve clients from day one. These engineers will embed at client sites, working closely with teams across the organization to identify the most valuable AI application scenarios and drive actual deployment.
This marks a major strategic shift for OpenAI’s enterprise business.
Over the past two years, OpenAI relied mainly on ChatGPT Enterprise, its APIs, and raw model capability to open up the market; now it clearly recognizes that owning the most powerful models alone is no longer enough to win the enterprise market. The real factor determining how fast AI gets commercialized is not model parameters but “deployment capability.”
And this is precisely where Tomoro’s value lies.
So, who is Tomoro?
Founded in 2023, Tomoro has been branded from the start as an “OpenAI ecosystem company.”
It is primarily a consulting firm focused on enterprise AI deployment and engineering services. Its core business is not developing foundation models but helping enterprises embed OpenAI’s models into their operations, solving complex issues ranging from data access, system governance, and permissions to production-level workflow design.
According to its publicly disclosed client list, Tomoro has served large international companies including Mattel, Red Bull, Tesco, Virgin Atlantic, and Supercell.
These clients share a common trait: they are not “tech innovation companies.”
In other words, Tomoro’s expertise lies not in training models in an AI lab, but in taking AI from proof of concept to production within the complex, real-world business environments of traditional enterprises.
Interestingly, Tomoro also espouses an appealing workplace philosophy, which it highlights on its homepage: building toward a three-day workweek.
As for the founding team, Tomoro’s core members come mainly from enterprise digital consulting, cloud infrastructure, and AI application engineering, in short a hybrid team that understands both models and enterprise system transformation.
Tomoro’s website shows it is hiring on-site engineers in Australia, Singapore, the UK, and other locations.
Why is OpenAI suddenly investing heavily in deployment?
The logic behind this is simple: enterprise clients have never bought models; they buy results.
OpenAI Chief Revenue Officer Denise Dresser said: “AI is becoming capable of handling more and more meaningful work within organizations. The challenge now is helping enterprises integrate these systems into the infrastructure and workflows that support their business. OpenAI Deployment Company aims to bridge this gap and turn AI capability into real operational impact.”
Denise Dresser and her team have realized that on-site deployment is the most critical enterprise AI capability they need to strengthen now.
Although ChatGPT has achieved huge success on the consumer side, in the enterprise market Anthropic has risen rapidly over the past year with its Claude series, establishing a strong presence among developers and corporate clients. Earlier this year, OpenAI even publicly acknowledged that Anthropic’s growth has put significant pressure on the company.
According to Reuters, during an internal all-hands meeting, OpenAI’s application business head Fidji Simo explicitly told employees:
Anthropic’s rise should serve as a “wake-up call” for OpenAI.
She emphasized that the company must focus resources on improving enterprise productivity and not be slowed down by overly dispersed product lines.
To some extent, OpenAI Deployment Company is a strategic defensive response.
Of course, Anthropic is not idle either.
Last week, Anthropic announced the formation of a joint venture focused on deploying enterprise AI services, with Blackstone, Hellman & Friedman, and Goldman Sachs as founding partners.
The joint venture is valued at $1.5 billion, with Anthropic, Blackstone, and Hellman & Friedman investing a total of $300 million. Other investors include Apollo Global Management, General Atlantic, Singapore’s GIC, Leonard Green, and Suko Capital.
This signals that merger-and-acquisition activity around “enterprise AI application capability” has officially begun.
If in the past AI competition was about training stronger models, now the race is shifting to: who can most quickly embed models into real enterprise workflows.
Why has demand for traditional engineers plummeted while deployment engineers have become highly sought after?
This competitive shift is already reflected in the job market.
When embedding models into real enterprise workflows becomes the key to winning, traditional software engineering roles far from the business front line, focused only on coding and feature work, naturally shrink, while deployment engineers who can embed at client sites, connect systems, and drive rollouts are in high demand.
Consider these stark data points: in Q1 2025, traditional software engineering roles decreased by about 70%, while demand for forward deployed engineers (FDEs) surged by roughly 800% to 1,000%. This sharp divergence clearly reveals a fundamental shift in the industry.
Why is demand for traditional engineers shrinking, while deployment engineers are becoming “hot commodities”?
The answer: today, 60-70% of a project’s success depends on “application deployment,” not just engineering or coding skill. The ability to innovate alongside clients, adapt, lead, and iterate rapidly has become crucial. The bottleneck has shifted from “technical ability” to “application success.” Helping clients reorganize their workflows and systems to meet future needs is now the top priority.
However, enterprises find it difficult to handle AI deployment on their own. Talent that truly understands AI is scarce, and understanding AI alone isn’t enough: these people also need to grasp system architecture and how the company operates as a whole.
Only by combining these skills can a project succeed, and the missing deployment skills and know-how sit precisely with forward deployed engineers.
Even with off-the-shelf solutions available, clients often require extensive adjustments and fine-tuning. Without FDEs working alongside clients, innovating together, and deeply understanding their products and architecture, projects are unlikely to succeed. Practice shows that AI projects with FDE involvement have significantly higher ROI and success rates.
Why doesn’t the traditional delivery model work anymore?
The standard software sales process is: develop product → hand over to sales → promote to clients → clients try to install (possibly with help from customer success teams) → clients troubleshoot on their own. This model overlooks a critical step: the client’s real environment is always “unique and complex.”
But we all know the truth: companionship is the longest-lasting expression of care.
The FDE model isn’t about a model company simply delivering a product and walking away; it’s about placing its best engineers directly inside the client’s organization. These engineers don’t just hand over documentation the way account managers do: they deliver real code, build custom integrations, and configure systems to operate in the client’s specific environment. This is “forward deployment”: your engineers are now working inside the client’s company.
This approach is effective because of a simple insight: FDE talent is proficient in the workings of software or models, while client engineers (e.g., at JPMorgan) are deeply familiar with their own data structures, compliance requirements, internal politics, and specific problems they want to solve. Neither side can succeed alone. The FDE model forces these two knowledge systems to collide and fuse in the same space until effective solutions emerge.
This method is especially suitable for clients facing “special and complex” problems: hospitals, banks, defense agencies, large financial institutions. They cannot be served by off-the-shelf SaaS products—they have legacy systems, regulatory restrictions, and internal workflows that weren’t designed with AI in mind.
It can also be explained this way: the decline in demand for traditional engineers isn’t because technology is less important, but because the definition of “engineer” is being reshaped. Those who can go deep into client sites, understand business, iterate quickly, and co-innovate are becoming the hottest talent in the AI era.
In this context, what specific skills must deployment engineers possess?
In a podcast, OpenAI platform engineering head Sherwin Wu and product head Olivier Godement discussed the core capabilities required for FDEs.
In highly customized and high-security deployment scenarios, such as physically isolated environments at national labs, deployment engineers demonstrate a range of critical skills.
On one hand, they have solid practical deployment abilities at the physical and low-level architecture layer: not just API calls, but actual installation and operation of models on client-specific hardware architectures and networks, even under extremely strict security restrictions—such as no electronic devices allowed, fully air-gapped environments—using physical media to transfer model weights into supercomputers.
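To make the “physical media” step concrete, here is a minimal Python sketch, under assumed conventions rather than OpenAI’s actual tooling, of verifying model weight shards against a pre-recorded checksum manifest after they have been copied into an air-gapped cluster. The paths, file names, and manifest format are hypothetical.

```python
# Hypothetical sketch: verify model weight shards copied from physical media
# into an air-gapped cluster before loading them. Not OpenAI's actual tooling;
# paths, file names, and the manifest format are invented for illustration.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so multi-gigabyte shards never sit fully in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_weights(weights_dir: Path, manifest_path: Path) -> bool:
    """Compare every shard against checksums recorded before the physical transfer."""
    manifest = json.loads(manifest_path.read_text())  # e.g. {"model-00001.safetensors": "<sha256>", ...}
    ok = True
    for name, expected in manifest.items():
        actual = sha256_of(weights_dir / name)
        if actual != expected:
            print(f"MISMATCH {name}: expected {expected[:12]}..., got {actual[:12]}...")
            ok = False
    return ok

if __name__ == "__main__":
    if verify_weights(Path("/mnt/transfer/model"), Path("/mnt/transfer/manifest.json")):
        print("All shards verified; safe to load onto the isolated cluster.")
```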
On the other hand, they possess deep customization and engineering capabilities: they work closely with development teams on hands-on customization and environment adaptation for specific supercomputers such as Venado, and they bring agentic engineering skills, handling orchestration, memory management, and task handoff so that models run stably and efficiently even in highly restricted environments.
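To make the orchestration and task-handoff pattern concrete, the following is a minimal, hypothetical Python sketch of two “agents” passing work through a shared memory store. In a real deployment each agent would call a locally hosted model; the agent names and logic here are purely illustrative.

```python
# Hypothetical sketch of agent orchestration with task handoff and shared memory.
# The "agents" are plain functions; a real system would call a locally hosted model.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Task:
    description: str
    result: Optional[str] = None

@dataclass
class Agent:
    name: str
    handle: Callable[[Task, dict], str]  # takes the task plus shared memory, returns its output

def run_pipeline(agents: list[Agent], task: Task) -> Task:
    memory: dict[str, str] = {}           # shared memory that persists across handoffs
    for agent in agents:
        task.result = agent.handle(task, memory)
        memory[agent.name] = task.result  # record each agent's output for the next agent
    return task

# Two toy agents: one drafts an answer, the next reviews the draft it finds in memory.
drafter = Agent("drafter", lambda t, m: f"DRAFT for: {t.description}")
reviewer = Agent("reviewer", lambda t, m: f"REVIEWED({m['drafter']})")

print(run_pipeline([drafter, reviewer], Task("summarize the maintenance SOP")).result)
```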
In addition, the OpenAI team mentioned that successful deployment relies on organizational factors beyond individual engineers’ skills:
“Tiger Team” model:
Deployment requires not only technical experts but also those with “Institutional Knowledge.”
Capability composition: a lean team of technical staff, subject-matter experts, and insiders familiar with internal processes, because most critical organizational knowledge (such as SOPs) lives in veteran employees’ heads, not in documentation.
Bottom-up evaluation system (Evals First):
Clear goal setting: before anything else, define what “good” looks like.
Frontline-driven: evaluation standards cannot be dictated solely from above; they must be co-created with the actual operators who understand real scenarios and pain points (a minimal code sketch of such a harness appears below).
Role transformation: from “tools” to “thinking partners”:
FDEs and deployment engineers have shifted from simple “software installers” to a combination of “full-stack technical PR + engineering architect + industry expert.” They must not only get the models running but also work out how AI can truly penetrate core business areas (including undocumented, implicit processes) under extreme physical constraints.
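As a rough illustration of the “evals first” idea above, here is a hypothetical Python sketch in which each evaluation case is owned by a frontline operator and every model change is scored against the full set before it ships. The cases, criteria, and toy model are invented for illustration and do not reflect any vendor’s actual harness.

```python
# Hypothetical "evals first" harness: operator-owned cases define what "good" means,
# and every model change is scored against them before rollout.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    passes: Callable[[str], bool]   # criterion written with the operator who owns the workflow
    owner: str                      # the frontline operator who defined "what good looks like"

def run_evals(model: Callable[[str], str], cases: list[EvalCase]) -> float:
    passed = 0
    for case in cases:
        ok = case.passes(model(case.prompt))
        passed += ok
        print(f"[{'PASS' if ok else 'FAIL'}] owner={case.owner} prompt={case.prompt!r}")
    return passed / len(cases)

# Toy stand-in for the deployed model and two operator-defined cases.
def toy_model(prompt: str) -> str:
    return "escalate to shift supervisor" if "outage" in prompt else "file a standard ticket"

cases = [
    EvalCase("customer reports an outage", lambda out: "escalate" in out, owner="support lead"),
    EvalCase("customer asks for an invoice copy", lambda out: "ticket" in out, owner="billing ops"),
]

print(f"pass rate: {run_evals(toy_model, cases):.0%}")
```

The design point is the bottom-up direction: the pass criteria live with the people who run the workflow, and the harness only aggregates their judgments.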
Model vendors enter the second half, competing on customer stickiness
This is very similar to the path Palantir has successfully followed over the past decade.
Palantir’s core strength has never been selling software licenses but deploying engineers onsite to deeply understand business processes and embed technology into organizations.
OpenAI and Anthropic are now clearly copying this model because it reveals a truth: forward deployment offers far greater stickiness than most SaaS.
When a company installs a CRM system it can, in theory, migrate to a competitor, however painful that is; but when a forward deployment team spends six months building a custom AI system that deeply integrates internal data, workflows, and compliance architecture, that system becomes foundational infrastructure supporting business operations. It is very hard to rip out, and the company tends to rely on the original team for maintenance, updates, and optimization.
This strategic logic makes the FDE approach highly attractive to Anthropic and OpenAI: competition in the enterprise AI market isn’t just about selling tokens but about becoming an infrastructure layer that large organizations find hard to dislodge, and FDE is the key pathway to getting there.
Timing is also crucial. Capital expenditure data from large-scale data center operators shows infrastructure buildout is accelerating, not slowing: Morgan Stanley raised its 2026 capex forecast for the top five hyperscalers to $805 billion, with $1.1 trillion projected for 2027; in Q1 2026 alone, the seven largest hyperscalers spent over $400 billion, with reported backlog orders around $1.3 trillion.
Such an enormous backlog indicates that demand far exceeds supply, meaning that in the long run the limiting factor isn’t model capability or compute resources but how efficiently deployment can be executed.
Whoever can master large-scale deployment inside complex organizations and integrate systems through customized work will capture the value created by the infrastructure buildout. In the FDE model, the truly scarce resource isn’t model-building ability but deployment expertise. Interestingly, this also shifts the pricing logic: from seat-based licensing to token-based consumption.
In the FDE model, you’re not just selling seats but a deployed, operational system that consumes tokens as the organization continues to use it. The stickiness of deployment is the key to sustained token revenue.
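A quick back-of-the-envelope comparison makes this pricing shift tangible. All numbers below are hypothetical, chosen only to show that seat revenue is capped by headcount while token revenue scales with how heavily the deployed system is actually used.

```python
# Hypothetical numbers comparing the two pricing models described above.
SEATS = 500
SEAT_PRICE_PER_MONTH = 60.0                # flat per-user license fee

TOKENS_PER_MONTH = 2_000_000_000           # usage grows as the deployed system spreads
PRICE_PER_MILLION_TOKENS = 5.0

seat_revenue = SEATS * SEAT_PRICE_PER_MONTH
token_revenue = TOKENS_PER_MONTH / 1_000_000 * PRICE_PER_MILLION_TOKENS

print(f"seat-based revenue per month:  ${seat_revenue:,.0f}")   # capped by headcount
print(f"token-based revenue per month: ${token_revenue:,.0f}")  # scales with usage
```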
Source: InfoQ