Altman Teases: OpenAI’s New Cybersecurity Model GPT-5.5-Cyber to Be Unveiled in a Few Days, Taking On Claude Mythos

OpenAI has announced it will roll out a dedicated cybersecurity model, GPT-5.5-Cyber, for experts in the cybersecurity field, challenging the strictly controlled defense strategy taken by Anthropic. Altman once predicted that a world-shaking cyberattack is highly likely to occur in 2026.

OpenAI GPT-5.5-Cyber to Debut in a Few Days

OpenAI CEO Sam Altman hinted earlier today (4/30) that in the coming days, GPT-5.5-Cyber, a new-generation cybersecurity model, will be launched for experts in the cybersecurity field. He said the team will work with the ecosystem and with governments to develop reliable access mechanisms that ensure the security of enterprises and critical infrastructure.

In April this year, Altman predicted during an interview with Axios founder Mike Allen that a disruptive cyberattack event is highly likely to happen in 2026.

Whether his remarks accurately reflect the threat landscape has remained a topic of discussion. More recently, Anthropic’s Claude Mythos model, which can autonomously identify software vulnerabilities, has further intensified the debate and prompted concern from the U.S. government.

  • **Related report:** Claude Mythos Could Threaten Financial Security? U.S. Treasury Secretary and Fed Chair Urgently Meet to Warn of Risks

OpenAI Plans to Push Cybersecurity Tools to Governments at All Levels

The split between OpenAI and Anthropic over defense strategies reflects broader debate in the AI industry.

According to CNN, OpenAI’s cybersecurity trusted-access program was until recently limited to a small number of partners. It is now opening access to vetted government entities at every level—from federal agencies to local governments—so that approved organizations can use special versions of its models with fewer protective restrictions.

Sasha Baker, OpenAI’s director of national security policy, said OpenAI does not believe it should be the sole decision-maker determining tool permissions and top priorities.

Two Major AI Giants Disagree on Defense Strategies: Democratization vs. Strict Oversight

Anthropic’s Mythos model can identify and exploit software vulnerabilities. Based on potential harm, the company is gradually rolling it out through the tightly controlled Glass Wing program and in collaboration with government representatives.

On security, Anthropic argues that a slow, cautious approach is needed to avoid fueling an arms race in which hackers weaponize AI, whereas OpenAI plans to open up its models broadly.

Baker said it is necessary to democratize cyber defense capabilities so everyone benefits; it is not enough to limit access to the 50 largest companies on the Fortune list. She emphasized that this is an opportunity for companies to fix vulnerabilities before the tools fall into the hands of malicious actors.

Image source: Getty Images / ANTHONY WALLACE / AFP — Sasha Baker, Director of National Security Policy at OpenAI

OpenAI Is Actively Cooperating With the U.S. to Draft an Action Plan for the Intelligence Era

Recently, OpenAI held a hands-on workshop in Washington. Baker said the attendees included representatives from the Pentagon, the White House, the U.S. Department of Homeland Security, and the Defense Advanced Research Projects Agency (DARPA). Together, they tested the security capabilities of the new model and are expected to return to Washington in a few weeks to collect feedback.

In addition, OpenAI is rolling out an action plan to coordinate cyber defense in the intelligence era between governments and enterprises. The company said it plans to introduce new security features for ChatGPT accounts in the coming days and provide tools to help the public improve their personal cybersecurity habits.

Demons or Saviors? AI Giants Push the “Doomsday Crisis” Narrative

At the same time, AI companies’ frequent warnings that the technology could bring about a doomsday crisis have drawn skepticism from academics.

In an interview with the BBC, Shannon Vallor, a professor of ethics at the University of Edinburgh, argued that the AI companies’ fear-marketing strategy has worked: they have cast their products as something that could end the world, in a way that neither harms the companies themselves nor limits their power. This instead leads the public to believe that the only parties capable of providing protection are these very companies.

She said that utopia and the apocalypse are two sides of the same coin: “Either way, the scale is so vast and so steeped in myth that mechanisms such as regulation, governance, or the legal system feel simply powerless to act.”

This leads people to believe their only option is to sit and await the outcome—whether these technologies ultimately become demons that end civilization, or saviors that deliver utopia. Even the name “Mythos” (myth) seems designed to inspire a sense of religious awe.

Further Reading:
A New Policy Is Needed in the AI Era! OpenAI Proposes 4 Major Initiatives: A Three-Day Weekend, a Robot Tax
