Altman Previews: OpenAI's New Cybersecurity Model GPT-5.5-Cyber Will Be Unveiled in a Few Days, Clashing with Claude Mythos
OpenAI has announced the upcoming release of GPT-5.5-Cyber, a dedicated cybersecurity model aimed at experts in the field, positioning it against Anthropic's strictly controlled defense strategy. Altman has previously predicted that a groundbreaking cyberattack is highly likely to occur in 2026.
OpenAI GPT-5.5-Cyber to Debut in a Few Days
OpenAI CEO Sam Altman briefly announced today (4/30) that GPT-5.5-Cyber, a new-generation cybersecurity model, will be released in the coming days for use by cybersecurity experts. He said the team will work with the ecosystem and governments on reliable access mechanisms to ensure the security of enterprises and infrastructure.
In April this year, Altman predicted during an interview with Axios founder Mike Allen that a disruptive cyber attack is highly likely to happen in 2026.
Public debate continues over whether his statements accurately reflect the threat level. Anthropic's recent launch of Claude Mythos, a model capable of autonomously identifying software vulnerabilities, has further intensified that debate and raised concerns within the U.S. government.
OpenAI Plans to Bring Cybersecurity Tools to All Levels of Government
The divergence in defense strategies between OpenAI and Anthropic reflects broader debates within the AI field.
According to CNN, until recently, OpenAI’s cybersecurity trust access program was limited to a select few partners, but is now opening permissions to all reviewed government levels, from federal agencies to local authorities, allowing approved units to use specialized versions of models with fewer protective restrictions.
OpenAI’s National Security Policy Director Sasha Baker pointed out that OpenAI does not see itself as the sole decision-maker regarding tool permissions and top priorities.
Disagreement Between Two AI Giants on Defense Strategies: Democratization vs. Strict Control
Anthropic’s Mythos model has the ability to identify and exploit software vulnerabilities. Due to potential threats, the company is gradually promoting it through a strictly controlled “Glass Wing” program, working with government representatives.
On security, Anthropic advocates a slow, cautious approach intended to prevent hackers from leveraging AI and triggering an arms race, whereas OpenAI plans to open access to its models broadly.
Baker argued that democratizing cybersecurity capabilities so everyone benefits is necessary, and that restricting access to the top 50 companies listed by Forbes is insufficient. She emphasized that this is an opportunity for companies to patch vulnerabilities before malicious actors get their hands on the tools.
Image source: Getty Images/ANTHONY WALLACE/AFP OpenAI National Security Policy Director Sasha Baker
OpenAI Actively Collaborates with the U.S. on Intelligence Era Action Plan
OpenAI recently held a hands-on workshop in Washington, D.C. Baker revealed that participants included representatives from the Pentagon, the White House, the U.S. Department of Homeland Security, and DARPA, who jointly tested the new models' security capabilities. The company plans to return to Washington in a few weeks to gather feedback.
Additionally, OpenAI is releasing an action plan to coordinate government and corporate cybersecurity efforts in the intelligence era. The company plans to introduce new security features for ChatGPT accounts in the coming days and provide tools to help the public improve personal cybersecurity habits.
Are They Demons or Saviors? AI Giants Play the Doomsday Card
However, many AI companies frequently warn of potential doomsday scenarios, sparking skepticism in academia.
In an interview with the BBC, University of Edinburgh ethicist Shannon Vallor argued that AI companies' fear-mongering strategy has been effective: by portraying their products as existential threats to the world, they neither harm their own interests nor limit their own power, while leading the public to believe that only these companies can provide protection.
She said that utopia and doomsday are two sides of the same coin: “In either case, the scale is too grand and mythologized, making it seem as if mechanisms like regulation, governance, or legal systems are powerless.”
This leads people to believe that their only option is to wait and see whether these technologies will ultimately become demons ending civilization or messianic saviors bringing utopia. Even the name “Mythos” appears designed to evoke a religious sense of awe.
Further reading:
New Policies Needed in the AI Era! OpenAI Proposes Four Major Initiatives: Three-Day Workweek, Robot Tax