Altman Accuses Anthropic of 'Fear-Based Marketing' for Claude Mythos
OpenAI CEO Sam Altman has accused rival Anthropic of using “fear-based marketing” to promote its Claude Mythos AI model, according to comments made on the Core Memory podcast hosted by tech journalist Ashlee Vance. Altman argued that the fear-based rhetoric is designed to justify keeping advanced AI systems under the control of a “smaller group of people,” though he acknowledged that some safety concerns are legitimate.
Altman’s Marketing Critique
Altman stated that while there are valid concerns about AI safety, “it is clearly incredible marketing to say: ‘We have built a bomb. We are about to drop it on your head. We will sell you a bomb shelter for $100 million. You need it to run across all your stuff, but only if we pick you as a customer.’” He noted that it was “not always easy” to balance AI’s new capabilities with the belief that the technology should be accessible.
Altman acknowledged that “there are going to be legitimate safety concerns” but suggested that fear-based messaging may be weaponized to justify centralized control. He stated: “if what you want is like ‘we need control of AI, just us, because we’re the trustworthy people’, I think fear-based marketing is probably the most effective way to justify that.”
Claude Mythos Capabilities and Distribution
Anthropic’s Claude Mythos model was revealed last month and has drawn significant attention from researchers, governments, and the cybersecurity industry. The model can reportedly identify software vulnerabilities autonomously and execute complex cyber operations; in testing, Mythos identified hundreds of vulnerabilities in Mozilla’s Firefox browser and demonstrated the ability to carry out multi-stage cyberattack simulations.
Anthropic has restricted access to the system through Project Glasswing, a limited program granting select companies—including Amazon, Apple, and Microsoft—the ability to test its capabilities. The company has also committed significant resources to supporting open-source security efforts, arguing that defenders should benefit from the technology before it becomes more widely available.
Safety Framing and Government Response
Anthropic has framed Mythos’ capabilities as both a defensive breakthrough—allowing faster detection of critical software flaws—and a potential offensive risk if misused. The model has also exposed limitations in existing AI evaluation systems, with Anthropic acknowledging that many current cybersecurity benchmarks are no longer sufficient to measure the capabilities of its latest system.
Despite calls within parts of the U.S. government to halt use of the technology over concerns about its potential applications in warfare and surveillance, the National Security Agency has reportedly begun testing a preview version of the model on classified networks. On the prediction market Myriad, users put a 49% chance on Claude Mythos being released to the wider public by June 30.
A group of researchers claimed last week that they were able to reproduce Mythos’ findings using publicly available models.
Broader AI Release Rhetoric
Altman suggested that rhetoric about highly dangerous AI systems may intensify as capabilities improve, but argued that not all such claims should be taken at face value. He stated: “There will be a lot more rhetoric about models that are too dangerous to release. There will also be very dangerous models that will have to be released in different ways. I’m sure Mythos is a great model for cybersecurity but I think we have a plan we feel good about for how we put this kind of capability out into the world.”
Altman also dismissed suggestions that OpenAI is scaling back its infrastructure spending, saying the company would continue expanding its computing capacity. He noted: “I don’t know where that’s coming from… people really want to write the story of pulling back. But very soon it will be again, like, ‘OpenAI is so reckless. How can they be spending this crazy amount?’”