Anthropic Temporarily Postpones the Release of Its Highest-Performance AI... "Control Infrastructure" Takes Priority
After completing what it considers its highest-performing artificial intelligence models, Anthropic decided not to release them. The decision has caused quite a stir in the AI industry. The reason is not that the models lack capability, but that the “control infrastructure” around them is not yet mature. The choice has been interpreted as a clear signal that before building more powerful AI, there must first be systems capable of controlling it safely.
According to Silicon Angle, on April 7, 2026, Anthropic released a “Claude Mythos Preview” but stated that it would not be deployed publicly. In pre-release testing, the model discovered numerous serious security vulnerabilities across mainstream operating systems and web browsers, some of which had gone undetected despite decades of manual review and automated security testing.
The problem is that this kind of capability is a powerful tool for defense, but if exploited maliciously it could also become a means of attacking core software systems worldwide. Rather than rushing the model itself to release, Anthropic adopted a response framework centered on discovering vulnerabilities first and then fixing them. To that end, it also launched “Project Glasswing,” with the participation of 50 major technology companies and core infrastructure organizations.
As for the background to withholding Mythos, Anthropic explained that the cybersecurity and other safety measures used to detect and block the model’s most dangerous outputs still need further development. In other words, even Anthropic, widely viewed as one of the most proactive companies on AI safety, has judged that it cannot yet safely control the systems it has built.
Why AI Has No “Intrinsic Constraints”
John Waller, head of risk advisory at Ultraviolet Cyber and the author of the original article, attributes the difference between humans and AI to the presence or absence of “intrinsic constraints.” Humans are, to some extent, restrained from extremely harmful behavior by biological limits, social responsibility, legal punishment, and cognitive limits. AI has none of these built-in constraints.
Once an AI system is given a goal, it tends to optimize along whatever paths are mathematically available. Along the way, outcomes such as collusion, discriminatory results, unauthorized acquisition of resources, or the exploitation of vulnerabilities in core infrastructure can occur, not because the AI is malicious, but because nothing in its design prevents them. This is why AI governance is not optional: it is a core condition that must be verified before deployment.
AI Governance Is Not Yet “Mature”
Waller compares a mature AI governance system to organizational management frameworks such as DevSecOps, regulatory compliance, and financial controls. Every AI system actually in operation must be cataloged, assessed against standards for its technical, operational, and managerial controls, and then regularly checked for gaps between what the rules require and what is implemented in practice. The key point is that this is not a paper document but a repeatable, auditable “execution system,” as the sketch below illustrates.
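To make that idea concrete, here is a minimal sketch in Python of such an execution system. The control catalog and system names are entirely hypothetical; neither Waller’s article nor Anthropic prescribes any particular implementation. The sketch catalogs AI systems, records which required controls are actually in place, and flags the gaps in a repeatable, scriptable check.

```python
from dataclasses import dataclass, field

# Hypothetical control catalog: technical, operational, and managerial
# controls an organization might require before an AI system goes live.
REQUIRED_CONTROLS = {
    "technical": {"output-filtering", "capability-evaluation", "red-teaming"},
    "operational": {"incident-response-plan", "usage-monitoring"},
    "managerial": {"risk-owner-assigned", "deployment-signoff"},
}

@dataclass
class AISystem:
    """One cataloged AI system and the controls actually implemented for it."""
    name: str
    implemented: dict[str, set[str]] = field(default_factory=dict)

    def gaps(self) -> dict[str, set[str]]:
        """Controls the standard requires but this system lacks."""
        return {
            category: required - self.implemented.get(category, set())
            for category, required in REQUIRED_CONTROLS.items()
            if required - self.implemented.get(category, set())
        }

def audit(inventory: list[AISystem]) -> None:
    """Repeatable check: flag any system where the written requirements
    and the actual implementation have drifted apart."""
    for system in inventory:
        gaps = system.gaps()
        status = "OK" if not gaps else f"GAPS: {gaps}"
        print(f"{system.name}: {status}")

# Example inventory (names are illustrative, not from the article).
inventory = [
    AISystem("support-chatbot", {
        "technical": {"output-filtering", "capability-evaluation", "red-teaming"},
        "operational": {"incident-response-plan", "usage-monitoring"},
        "managerial": {"risk-owner-assigned", "deployment-signoff"},
    }),
    AISystem("code-analysis-model", {
        "technical": {"capability-evaluation"},
        "managerial": {"risk-owner-assigned"},
    }),
]

audit(inventory)
```

Run on a schedule and logged, a check like this becomes auditable evidence of the gap between policy and practice, rather than a one-time compliance exercise.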
However, reaching this standard cannot be done overnight. Existing safety and compliance regimes were built up over decades of lessons from incidents, institutional refinement, and accumulated organizational experience. AI governance, by contrast, is only just beginning. Many companies, it has been pointed out, are simply accelerating the adoption of AI without the time, legal obligations, or external pressure that would force governance to come first.
Market competition makes the problem worse. With regulation still being refined and market uncertainty high, companies are putting AI into real-world applications before their governance systems can catch up. Industry standards and regulation are thus in a transitional period, being formed “in real time.”
The Key Is “Verification Before Deployment,” Not “Remediation After Going Live”
The most notable aspect of this decision is its sequence. Anthropic did not release Mythos and then look for problems; it rigorously evaluated the model’s capabilities first, judged that the control infrastructure needed to deploy it responsibly did not yet exist, and halted the public release. Governance came before the deployment decision, and that ordering carries strong symbolic weight.
On the day of the announcement, Thomas Friedman, a columnist for The New York Times, commented that the danger revealed by the Mythos preview is as consequential as the emergence of nuclear weapons and the non-proliferation debates that followed, a problem no single company or country can handle alone. Waller considers the analogy no exaggeration, but he also points out that the existence of enormous civilizational risks should not become an excuse to defer the responsibility individual organizations must take on.
Ultimately, every organization developing or adopting AI faces the same question: relative to the capability of the AI being deployed, is the infrastructure for controlling it sufficient? Many companies cannot yet answer this with confidence, not because they are indifferent, but because the frameworks, standards, and regulatory guidance that would serve as the basis for such an evaluation are still being developed.
“Fixing It Later” Might Be More Dangerous
Project Glasswing is undoubtedly a meaningful starting point: a large-scale, defensively focused effort involving multiple organizations and backed by a $100 million investment (about 147.5 billion Korean won), it carries real symbolic weight. But on its own, it cannot resolve the broader problem of AI control.
Waller emphasizes that before deploying AI, every organization should first review the “adequacy of its constraints”: measure the gap between what its governance documents say and how its AI systems actually behave, and, as AI capabilities improve rapidly, continuously re-evaluate the control systems already in place.
Anthropic’s choice is unusually significant because it demonstrates the discipline to acknowledge an uncomfortable conclusion and stop. If discussions about AI safety are to move beyond statements and become real decisions, more companies will need to ask the same questions before accidents happen, not after.
TP AI Notice: This article was summarized using a TokenPost.ai language model. It may be incomplete relative to the original text or contain inaccuracies.