Afraid to open Pandora's box? Anthropic's strongest model ever will not be released publicly.
A vulnerability in OpenBSD's codebase stayed hidden for 27 years. FFmpeg had one hidden for 16 years; the code behind it had been called over five million times before anyone found it. Neither was dug out by a top researcher on a bug bounty platform, nor by Google Project Zero. The finder was an unreleased Anthropic model, code-named Claude Mythos Preview.
On April 7, Anthropic announced Project Glasswing. The move itself is simple: give Mythos Preview only to a whitelist. The list includes AWS, Apple, Google, Microsoft, NVIDIA, Broadcom, Cisco, CrowdStrike, JPMorgan Chase, the Linux Foundation, and Palo Alto Networks, plus around 40 other organizations responsible for critical infrastructure. Anyone outside the list cannot get it. Anthropic explicitly said that, in the short term, it does not plan to release the model publicly.
This is the first time a frontier lab has proactively locked up its strongest asset.
Over the past two years, the release cadence has been almost reflexive. Every cross-generation release of GPT, Gemini, and Claude has followed "release, observe, patch." Anthropic's own Responsible Scaling Policy (RSP) is, at its core, a commitment framework: once a certain capability threshold is reached, mitigations at the corresponding level are applied, and releases continue. Glasswing isn't the next step in that framework; it's the first exception to it. A model that Anthropic has already judged "not suitable for release through the original process" is pulled out separately and given only to the defenders.
What did Mythos Preview accomplish? The official claim is “thousands of zero-day vulnerabilities, covering every mainstream operating system and every mainstream browser.” More telling than the numbers is the range of capability. Claude 4.6 Opus has a near-zero success rate on tasks like autonomous vulnerability development—which means that six months ago, Anthropic’s strongest publicly available model still couldn’t do this at all. Mythos can string multiple unrelated vulnerabilities into a complete attack chain, and a four-step browser exploitation sequence is already a proven example. Going from “nearly zero” to a “four-vulnerability chain” isn’t a single generational push—it’s a leap.
The maintainers have already felt it. Greg Kroah-Hartman of the Linux kernel and Daniel Stenberg, the author of curl, have both recently said the same thing publicly: over the past year, AI-generated security reports have gone from "spam-level" to "real, high-quality, impossible to ignore." The number of reports that open-source projects receive is increasing, and so is their quality, while the manpower available to maintainers hasn't. This is pain the defensive side has been suffering for a long time; Anthropic's move simply puts on the table what had previously been a vague anxiety.
It's worth looking at the whitelist itself: the "big three" clouds (AWS, Google, Microsoft), three hardware vendors (Apple, NVIDIA, Broadcom), two network equipment companies (Cisco, Palo Alto Networks), one endpoint security company (CrowdStrike), one open-source infrastructure body (the Linux Foundation), and exactly one bank: JPMorgan Chase.
This isn't random quota allocation. What Anthropic has drawn is a map of "if we can't hold the line, the sky falls." Most of the world's code runs on these companies' stacks, and most of the world's money runs on the ledgers of one of them. The logic behind the whitelist isn't "who needs it most" but "who, if they go down, pulls everyone else down first." Outside the whitelist, Anthropic also earmarked $4 million for open-source security organizations. The money funds manpower; the model provides capability. Put together, it boils down to one sentence: give maintainers a few months.
Anthropic's own wording is even more direct than the whitelist. In the company's statement, it wrote, "Given the speed of AI development, such capabilities will not remain long in the hands of participants committed to secure deployment." It then added, "Defending global network infrastructure may require years."
Put these two statements together. Anthropic judges that the time window during which the model will leak or be copied is short, while the time window for defenders to patch vulnerabilities cleanly is long. Glasswing’s entire significance lies in the gap between these two timeframes. By using a controlled first move, it buys a patch window of several months to a year.
There's also a Washington dimension to this. Anthropic is in ongoing communication with the U.S. government about Mythos Preview's capabilities. At the same time, it is still in a dispute with the U.S. Department of Defense over the permitted scope of military AI use. On one side, the company refuses certain military uses of the model; on the other, it proactively hands the same model to security teams at the Linux Foundation and Apple. These two things aren't contradictory; they are two sides of the same judgment. Anthropic is defining what this model can be used for, rather than leaving that definition to its users.
The most unusual part of Glasswing isn't what it did but when. In the past, AI companies proved themselves by releasing. Now, Anthropic has chosen to prove itself by not releasing. A frontier lab proactively locks up its strongest product and says the reason is not commercial, not unfinished alignment, not regulatory pressure, but a calculation that the open-release timetable cannot keep up with the patch timetable.
In the coming months, what matters won't be Mythos Preview itself, but how many of the vulnerabilities it surfaces across those roughly 50 whitelisted organizations actually get patched. The next step to watch is whether other frontier labs follow suit. If they do, an "open, iterate, open" industry will, for the first time, see an act of "lock it up first and deal with it later." If they don't, Anthropic will be the one standing at the doorway, holding the keys, watching the clock.