Anthropic gives tech giants exclusive early access to a powerful new AI model for security testing, sending cybersecurity stocks surging.
Anthropic is handing a more powerful, undisclosed AI model to technology giants for testing, a response to the potential cybersecurity threats posed by advanced AI systems. The move triggered an immediate market reaction, with cybersecurity stocks rising across the board.
On Tuesday, April 7, Anthropic announced the launch of an industry consortium project called “Project Glasswing.” Companies such as Amazon, Apple, Microsoft, and Cisco will be granted access to its undisclosed new model, Mythos, to identify vulnerabilities in their own products and share their findings with industry peers.
Concerned that hackers could use the model to launch cyberattacks, Anthropic said it currently has no plans to release Mythos to the public, and that it will use feedback from Project Glasswing to set safety guardrails for the technology.
Fueled by the news, global cybersecurity stocks broadly rose on Tuesday. The Global X Cybersecurity ETF gained 0.9%, its sixth consecutive higher close.
Palo Alto Networks rose as much as 4.9%, CrowdStrike climbed 6.2%, Zscaler gained 1.8%, and Fortinet added 1.7%. Analysts said Anthropic's move clearly validated the boost that AI technology gives to demand for cybersecurity software.
Project Glasswing: giving defenders a head start
The core logic of Project Glasswing is to identify and patch potential cybersecurity vulnerabilities before powerful AI models are rolled out more broadly. Participating companies will use the Mythos model to proactively uncover weaknesses in their own products and share their findings across the industry.
Newton Cheng, who leads Anthropic's advanced red-team cybersecurity work, commented on the initiative.
Analysts say the project reflects growing concern within the technology industry: as AI models become more capable, criminals and hackers could use such tools to scan source code for vulnerabilities and break through network defenses.
Anthropic’s competitor OpenAI has likewise highlighted the growing cyber capabilities of its models, and has rolled out pilot projects aimed at putting such tools “into the hands of defenders first.”
Anthropic said it has communicated with U.S. government officials regarding Mythos’s security-related capabilities, but declined to disclose which specific agencies are involved.
Newton Cheng noted that the company’s existing work has already included cooperation with the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the U.S. National Institute of Standards and Technology (NIST).
Analysts: the threat environment is reaching a turning point
Wall Street analysts broadly read Anthropic’s move as a positive signal for the cybersecurity industry, saying it redefines the relationship between AI companies and security vendors.
Jefferies Financial Group said that for Palo Alto Networks and CrowdStrike the news is a clear signal of “collaboration rather than competition,” and shows that “the threat environment is entering a turning point.” It added that the two platform companies “are well positioned to deliver excess returns above the market in the emerging AI era.”
Stephens, a financial services firm, said the news “confirms the seriousness of stronger AI models’ cybersecurity implications, and the necessity for cybersecurity vendors and providers of critical enterprise software to collaborate in jointly defending against complex threats.”
Stephens expects enterprises to increasingly rely on trusted cybersecurity service providers to protect an ever more complex heterogeneous AI environment.
Bernstein, a private wealth management firm, took a different angle, noting that “Anthropic clearly pays close attention to code security and the possibility of its model being maliciously exploited,” and argued this “does not pose a fundamental obstacle for cybersecurity vendors, and may even create a tailwind.”