White House Weighs Reinstating Anthropic for Federal Use Amid Pentagon Fight: Report
In brief
The White House is reportedly moving to restore Anthropic’s standing across the federal government as the AI company remains locked in a dispute with the Pentagon over how its models can be used. According to a report by Axios citing multiple sources, officials in President Donald Trump’s administration are drafting guidance that could let federal agencies work around Anthropic’s “supply chain risk” designation and gain access to its newest AI models, including Mythos. The move would mark a reversal for an administration that previously treated Anthropic as a security risk and sought to remove its technology from government systems after CEO Dario Amodei refused to grant the Pentagon unrestricted access earlier this year.
Axios reported that a draft executive action under discussion could ease the administration’s stance, letting agencies use Anthropic’s technology while allowing officials to “save face and bring them back in.” In February, the Pentagon designated Anthropic a “supply chain risk.” After Trump ordered agencies to “immediately cease” using its products, federal departments were given six months to phase out the company’s technology. “The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!” Trump wrote on Truth Social in February. “That decision belongs to your commander-in-chief and the tremendous leaders I appoint to run our military.”
Anthropic CEO Dario Amodei reportedly met earlier this month with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent, as administration officials convened meetings with major banks, including Citigroup, Bank of America, Wells Fargo, Morgan Stanley, and Goldman Sachs, to discuss the cybersecurity risks tied to Mythos.

The news comes as demand for Mythos grows across the federal government and corporations. Revealed in March, Mythos is Anthropic’s most advanced AI model; the company says it can identify and exploit software vulnerabilities across major operating systems. Anthropic has said it will not yet release the model publicly, instead offering access to tech giants and governments to help them secure their software and understand its capabilities.

According to the AI Security Institute, Mythos Preview became the first AI model to complete “The Last Ones” (TLO), a 32-step corporate network attack simulation that typically takes humans 20 hours to finish. The National Security Agency is reportedly already running Claude Mythos Preview on classified networks.

At the same time, Firefox developer Mozilla said it used Mythos to identify and patch 271 vulnerabilities in the browser during testing, demonstrating the model’s potential both to exploit existing software and to help safeguard it from attackers. “As these capabilities reach the hands of more defenders, many other teams are now experiencing the same vertigo we did when the findings first came into focus,” Mozilla wrote in a post. “For a hardened target, just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it’s even possible to keep up.”

Anthropic did not immediately respond to Decrypt’s request for comment.