Trump administration urged to conduct security reviews before AI models are released
Investing.com — An advocacy group called on Monday for the Trump administration to conduct security threat assessments of advanced artificial intelligence models before they are publicly released, and to refuse government contracts for models that fail to pass the review.
The White House is currently responding to safety concerns raised about the Mythos model from Anthropic, which could make complex cyberattacks faster and easier to execute, thereby creating national security risks.
The advocacy group, Americans for Responsible Innovation, sent a letter to government officials urging the Trump administration to establish mechanisms for reviewing the cyberattack capabilities and weapons-development potential of frontier models that major developers are preparing to release.
In the letter, the group said companies should be required to pass such a review in order to qualify for government contracts.
The Center for AI Standards and Innovation (CAISI) is currently reviewing certain AI models through voluntary agreements with OpenAI, Anthropic, Google, Microsoft, and xAI.
The group suggested that CAISI take the lead in drafting mandatory requirements, and that Congress establish a permanent agency within the U.S. Department of Commerce to enforce them.
The proposed requirements would apply to companies that spend $100 million or more annually on computing power to train frontier models, or that generate $500 million or more in annual revenue from AI products and services.
California set similar threshold standards for security reporting requirements last year.
This article was translated with the assistance of AI. For more information, please see our Terms of Use.