A senior White House official said AI should be "reviewed like drug approval"; the next day, the White House said his comments were taken out of context.

According to Beating Monitoring, The New York Times reported on May 4 that the White House was discussing the establishment of a pre-release review mechanism for AI models, at that point still under "consideration." Two days later, the situation escalated: White House National Economic Council Director Kevin Hassett appeared on Fox Business on Wednesday and said the government was studying an executive order that would require AI models to undergo government review before launch, "just like FDA approval for drugs." Trump's first action after taking office was to revoke Biden's AI safety executive order; now his administration is weighing a pre-approval process stricter than anything Biden proposed.

Less than 24 hours later, the White House moved to cool things down. A senior official said on Thursday that Hassett's comments had been "taken out of context" and that the White House was seeking "partnerships" with companies rather than "government regulation." White House Chief of Staff Susie Wiles posted a clarification on X late Wednesday night, stating that the government "does not pick winners and losers" and that safe deployment should be driven by innovators rather than bureaucrats. It was only her fourth post since creating the official account.

Although the tone has softened, the work has not stopped. Three insiders told Politico that the White House is discussing having intelligence agencies conduct pre-assessments before models are publicly released, with the goal of ensuring that U.S. intelligence can study and use these tools before Russia and China learn of the new capabilities. Deputy Secretary of Defense Emil Michael also publicly backed pre-evaluation at an AI conference in Washington on Thursday, saying Mythos is essentially a cybersecurity issue: "these models will come sooner or later," and the government must build a response mechanism.
The Center for AI Standards and Innovation (CAISI) under the Department of Commerce signed voluntary assessment agreements with OpenAI and Anthropic in 2024, and this week expanded the scope to include Google DeepMind, Microsoft, and xAI. The direct trigger for the policy U-turn was Anthropic's disclosure last month that Mythos's vulnerability-discovery capabilities were so strong that it could not be publicly released. The White House now faces a dilemma: on one side, Trump signed an executive order banning federal agencies from using Anthropic products and called its executives "left-wing lunatics"; on the other, those same agencies are rushing to get access to Mythos to probe their own systems. Industry has pushed back hard against mandatory pre-approval. Daniel Castro, vice president of the Information Technology and Innovation Foundation, said, "If approval can be withheld, it's a big problem for any company. One company getting approval while another doesn't creates a market-access gap that can last weeks or months, with a huge impact."
