Techub News reports, citing CryptoBriefing, that the Trump administration is considering mandatory safety reviews for new AI models, which would have to pass government assessments before public release. The move stems from Anthropic's Mythos model demonstrating the ability to discover hidden software vulnerabilities with national security implications. The administration had previously taken a deregulatory stance, but has recently held discussions on AI safety with executives from Anthropic, Google, and OpenAI. Analysts note that if U.S. AI models can identify critical system vulnerabilities, rival countries may possess the same capability, making reviews necessary to prevent the technology from being weaponized.