Anthropic Claims No 'Kill Switch' for AI Models Deployed at Pentagon

According to a report by Beating, Anthropic has submitted filings to the Washington federal appeals court stating that once its AI models are deployed in the Pentagon's environment, the company has neither visibility into the models nor any technical means to control or shut them down; there is no ‘kill switch’ in place. Anthropic also noted that the Pentagon had the opportunity to test the models before deployment.

The filing is the latest development in the dispute between Anthropic and the Pentagon over the ‘supply chain risk’ label. In March of this year, the Pentagon designated Anthropic a supply chain risk, citing the company's improper interference in how its technology is used in sensitive military operations. The crux of the dispute is Anthropic's usage policy, which prohibits Claude from being used for autonomous weapons or mass surveillance, a restriction the Pentagon views as a ‘smokescreen.’

The litigation has produced a split between two courts: the Washington court rejected Anthropic's request to suspend the supply chain risk label, while the California court approved it. The practical effect is that Anthropic cannot participate in new Pentagon contracts but can continue to provide services to other government agencies.

Meanwhile, the Trump administration is pushing for the deployment of Anthropic's new model, Mythos, in federal agencies, with agency heads exploring how to use Mythos to defend against cyberattacks, a stance at odds with the Pentagon's position that Anthropic poses a national security risk. The next hearing is scheduled for May 19.
