A US federal judge in San Francisco has temporarily blocked the Pentagon’s designation of AI firm Anthropic as a supply chain risk, halting a directive that ordered federal agencies to stop using its Claude chatbot. Judge Rita Lin ruled that the government’s actions appeared arbitrary and potentially retaliatory, stating there was no legal basis to label an American company a national security threat for disagreeing with government policy.

The dispute began after negotiations collapsed over a Pentagon contract that would allow military use of Anthropic’s AI without restrictions. Anthropic opposed deploying its technology for lethal autonomous weapons and mass domestic surveillance, prompting the administration to move toward a government-wide ban. The court’s injunction allows Anthropic to continue operating with federal agencies while the lawsuit proceeds.