I just noticed that both Anthropic and OpenAI have significantly adjusted their safety commitments. In Anthropic's case, a central promise from the Responsible Scaling Policy was quietly dropped: the pledge to pause AI training if risk mitigation measures prove insufficient. Jared Kaplan, Anthropic's Chief Science Officer, justifies this by pointing to the realities of a highly competitive market: a unilateral pause, under that kind of competitive pressure, is simply not feasible.

It's similar at OpenAI: the mission statement has been rephrased and the word "safe" removed; the focus now is on AI that benefits humanity. That certainly aligns with the expectations of investors and policymakers, but it also shows how pragmatic these companies have become.

The timing is interesting: Anthropic is currently closing a funding round worth $30 billion at a valuation of $380 billion, while OpenAI is aiming for up to $100 billion, backed by Amazon, Microsoft, and Nvidia. It's clear why such safety language is being dropped: the pressure is enormous.

Another point: Anthropic has denied the Pentagon full access to Claude, which led to tensions with U.S. Secretary of Defense Pete Hegseth. This raises questions about defense contracts and shows that safety concerns and geopolitical realities sometimes conflict. It’s fascinating to observe how this industry is currently redefining itself.