A major controversy has erupted at OpenAI. Caitlin Kalinowski, head of the robotics team, has resigned from the company over an AI agreement with the Pentagon. This is not simply a case of quitting a job; as Kalinowski herself put it, it is a matter of principle.

Her concerns are straightforward. While AI could be crucial to national security, sensitive areas such as domestic surveillance and autonomous weapon systems demand stronger controls and greater transparency. Kalinowski argues that the company failed to put sufficient safeguards in place before announcing the collaboration, and that the decision was made in haste.

OpenAI’s leadership sees it differently. CEO Sam Altman’s team says the partnership with the Pentagon is an effort to advance national security through cautious AI experimentation, with clear boundaries: no use for domestic surveillance and no development of autonomous weapons. Even so, disagreement within the industry over the collaboration continues.

The market reaction is telling. After news of Kalinowski’s resignation, ChatGPT app uninstalls rose by more than 295%, while Anthropic’s Claude app reached the top spot among free apps on the US App Store. This shows how sensitive users are to the policies and values of AI companies. On one side, major companies are striking complex political deals; on the other, users are switching to alternatives. It is a pivotal moment for the AI industry: the question is whether companies can balance technological advancement with ethical responsibility, or whether these tensions will only deepen.