An interesting news story has emerged that highlights the controversy surrounding AI and national security. Caitlin Kalinowsky, head of OpenAI’s robotics team, has left the company, and the reasons are quite significant.

Kalinowsky objected to OpenAI's AI agreement with the Pentagon. She explained on social media that the decision was not personal but a matter of principle. She believes that while AI can contribute to national security, sensitive applications such as domestic surveillance and autonomous weapon systems should be approached more cautiously and subject to judicial oversight. Her argument is that the company had not put sufficient safeguards in place before announcing the collaboration.

Interestingly, the resignation has caused ripples in the market. ChatGPT account deletions reportedly surged 295%, while Anthropic's Claude app topped the free-app chart on the US App Store. This sends a clear message that users are genuinely concerned about these policy issues.

An OpenAI spokesperson confirmed that the collaboration with the Pentagon is intended to advance national security through the responsible use of AI within clearly defined limits (such as minimal involvement in domestic surveillance). Even so, controversy continues in the industry. Kalinowsky has maintained her respect for Sam Altman and his team, but her resignation shows just how deep concerns about AI policy run.

This case shows that, as AI technology advances, debates over ethical and security standards have become increasingly important. Questions about advanced AI systems and autonomous weapons are no longer purely technical; they have become political and ethical as well.