An interesting story has unfolded between OpenAI and the Pentagon. Sam Altman, the company's CEO, unexpectedly admitted that the deal looked opportunistic and careless. It's rare to see the head of a major corporation openly acknowledge flaws in their own decisions.

The core problem was that OpenAI's technology could be used for domestic surveillance or by intelligence agencies such as the NSA. Unsurprisingly, this sparked a wave of criticism and concern within the community; the gaps in the contractual terms turned out to be more serious than they first appeared.

What did OpenAI do? It added new provisions to the agreement to protect its technology from potential misuse, an attempt to close precisely the loopholes that would have allowed government agencies to use AI for surveillance.

It's an interesting turn in the history of Big Tech's relationship with government. Confronted with ethical questions, the company chose to take responsibility. The obvious question remains: why weren't these flaws anticipated from the start? Still, a step toward responsible AI use in government contracts is a move in the right direction.