I found this recent report quite interesting about what was really happening inside OpenAI. Basically, investigative journalists spent months interviewing more than 100 people involved, obtained internal memos that were never disclosed, and uncovered something quite disturbing: a 70-page document from chief scientist Ilya Sutskever concluding that Sam Altman demonstrated a consistent pattern of lies. This is not a small matter.

What caught my attention was how OpenAI started as a non-profit organization in 2015 with a clear promise to prioritize safety above all else. The idea was that if the AI became dangerous, the board would have the power to shut down the company. But then comes the central question: everything depended on an extremely honest person controlling the technology. And what if the bet was wrong?

The details are concerning. In December 2022, during a board meeting, Sam assured the board that GPT-4’s features had already undergone a safety review. When members asked to see the documents, they found that two of the most controversial features had never been approved by the safety panel. There are also notes from Dario Amodei, founder of Anthropic, who worked on safety at OpenAI, describing how the company was retreating step by step under commercial pressure.

There’s more. OpenAI publicly announced that it would allocate 20% of its computational capacity to a superalignment team, with a potential value above US$1 billion. But in practice? Four people who worked there confirmed that it was only 1–2% of the total capacity, using older hardware. The team was dismantled without completing its mission.

What really stood out to me was a former board member’s description of Sam. He has an extremely rare combination: in face-to-face conversations, he shows a strong desire to please; at the same time, he displays an almost sociopathic indifference to the consequences of deceiving people. Microsoft executives have even compared him to Bernie Madoff or SBF. Heavy stuff.

Now there’s the issue with CFO Sarah Friar, who disagrees with accelerating the IPO this year, arguing that the financial risks are too high (Sam has promised US$600 billion in computing expenses over five years). But then she stopped reporting directly to Sam and instead reports to another executive, who is on medical leave. So the company is heading into an IPO with fundamental disagreements between the CEO and the CFO. Absurd.

The point Gary Marcus raised makes sense: if a future OpenAI model were to manage to create biochemical weapons or launch cyberattacks, do you really want to leave it up to one person with this kind of integrity to decide whether to release it or not? OpenAI’s official response was vague, questioning the motives of sources instead of denying the specific facts.

It’s like that line I saw: a non-profit organization created to protect humanity turned into a commercial machine in which practically every safety measure was personally removed by the same person. The ten years, summarized: idealism → technological advancement → massive capital → the mission giving ground → safety dismantled → the structure converted into a for-profit entity.

All of this while Sam prepares to take OpenAI public with a valuation above US$850 billion. More than a hundred witnesses described him with the same label: not bound by the truth. This story is far more than corporate gossip. When we’re talking about what could be the most powerful technology in human history, CEO integrity isn’t a detail—it’s an existential risk for everyone.