70-page confidential document's first allegation is "lying"; Altman told the board, "I can't change my personality"

Original title: 70-page secret document: first allegation is “lying,” Altman tells the board, “I can’t change my personality”

Original author: Lyding BlockBeats

Original source:

Reprinted: Mars Finance

According to monitoring by 1M AI News, Pulitzer Prize winner Ronan Farrow and New Yorker reporter Andrew Marantz have released a long-form investigative report. Based on interviews with more than 100 insiders, it discloses in full, for the first time, two core documents: a roughly 70-page secret dossier compiled by OpenAI's former Chief Scientist Ilya Sutskever in the fall of 2023, and more than 200 pages of internal notes accumulated by Anthropic CEO Dario Amodei during his time at OpenAI. Neither document had previously been made public.

The Sutskever dossier includes Slack messages, HR documents, and screenshots reportedly taken with a personal phone (to avoid monitoring on company devices). It opens with a checklist stating that "Sam shows a sustained pattern…," and the first item is "lying." The dossier accuses Altman of misrepresenting facts to executives and the board, and of deceiving colleagues about safety procedures. At the time, Sutskever told another board member: "I don't think Sam is the one who should have their hands on the button."

Amodei's notes are titled "My experience at OpenAI" (subtitled "Private document, do not share"). They circulated among peers in Silicon Valley but were never published. In them, Amodei writes that "OpenAI's problem is Sam himself," and accuses Altman of denying, during the signing of a $1 billion investment agreement with Microsoft, clauses that were already in the contract. Even after Amodei read the relevant language aloud word for word on the spot, Altman still refused to acknowledge it.

The report also reveals multiple facts that were previously undisclosed:

  1. The independent investigation promised after Altman's reinstatement never produced a written report. The law firm WilmerHale, which handled the investigation (and had previously led the Enron and WorldCom investigations), only briefed two newly appointed directors verbally. The resulting decision was based on advice from those two directors' private attorneys. Insiders say the investigation "seems intended to limit transparency," and some sitting directors believe it may need to be redone.

  2. The computing power the superalignment team actually received was about 1%-2% of the publicly pledged 20%, and most of it was allocated to clusters with the oldest, worst chips. When the reporters requested interviews with researchers working on existential safety, an OpenAI representative responded: "What do you mean by 'existential safety'? That's not a thing."

  3. Around 2018, senior executives seriously discussed a proposal internally called the "National Plan": letting major powers (including China and Russia) bid for AI technology. Jack Clark, then head of policy, described its goal as "to create a prisoner's dilemma so that all countries have to give us funding." The proposal was shelved after multiple employees threatened to resign.

  4. Multiple Microsoft executives expressed strong dissatisfaction with Altman. One said: "He misrepresented, distorted, renegotiated, and went back on the agreement," and believed there was "a small but real possibility" that "he will ultimately be remembered the way people remember Bernie Madoff, the mastermind of a Ponzi scheme, or Sam Bankman-Fried, the founder of FTX."

After Altman was fired, he was asked during a call with the board to acknowledge his pattern of deception. He repeatedly said, "This is too absurd," and then, "I can't change my personality." One board member present interpreted the remark this way: "What that sentence means is, 'I have a trait for lying to people, and I won't stop.'" Aaron Swartz, a programmer in Y Combinator's first cohort who died in 2013, had warned friends before his death: "You have to understand that Sam can never be trusted. He's a sociopath; he's capable of anything." The report says more than one interviewee used the term "sociopath" unprompted.

In more than a dozen conversations with the reporters, Altman denied intentional deception. He characterized his continually shifting promises as "good-faith adaptation" to a rapidly changing environment, and attributed the early criticism to a tendency to "avoid conflict too much." When asked whether running an AI company requires higher standards of integrity, he replied: "Yes. It requires a higher level of integrity; I feel the weight of that responsibility every day."
