The New Yorker’s In-Depth Investigation: Why Do OpenAI Insiders Believe Altman Is Untrustworthy?

Original author: Xiao Bing, Deep Tide TechFlow

In the fall of 2023, OpenAI’s Chief Scientist Ilya Sutskever sat in front of a computer and completed a 70-page document.

The document was compiled from Slack message logs, HR records, and internal meeting minutes, all to answer a single question: can Sam Altman, the man in charge of what may be the most dangerous technology in human history, truly be trusted?

Sutskever’s answer is written on the document’s very first page, in its first line, as the heading of a list: “Sam demonstrates a consistent pattern of …”

First: Lying.

Two and a half years later, investigative reporters Ronan Farrow and Andrew Marantz have published a long-form investigation in The New Yorker. They interviewed more than 100 people, obtained internal memos never before made public, and uncovered more than 200 pages of private notes that Anthropic founder Dario Amodei kept during his time at OpenAI. The story these documents piece together is far uglier than the “palace intrigue” of 2023: how OpenAI transformed, step by step, from a nonprofit created to keep humanity safe into a commercial machine, with nearly every safety barrier dismantled by the same man, with his own hands.

Amodei’s conclusion in the notes was even more blunt: “OpenAI’s problem is Sam himself.”

The “original sin” of OpenAI

To understand the weight of this report, you first need to clarify how special OpenAI is.

In 2015, Altman and a group of Silicon Valley elites did something with almost no precedent in business history: they set up a nonprofit organization to develop what could be the most powerful technology humanity has ever built. The board’s responsibilities were spelled out clearly: safety comes before the company’s success, even before its survival. Put simply, if OpenAI’s AI ever becomes dangerous, the board is obligated to shut the company down itself.

The entire architecture hinges on one assumption: the person in charge of AGI must be extraordinarily honest.

What if they bet wrong?

The core bomb in the report is that 70-page document. Sutskever is not an office politician; he is one of the world’s top AI scientists. But by 2023 he had become increasingly convinced of one thing: Altman was continuously lying to executives and the board.

A specific example: in December 2022, Altman assured the board during a meeting that multiple features of the upcoming GPT-4 had already passed safety review. Board member Helen Toner asked to see the approval documents and found that the two most controversial features (user-customized fine-tuning and personal-assistant deployment) had never been approved by any safety panel at all.

Something even stranger happened in India. A staff member reported a violation to another board member: Microsoft had released an early version of ChatGPT in India ahead of schedule, without completing the required safety review.

Sutskever recorded another incident in the memo as well: Altman told then-CTO Mira Murati that the safety approval process wasn’t so important because the company’s general counsel had already signed off. When Murati checked with the general counsel, the reply came back: “I don’t know where Sam got that impression.”

Amodei’s 200+ pages of private notes

Sutskever’s document reads like a prosecutor’s indictment. Amodei’s 200-plus pages of notes read more like the diary of a witness at the scene of the crime.

During his years as OpenAI’s head of safety, Amodei watched the company backslide, step by step, under commercial pressure. In his notes he recorded a key detail from the 2019 Microsoft investment: he had inserted a “merge and assist” clause into OpenAI’s charter, stipulating that if another company found a safer path to AGI, OpenAI should stop competing and help that company instead. To him, this was the most important safety safeguard in the entire deal.

As the deal was about to be signed, Amodei discovered something: Microsoft had obtained veto power over this clause. What does that mean? Even if a competitor one day found a better path, Microsoft could block OpenAI’s obligation to assist with a single word. The clause remained on paper, but from the day the signatures went on, it was worthless.

Amodei later left OpenAI and founded Anthropic. The rivalry between the two companies ultimately comes down to a fundamental disagreement over how AI should be developed.

The missing 20% compute commitment

The report contains a detail about OpenAI’s “Superalignment” team that chills the spine once you read it.

In mid-2023, Altman emailed a Berkeley PhD student researching “deceptive alignment” (in which an AI behaves obediently during testing but pursues its own goals after deployment), saying he was deeply concerned about the problem and was considering a $1 billion global research prize. Encouraged, the student took a leave of absence and joined OpenAI.

Then Altman changed his mind: there would be no external prize; instead, the company would create a “Superalignment” team internally. OpenAI announced loudly that it would dedicate “20% of existing compute” to the team, a commitment potentially worth more than $1 billion. The announcement’s wording was dead serious, warning that if alignment went unsolved, AGI could lead to the disempowerment of humanity, or even human extinction.

Jan Leike, who was appointed to lead this team, later told reporters that the commitment itself was an effective “talent retention tool.”

And the reality? Four people who worked on the team or closely with it said the compute it actually received amounted to only 1% to 2% of the company’s total, and on the oldest hardware at that. The team was later disbanded, its mission unfinished.

When reporters asked to interview people at OpenAI responsible for “existential safety” research, the company’s PR response was both laughable and maddening: “That’s not … an actual thing.”

Altman himself was more composed. He told reporters that his “intuition doesn’t quite align with a lot of traditional AI safety approaches,” but that OpenAI would still do “safety projects, or at least projects tangential to safety.”

A sidelined CFO and the looming IPO

The New Yorker report was only half of the day’s bad news. The same day, The Information broke another major story: OpenAI CFO Sarah Friar has serious disagreements with Altman.

Friar privately told colleagues that she didn’t think OpenAI was ready to go public this year, for two reasons: the procedural and organizational work still to be completed was enormous, and the financial risk from the $600 billion in compute spending over five years that Altman has promised was too high. She wasn’t even sure OpenAI’s revenue growth could support those commitments.

But Altman wanted to push for an IPO in the fourth quarter of this year.

Stranger still, Friar no longer reports directly to Altman. Since August 2025 she has reported to Fidji Simo, OpenAI’s CEO of Applications. And Simo went on sick leave just last week. Picture the situation: in a company sprinting toward an IPO, the CEO and the CFO fundamentally disagree, the CFO doesn’t report to the CEO, and the CFO’s boss is on leave.

Even executives inside Microsoft couldn’t stand it, saying Altman “misrepresents facts, reneges repeatedly, and constantly overturns agreements that had been reached.” One Microsoft executive even said: “I think there’s a certain probability that he’ll ultimately be remembered as a scammer on the level of Bernie Madoff or SBF.”

A “two-faced” portrait of Altman

A former OpenAI board member described two of Altman’s traits to the reporters, in what may be the harshest character sketch in the entire report.

Altman, the board member said, has a very rare combination of traits: in every one-on-one exchange, he feels an intense desire to please the other person and to be liked. At the same time, he has an almost sociopathic indifference to the consequences of deceiving people.

It is extremely rare for both traits to appear in one person. For a salesman, it is the perfect gift.

The report offers an apt comparison: Steve Jobs was famous for his “reality distortion field,” his ability to make the whole world believe in his vision. But even Jobs never told customers, “If you don’t buy my MP3 player, the people you love will die.”

Altman has said similar things, about AI.

A CEO’s character problem is a risk for everyone

If Altman were merely the CEO of an ordinary tech company, these accusations would amount to nothing more than juicy business gossip. But OpenAI is not an ordinary company.

By its own description, it is developing what could be the most powerful technology in human history, one that can reshape the global economy and labor market (OpenAI itself just released a policy white paper on AI-driven unemployment) and that could also be used to create large-scale biological weapons or launch cyberattacks.

All the safety guardrails have been rendered meaningless. The founding nonprofit mission has given way to an IPO sprint. Both the former chief scientist and the former head of safety concluded that the CEO is “not trustworthy.” Partners compare the CEO to SBF. Given all that, why does this CEO get to decide, unilaterally, when to release an AI model that could change the fate of humanity?

Gary Marcus (a professor emeritus at New York University and a long-time advocate for AI safety) wrote a single sentence after reading the report: if a future OpenAI model could build large-scale biological weapons or launch catastrophic cyberattacks, would you really feel comfortable letting Altman alone decide whether to release it?

OpenAI’s response to The New Yorker was terse: “Most of this article recycles already-reported events, relying on anonymous phrasing and selective anecdotes from sources who clearly have personal motives.”

Very much in Altman’s characteristic style of response: don’t address the specific allegations, don’t deny the memos’ authenticity, just question the motives.

On the corpse of a nonprofit, a cash cow grows

OpenAI’s decade, written as a story outline, looks like this:

A group of idealists worried about AI risks creates a mission-driven nonprofit. The organization delivers extraordinary technical breakthroughs. The breakthroughs attract huge capital. Capital demands returns. The mission steps aside. The safety team is disbanded. Critics are purged. The nonprofit structure is turned into a for-profit entity. A board of directors that once had the power to shut the company down is now filled with the CEO’s allies. A company that once promised to put 20% of compute toward protecting human safety now has its public relations staff say, “That’s not an actual thing.”

As for the story’s protagonist, the more than 100 people who lived through it gave him the same label: “not bound by the truth.”

He is now preparing to take the company public at a valuation of more than $850 billion.

*This article’s information is compiled from public reports by multiple media outlets, including The New Yorker, Semafor, Tech Brew, Gizmodo, Business Insider, The Information, and others.
