OpenAI scandal exposed: Ilya secretly compiled 70 pages of documents showing that Altman lied

The New Yorker published an 18-month investigation today, revealing for the first time a 70-page internal memo assembled in the fall of 2023 by OpenAI Chief Scientist Ilya Sutskever, as well as more than 200 pages of private notes that Anthropic co-founder Dario Amodei had kept for years.

These never-before-public documents point to the same conclusion: Sam Altman exhibits “a consistent pattern of lying.”

This isn’t a simple internal power struggle. When AI is seen as “the most dangerous invention in human history,” and when a company claims it will “ensure AGI benefits all of humanity,” the integrity of the people at the helm stops being a private matter—it becomes a public issue that affects everyone.

APPSO, based on the original reporting in The New Yorker, has reorganized and re-presented this integrity crisis that shook Silicon Valley.

It’s a story about power, lies, and the future of AI.

APPSO Highlights:

Altman’s double standards:

  • Ilya Sutskever submitted 70 pages of secret documents to the board, accusing Altman of a “consistent pattern of lying”

  • Altman was fired by the board for “not communicating candidly enough,” but the board was forced to reinstate him 5 days later

  • Employees call the incident “The Blip” (a Marvel-style disappearance and return)

  • Altman has been accused of having conflicting exclusive agreements with major players such as Microsoft and Amazon

A safety crisis:

  • OpenAI disbanded the superalignment team; of the promised 20% of compute, only about 1–2% was actually allocated

  • The company shifted from nonprofit to for-profit, with a valuation reaching or approaching the trillion-dollar range

  • A $50 billion AI infrastructure deal with Middle Eastern regimes

  • Technology opened up to the military, used for immigration enforcement, surveillance, and autonomous weapons

Power games:

  • Altman personally invested in 400+ companies and has complicated financial relationships with former boyfriends

  • Allegations that investors who competed for investments were frozen out

  • A shift from “effective altruism” to “effective accelerationism”

  • Close ties with Trump, donating $1 million to an inauguration fund

A 70-page memo about integrity

In the fall of 2023, Ilya Sutskever did something extremely rare in Silicon Valley: he secretly photographed internal company documents with his phone, compiled them into a 70-page memo, and sent it to board members.

Why photograph them in secret? Because he didn’t dare leave a trace on company equipment.

The memo was sent via “disappearing messages,” ensuring it would leave no trace. One board member who received it recalled, “He was terrified.”

The document, which has never been fully disclosed, opens with a list: “Sam exhibits a consistent pattern of…”

First: Lying.

Lying.

Not a “communication style issue,” not “overly optimistic,” not a “trait of visionary leadership.” Just one word: lying.

The man who wrote the memo had, back in 2019, officiated Greg Brockman’s wedding in the OpenAI office, with a robotic arm serving as the ring bearer. He had considered Altman and Brockman friends.

But in 2023, when Sutskever believed AGI was coming, he told another board member, “I don’t think Sam should be the one with his finger on the button.”

Another set of 200 pages of private notes

After leaving OpenAI, Dario Amodei co-founded Anthropic. During his years at OpenAI, he had kept private notes about Altman and Brockman.

More than 200 pages of these documents have circulated privately in Silicon Valley but have never been made public.

In one of the files, Amodei wrote: Altman’s words are “almost certainly bullshit.”

This is not a malicious attack from a rival. Before Amodei joined OpenAI in 2015, Altman had a one-on-one dinner with him at an Indian restaurant. At that dinner, Altman promised him that OpenAI would focus on safety—“maybe not immediately, but as soon as possible.”

Amodei recorded Altman’s commitments in his notes. Then, over the years, he documented how those commitments were broken one by one.

The notes are titled “My Experience with OpenAI,” with a subtitle: “Private: Do Not Share.”

Five days of coup and counter-coup

On November 17, 2023, Altman was watching an F1 race in Las Vegas when Sutskever invited him to a video call and read a brief statement: Altman was no longer an OpenAI employee.

The board released an announcement, carefully worded: Altman was fired because he was “not candid enough in his communications.”

Microsoft, which had invested $13 billion in OpenAI, learned the news only minutes before Altman was fired. CEO Satya Nadella later said, “I was very shocked, and I couldn’t get any answers about this from anyone.”

Reid Hoffman started making calls to find out what had happened: “I don’t know what the hell happened. We were looking for embezzlement, sexual harassment, but we found nothing.”

Then the counterattack began.

Altman’s $27 million mansion became a “government in exile.” Crisis communications expert Chris Lehane joined in, with his motto coming from Mike Tyson: “Everyone has a plan until they get punched in the mouth.”

Lehane urged Altman to launch an aggressive social media campaign. Airbnb co-founder Brian Chesky stayed in contact with tech reporter Kara Swisher, feeding criticism of the board.

Every evening at 6 p.m., Altman would pause the “war room” for a round of Negronis. “You need to stay calm,” he recalled. “What’s supposed to happen will happen.”

But his call logs show he was on calls for more than 12 hours a day.

The investment firm Thrive paused a planned investment, signaling that the deal would be completed only if Altman returned. The deal would have allowed employees to cash out equity worth millions of dollars.

A public letter demanding Altman’s return circulated internally at the company. Some signatories who hesitated received pleading calls and messages from coworkers. In the end, most of OpenAI’s employees threatened to leave with Altman.

The board was cornered. Helen Toner said one option was “Control-Z”: reversing the firing. “Or another option is the company falling apart.”

Even Mira Murati eventually signed the letter, though she had previously provided material for Sutskever’s memo.

Brockman’s wife Anna found Sutskever in the office and begged him to reconsider. “You’re a good person—you can fix this,” she said.

Later, in court testimony, Sutskever explained: “I felt like if we went down the path where Sam didn’t come back, OpenAI would be destroyed.”

One night, Altman took the sleeping pill Ambien and was woken by his husband Oliver Mulherin, who told him that Sutskever had wavered and that people wanted Altman to talk to the board. “I woke up in this crazy Ambien fog, completely disoriented,” Altman said. “And I thought, I can’t talk to the board right now.”

In a series of increasingly tense calls, Altman demanded that the board members who fired him resign.

In fewer than five days, Altman was reinstated.

Sutskever, Toner, and McCauley lost their board seats. The only member left from the original board was Quora founder Adam D’Angelo.

As a condition of exit, the departing board members demanded an investigation into the allegations against Altman. They also required the new board to independently oversee the external investigation.

But the two new members—former Harvard president Lawrence Summers and former Facebook CTO Bret Taylor—were selected after close negotiations with Altman.

“Would you be willing to do this,” Altman texted Nadella. “Bret, Larry Summers, Adam as the board, me as CEO, and then Bret handles the investigation.”

Employees now call this episode “the Blip,” after the Marvel storyline in which characters disappear and then return to a world profoundly changed by their absence.

An investigation with no report

One of the conditions set by the departing board members was that there must be an independent investigation.

OpenAI hired the law firm WilmerHale, which had handled internal investigations at Enron and WorldCom.

But six people close to the investigation said it seemed designed to limit transparency.

Investigators initially didn’t contact key people at the company. One employee reached out to Summers and Taylor to complain. “They only cared about what happened during the board drama, not the history of his integrity issues,” the employee recalled.

Others were unwilling to share their concerns about Altman because they felt there weren’t sufficient protections for anonymity. “Everything points to the outcome they want to find—that he’s cleared,” one employee said.

The purpose of corporate investigations is to confer legitimacy. In private companies, investigation findings sometimes aren’t written down, which can limit liability. But in cases involving public scandals, there is usually a higher expectation of transparency.

Before Travis Kalanick left Uber in 2017, the board had hired an outside firm whose 13-page summary of findings was released to the public.

Given OpenAI’s 501(c)(3) status and the high-profile nature of the firing, many executives expected to see the detailed results of the investigation.

In March 2024, OpenAI announced that the investigation had cleared Altman, but it did not publish a report. The company posted roughly 800 words on its website acknowledging “a breakdown of trust.”

People involved in the investigation said the reason no report was released was that none had ever been written.

The findings were limited to an oral briefing, shared only with Summers and Taylor.

“The investigation didn’t reach the conclusion that Sam is a George Washington of integrity,” one person close to the investigation said. But it also seemed not to treat integrity issues as the core behind Altman’s firing—instead, it put most of its effort into finding explicit criminal behavior. On that basis, the investigation concluded that he could continue as CEO.

Not long after, Altman rejoined the board. He had been removed from the board when he was fired.

The decision not to produce a written report was made on the advice of Summers and Taylor’s private lawyers.

Many former and current OpenAI employees said they were shocked by the lack of disclosure.

Altman said he believed that all board members who joined after his reinstatement received the oral briefing. “This is an outright lie,” one person who had direct knowledge of the situation said.

Some board members said ongoing questions about whether the report was complete could lead to a “need for another investigation.”

A systemic collapse of safety promises

OpenAI’s core promise at its founding was: if AI might be the most dangerous invention in human history, then safety must come before everything.

In the summer of 2023, OpenAI announced the creation of a “superalignment” team, led by Jan Leike and Sutskever. The company promised to devote “20% of the compute we’ve secured so far” to the team, a commitment potentially worth more than $1 billion.

That promise evaporated.

Four people who worked on that team or closely collaborated with it said the actual resources were between 1% and 2% of the company’s compute.

One researcher on the team said, “Most of the superalignment compute is actually on the oldest clusters, using the worst chips.”

The researchers believed the better hardware was being reserved for revenue-generating work.

Leike complained to then-CTO Mira Murati, but she told him not to bring it up again: the promise had never been realistic.

At a meeting in December 2022, Altman assured the board that the various features in the upcoming GPT-4 model had been approved by the safety team.

Board member Helen Toner requested documents. She found that the most controversial features—one that allowed users to “fine-tune” a model for specific tasks, and another that allowed deploying it as a personal assistant—had not actually been approved.

When McCauley left the meeting, an employee pulled her aside and asked whether she knew about the “violations” in India. In the hours-long board briefing, Altman had not mentioned that Microsoft had released an early version of GPT-4 in India without completing the required safety review.

“It was totally overlooked,” said Jacob Hilton, an OpenAI researcher at the time.

In 2023, the company was preparing to release GPT-4 Turbo. According to a detailed account in Sutskever’s memo, Altman told Murati that the model did not need safety approval, citing the company’s general counsel, Jason Kwon.

But when she asked Kwon via Slack, he replied, “Uh… I’m confused about where Sam got that impression.”

After GPT-4 was released, Leike emailed board members. “OpenAI has already strayed from its mission,” he wrote. “We put product and revenue first—then AI capabilities, research, and scaling. Alignment and safety are third.”

He continued: “Other companies, like Google, are learning the lesson that they should deploy faster and ignore safety issues.”

The superalignment team was disbanded in 2024 without completing its mission. Sutskever and Leike resigned.

On X, Leike wrote: “Safety culture and processes have given way to shiny products.”

Not long after, the AGI Readiness team, responsible for helping society prepare for the impact of advanced AI, was also disbanded.

On its most recent IRS disclosure forms, where the company is asked to briefly describe its “most important activities,” safety, which had appeared in the answers on earlier forms, was no longer listed.

OpenAI said its “mission hasn’t changed,” adding, “We will continue investing in and developing our safety work, and we will continue organizational change.”

The Future of Life Institute is a think tank whose safety principles Altman once endorsed. The organization grades the “existential safety” of every major AI company.

On its latest scorecard, OpenAI received an F.

To be fair, aside from Anthropic (which received a D) and Google DeepMind (a D-), every other major company also received an F.

“My gut feeling doesn’t match a lot of the things in traditional AI safety,” Altman said. He insisted he continues to prioritize these issues, but when asked for specifics he was vague: “We’ll still run safety programs, or at least programs adjacent to safety.”

When a reporter asked to interview researchers at the company working on existential safety—the kind of risk Altman once said could mean “lights-out for all of us”—an OpenAI representative seemed confused.

“What do you mean by existential safety?” the representative responded.

A dangerous geopolitical game of chicken

In the summer of 2017, Altman told a meeting of U.S. intelligence officials that China had launched an “AGI Manhattan Project,” and that OpenAI would need billions of dollars in government funding to keep up.

When asked for evidence, Altman said, “I heard about some things.”

It was the first time he made that claim in such meetings. After one of the meetings, he told an intelligence officer he would follow up with evidence.

He never did.

After investigating China’s supposed project, the officer concluded there was no evidence it existed. “It was just used as sales talk,” the officer said.

Altman said he didn’t remember describing Beijing’s efforts in that way.

But the “Manhattan Project” analogy was used again and again. According to interviewees and contemporaneous records, in 2017 Brockman floated a provocative idea: OpenAI could enrich itself by playing the world’s great powers, including China and Russia, against one another, perhaps even starting a bidding war among them.

Page Hedley, then a policy and ethics adviser, recalled that the idea amounted to: “Nuclear weapons worked—why not AI?”

He was shocked that no one refuted the premise: “We’re talking about what could be the most destructive technology ever—what happens if we sell it to Putin?”

Brockman insisted he never seriously considered an official auction of AI models. An OpenAI representative said: “Discussions among senior leadership have included ideas about what frameworks could encourage cooperation among states, such as an international space station for AI. Trying to describe it as anything more than that is completely absurd.”

Brainstorming meetings often produce outlandish ideas. Hedley hoped the notion—referred to as a “national plan”—would be dropped.

Instead, according to several participants and contemporaneous documents, OpenAI executives became increasingly excited about it.

Then-policy lead Jack Clark said Brockman’s goal was to “basically build a prisoner’s dilemma where all countries need to fund us,” and that it would “implicitly make not funding us somewhat dangerous.”

One junior researcher recalled that when the plan was detailed in a company meeting, he thought, “This is totally fucking insane.”

Executives discussed this approach with at least one potential donor. But later that month, after several employees discussed resigning, the plan was dropped.

“He would lose employees,” Hedley said. “I think that always weighed more in Sam’s calculations than, ‘This isn’t a good plan because it might lead to war between great powers.’”

Money and power in the Middle East

Altman’s initial fundraising target was Saudi Arabia.

He met Crown Prince Mohammed bin Salman for the first time at a dinner at a Fairmont hotel in San Francisco in 2016. Afterward, Hedley recalled, Altman described the prince as a “friend.”

In September 2018, Hedley’s notes show that Altman said, “I’m thinking about whether we might take hundreds of billions of dollars from the Saudi PIF (Public Investment Fund).”

The next month, an assassination squad reportedly acting on bin Salman’s orders killed Washington Post journalist Jamal Khashoggi and dismembered his body with a bone saw.

A week later, it was announced that Altman had joined the advisory board of Neom—a “future city” bin Salman wanted to build in the desert.

“Sam, you can’t be on this board,” Clark—who now works at Anthropic—told Altman. Altman initially defended his involvement, saying that Jared Kushner had assured him the Saudis “didn’t do this.”

Altman said he didn’t remember this; Kushner said the two were not in contact at the time.

As bin Salman’s role became clearer, Altman left the Neom board. But behind the scenes, according to a policy adviser whose counsel Altman sought, Altman treated the situation as a temporary setback and asked whether he could still get money from bin Salman.

“It wasn’t, ‘Is this a bad thing or not?’” the adviser said. “It was, ‘If we do this, what are the consequences? Is there an export controls issue? Are there sanctions? Like, can I get away with it?’”

By then, Altman had set his sights on another source of cash: the United Arab Emirates.

In the fall of 2023, Altman began quietly recruiting for a plan that would eventually be known as ChipCo: Gulf states would provide hundreds of billions of dollars to build massive microchip fabrication plants and data centers—some of them located in the Middle East.

Altman pitched a leadership role to Alexandr Wang (now head of Meta AI), telling him that Amazon founder Jeff Bezos could lead the new company. Altman also sought enormous investments from people in the UAE.

“My understanding is that the whole thing happened without any board knowing,” one board member said.

James Bradbury, a researcher Altman tried to recruit, recalled refusing him. “My first reaction was, ‘This could work, but I don’t know if I want it to work,’” he said.

AI capability may soon replace oil or enriched uranium as the resource that determines the balance of global power. Altman said compute is “the currency of the future.”

The physical location of data centers might seem unimportant, but many U.S. national security officials are anxious about concentrating advanced AI infrastructure in authoritarian Gulf states.

The UAE’s telecommunications infrastructure depends heavily on hardware from Huawei, the Chinese tech giant with ties to the Chinese government, and the UAE has reportedly leaked U.S. technology to Beijing in the past.

Intelligence agencies worry that advanced U.S. microchips sent to the UAE could be used by Chinese engineers.

Data centers in the Middle East are also more vulnerable to military strikes. In recent weeks, Iran bombed U.S. data centers in Bahrain and the UAE.

After Altman was fired, the person he relied on most was Chesky, the Airbnb co-founder and one of Altman’s most loyal supporters. The following year, at a Y Combinator alumni gathering, Chesky gave an improvised speech that stretched to two hours.

“It felt like group therapy,” he said. The takeaway: your instincts about how to run the company you founded are the right instincts, and anyone who tells you otherwise is gaslighting you.

“You’re not crazy, even if the people working for you tell you you’re crazy,” Chesky said.

In a blog post about that speech, Paul Graham gave a name to this confrontational attitude: founder mode.

Since the Blip, Altman has been in founder mode.

In February 2024, The Wall Street Journal published a description of Altman’s ChipCo vision: a joint venture funded by investments of $5 trillion to $7 trillion.

“fk it why not 8,” he tweeted.

That’s how many employees learned about the plan. “Everyone was thinking, ‘Wait—what?’” Leike recalled.

During an internal meeting, Altman insisted that the safety team had “been told.” Leike messaged him, urging him not to falsely imply that the work had been approved.

During the Biden administration, Altman explored obtaining a security clearance so he could join confidential AI policy discussions. But RAND employees coordinating the process raised concerns.

“He has been actively fundraising, with ‘tens of billions of dollars’ from foreign government sources,” one person wrote. “The UAE recently sent him a car. (I assume it was a very good car.)”

The employee continued: “The only person I can think of who has had foreign financial relationships at that scale was Jared Kushner, and the adjudicator recommended that he not be granted a clearance.”

Altman eventually backed out of the process.

“He was pushing these deal relationships, mainly with the UAE people, and that rang a lot of alarm bells for some of us,” one senior official who spoke with Altman told us. “A lot of people in government circles don’t trust him 100%.”

When asked about gifts from Tahnoon bin Zayed, the UAE’s national security adviser, Altman said, “I’m not going to get specific about what gifts he gave me. But he and other world leaders… gave me gifts.” He added, “We have a standard policy that applies to me: every gift from any potential business partner must be disclosed to the company.”

Altman has at least two supercars: an all-white Koenigsegg Regera worth about $2 million, and a red McLaren F1 worth about $20 million.

In 2024, Altman was seen driving his Regera through Napa. A few seconds of video appeared on social media: Altman in the low bucket seat, peering out the window of the gleaming white machine.

A tech investor aligned with Musk posted the video on X, writing, “Next I’m going to start a nonprofit.”

In 2024, Altman brought two OpenAI employees to visit Tahnoon’s $250 million superyacht, the Maryah. One of the largest vessels of its kind in the world, the Maryah has a helipad, a nightclub, a cinema, and a beach club.

Altman’s employees apparently looked out of place among Tahnoon’s armed security personnel—at least one person later told colleagues that the experience made them feel uneasy.

Altman later referred to Tahnoon on X as a “dear personal friend.”

The Biden administration ultimately refused to approve the plan. “We won’t build advanced chips in the UAE,” a Commerce Department official told Altman.

Four days before Trump’s inauguration, The Wall Street Journal reported that Tahnoon had paid Trump’s family $500 million in exchange for a stake in their cryptocurrency company.

The next day, Altman had a 25-minute call with Trump to discuss announcing a version of ChipCo, with the timing arranged so that Trump could take credit for it.

On Trump’s second day in office, Altman stood in the Roosevelt Room to announce Stargate, a $500 billion joint venture aimed at building a vast AI infrastructure network in the United States.

In May, the U.S. rolled back Biden’s export restrictions on AI technology. Altman and Trump went to meet bin Salman at the Saudi royal court.

Around the same time, the Saudis announced a massive state-backed AI company in the kingdom, with billions of dollars earmarked for international partnerships.

About a week later, Altman unveiled plans to expand Stargate to the UAE. The company planned to build a data center campus in Abu Dhabi seven times the size of Central Park, consuming roughly as much power as the city of Miami.

“The fact is, we’re building a portal through which we are literally summoning aliens,” said a former OpenAI executive. “That portal exists in the U.S. and in China; Sam has added one in the Middle East.”

He continued: “I think it’s very important to understand how terrifying this is. It’s the most reckless thing that has been done so far.”

A nonprofit lie

Because OpenAI was formed as a nonprofit, its board is responsible for putting humanity’s safety above the company’s success—or even its survival.

The company accepted charitable donations. Some former employees told us they joined because of assurances about the nonprofit structure and its lofty mission, even taking pay cuts to do so.

But internal records show the founders had private doubts about the nonprofit structure as early as 2017.

Brockman, Altman’s co-founder, wrote in his diary: “We can’t say we’re committed to being a nonprofit… If in three months we do a B-Corp, that’s a lie.”

OpenAI has since reorganized into a for-profit entity.

Soon after becoming CEO, Altman announced that OpenAI would create a “capped-profit” company owned by the nonprofit, a Byzantine corporate structure with no obvious precedent.

During the transition, board member Holden Karnofsky objected, arguing that the nonprofit was being severely undervalued. “I can’t sincerely do this,” said Karnofsky, who is Amodei’s brother-in-law.

According to contemporaneous notes, he voted against it. However, after the board’s lawyer said his objection “could be a sign of a need for further investigation into the legality,” his vote was recorded as an abstention—apparently without his consent, potentially constituting falsification of business records.

OpenAI told us that several employees remembered Karnofsky abstaining and provided meeting minutes recording his vote as an abstention.

Last October, OpenAI “restructured” into a for-profit entity. The company touted the affiliated nonprofit, now called the OpenAI Foundation, as one of the “most well-funded” nonprofits in history.

But the foundation holds only a 26% stake in the company, and all of its board members except one also sit on the for-profit board.

Testifying before Congress, Altman was asked whether he made “a lot of money.” He responded: “I don’t have equity in OpenAI. I did this because I love it.” It was a careful answer, given that he held indirect equity through a Y.C. fund.

Technically, that’s still true. But several people, including Altman, told us this could change soon.

“Investors tell me, ‘I need to know that when times are tough, you’ll stick with it,’” Altman said, though he added there was no “active discussion” about granting him equity.

According to court testimony, Brockman appears to hold shares worth about $20 billion. Altman’s stake would likely be worth even more.

Even so, he told us he was not primarily driven by wealth. One former employee recalled him saying, “I don’t care about money. I care more about power.”

A smear war by rivals

In the brutal competition for AI dominance, substantive criticisms of Altman have become entangled with the unscrupulous efforts of opposing factions, as rivals weaponize his personal life.

An intermediary closely tied to Musk, and in at least one case paid by him, spread dozens of pages of salacious, unverified opposition research reflecting sweeping surveillance: shell companies, approaches to personal contacts, interviews in gay bars about alleged sex workers.

During our reporting, multiple people at rival companies hinted to us that Altman pursued underage individuals, a narrative that has persisted in Silicon Valley but appears not to be true.

We spent months investigating it, conducting dozens of interviews, and found no evidence supporting it.

Musk continues to denounce Altman publicly, with nicknames like “Scam Altman.” (When Altman complained on X about a Tesla he had ordered, Musk replied, “You stole a nonprofit.”)

However, in Washington, Altman seems to have surpassed him. Musk spent more than $250 million helping Trump get reelected and worked in the White House for months. Then Musk left Washington, damaging his relationship with Trump along the way.

Altman is now one of Trump’s most favored tycoons, even accompanying him on a visit to Windsor Castle with the British royal family. Altman and Trump talk several times a year.

“You can, like, call him,” Altman said. “This isn’t buddy-buddy. But yeah, if I need to talk to him about something, I will.”

When Trump hosted a dinner for technology leaders at the White House last year, Musk was conspicuously absent; Altman sat across from the president.

“Sam, you’re a great leader,” Trump said. “What you told me before has been absolutely incredible.”

The real dangers of AI

Why does all of this matter?

AI already has life-saving applications, from medical research to weather warnings, and Altman’s promise of an astonishingly abundant future has fueled OpenAI’s growth.

But the danger is no longer fantasy.

AI has already been deployed in military actions around the world. Researchers have documented its ability to quickly identify chemical warfare agents.

OpenAI is facing seven wrongful-death lawsuits alleging that ChatGPT contributed to suicides and one killing. In the killing, chat logs show that ChatGPT encouraged a man’s paranoid delusion that his 83-year-old mother was surveilling him and trying to poison him. Soon afterward, he beat and strangled her, then stabbed himself.

OpenAI is fighting these lawsuits and says it is continuing to improve the protective measures in its models.

AI could soon cause serious labor disruptions—possibly eliminating millions of jobs.

The U.S. economy is becoming increasingly dependent on a handful of high-leverage AI companies, and many experts, sometimes including Altman, warn that the industry is in a bubble.

“Someone is going to be writing off staggering sums,” he told reporters last year.

OpenAI has one of the fastest burn rates of any startup in history, and it relies on partners who borrow huge sums of money. One board member told us, “The financial leverage the company is using now is both risky and scary.”

OpenAI disputes this.

If the bubble bursts, it’s not just one company at risk.

A question of trust

For years, Altman supported the Democratic Party. “I’m deeply skeptical of powerful strongmen who tell stories of fear to unite people against the weak,” he told us. “This is a Jewish thing, not a gay thing.”

In 2016, he supported Hillary Clinton, calling Trump a “threat to the United States like none before.” In 2020, he donated to Democratic and Biden victory funds.

During the Biden administration, Altman met with the White House at least six times. He helped shape a lengthy executive order establishing the first federal safety tests for AI, among other guardrails.

When Biden signed it, Altman called it “a good start.”

In 2024, as Biden’s polling numbers declined, Altman’s language began to shift. “I believe whatever happens in this election, the country will be fine,” he said.

After Trump won, Altman donated $1 million to his inauguration fund, then took a selfie with influencers Jake and Logan Paul at the inauguration ceremony.

On X, in his signature lowercase style, Altman wrote: “watching @potus more carefully recently has really changed my perspective on him (i wish i had done more of my own thinking…).”

On his first day in office, Trump scrapped Biden’s executive order on AI.

“He found a way to make it work with Trump,” a senior Biden-era official said of Altman.

From Y Combinator to OpenAI: the pattern

Altman’s time at Y Combinator helped establish a pattern for his behavior at OpenAI.

In 2018, several Y.C. partners were so frustrated with Altman’s actions that they went to Graham to complain. Graham, along with his wife and Y.C. co-founder Jessica Livingston, apparently had a candid conversation with Altman.

After that, Graham began telling people that while Altman agreed to leave the company, he resisted in practice.

Altman told some Y.C. partners he would step down as president, but would become chairman.

In May 2019, a blog post announcing that Y.C. had a new president included an asterisk: “Sam is transitioning to become YC chairman.”

A few months later, that post was edited to say “Sam Altman left any formal role at YC”; later, that phrase was completely removed.

Even so, until 2021, an SEC filing still listed Altman as chairman of Y Combinator.

Altman said he didn’t know about it until much later.

For years, Altman has insisted publicly and in recent testimony that he was never fired by Y.C. He told us he didn’t resist leaving.

On Twitter, Graham said they hadn’t wanted to get rid of Altman, only to make him choose between Y.C. and OpenAI. In a statement, Graham told us, “We don’t have legal authority to fire anyone. All we can do is apply moral pressure.”

But privately, he was unequivocal that Altman was removed because of the Y.C. partners’ lack of trust.

This depiction of Altman’s time at Y Combinator is based on conversations with several Y.C. founders and partners, plus contemporaneous materials—all of which indicate that the split wasn’t entirely mutual.

At one point, Graham told a Y.C. colleague that before he was removed, “Sam had been lying to all of us.”

The art of persuasion

Altman isn’t a technical genius, according to many people in his circle. Multiple engineers recalled that he misused or confused basic technical terminology.

He built OpenAI largely by leveraging other people’s money and technical talent.

That doesn’t make him unique. It makes him a businessman.

More remarkable is his ability to persuade careful engineers, investors, and the skeptical public that their priorities—no matter how mutually exclusive—should be his priorities too.

When those people tried to block his next moves, he often found words to neutralize them, at least temporarily; usually by the time they lost patience with him, he had already gotten what he needed.

“The structure he built—on paper, in the future—would constrain him,” said Carroll Wainwright, a former OpenAI researcher. “But when the future arrived and it was time to be constrained, he dismantled that structure.”

“He is incredibly persuasive. Like, Jedi mind tricks,” said a tech executive who worked with Altman. “He’s on the next level.”

A classic hypothetical in alignment research involves a battle of wits between humans and a highly capable AI. Researchers generally assume the AI will win, the way a grandmaster beats a child at chess.

Watching Altman outmaneuver those around him during the Blip, the executive continued, was like watching “an AGI break out of the box.”

Who should we believe?

We interviewed more than 100 people who had firsthand knowledge of how Altman conducts business: current and former OpenAI employees and board members; guests and staff at various Altman residences; his colleagues and rivals; his friends and enemies—and a few people, given Silicon Valley’s mercenary hiring culture, who have been both.

Some defended Altman’s business instincts and dismissed his rivals—especially Sutskever and Amodei—as would-be contenders for his throne.

Others portrayed them as easily fooled, absent-minded scientists, or hysterical “doomsayers” haunted by delusions that the software they’re building would somehow come alive and kill them.

Former board member Yoon believed Altman was “not some Machiavellian villain,” but simply capable, to an almost disabling degree, of persuading himself that his ever-shifting sales pitch was reality.

“He’s too immersed in his own self-belief,” she said. “So what he does, if you live in the real world, makes no sense. But he doesn’t live in the real world.”

However, most of the people we interviewed agreed with Sutskever and Amodei’s judgment: Altman has a ruthless will to power, and even among industrialists with their names on spacecraft, that sets him apart.

“He isn’t constrained by the truth,” a board member told us. “He has two traits that you almost never see together in the same person. The first is an intense desire to please—being liked in any interaction. The second is an almost sociopathic lack of concern about the consequences of deceiving someone.”

This board member wasn’t the only person to use the word “sociopathic” unprompted.

Among Altman’s peers in Y Combinator’s first batch was Aaron Swartz, a brilliant but troubled programmer who died by suicide in 2013 and is remembered in many tech circles as a kind of saint.

Shortly before his death, Swartz expressed concerns to a few friends about Altman. “You need to understand, Sam can never be trusted,” he told a friend. “He’s a sociopath. He’ll do anything.”

Multiple senior executives at Microsoft said that despite Nadella’s long loyalty, the relationship with Altman had become strained.

“He twists things, bends them, renegotiates, violates agreements,” one said.

Earlier this year, OpenAI reiterated that Microsoft is the exclusive cloud provider for its “stateless” APIs. On the same day, it announced a deal making Amazon an exclusive distributor of its AI-agent enterprise platform.

Although resale was allowed, Microsoft executives believed OpenAI’s plan could conflict with Microsoft’s exclusivity.

OpenAI insisted its deal with Amazon would not violate earlier contracts. A Microsoft representative said the company “believes OpenAI understands and respects” its legal obligations.

One Microsoft executive said of Altman: “I think there’s a small but real chance that he will ultimately be remembered as a Bernie Madoff- or Sam Bankman-Fried-level con artist.”

What is OpenAI betting on?

The premise behind OpenAI’s founding was that AI could be the most powerful—and also the most dangerous—innovation in human history, so it needed an unusual corporate structure.

The CEO must be a person with extraordinary integrity.

According to Sutskever, “Anyone committed to building technology that changes civilization carries a heavy burden—and unprecedented responsibility.”

But “the people who end up in these positions are usually the kind of person who’s interested in power—a politician type, someone who likes it.”

In one of the memos, he seemed concerned about entrusting the technology to someone “who just tells people what they want to hear.”

If OpenAI’s CEO proved unreliable, the six-member board had the power to fire him.

Some members, including AI policy expert Helen Toner and entrepreneur Tasha McCauley, took the memo as confirmation of what they already believed: the man to whom humanity’s future had been delegated could not be trusted.

In tense calls after the firing, the board urged Altman to acknowledge the pattern of deception.

“This is really fucked up,” he kept saying, according to those on the call. “I can’t change my personality.”

Altman said he didn’t remember this exchange. “I might have meant, ‘I was trying to be a unifying force,’” he told us, adding that this trait allowed him to lead an incredibly successful company.

He attributed the criticism to a tendency—especially early in his career—to avoid conflict.

But one board member offered a different explanation: “It means, ‘I have this lying-to-people trait, and I’m not going to stop.’”

Was the firing motivated by alarmism and personal grudges on the part of colleagues—or were they right that he can’t be trusted?

In February 2024, we spoke with Altman again. He was wearing a dark green sweater and jeans, sitting in front of a NASA moon rover photo. He tucked one leg under himself, then hung it over the arm of the chair.

He said one of his main flaws in the past, as a manager, was that he wanted to avoid conflict. “Now I’m very willing to fire people quickly,” he told us. “I’m very happy to say, ‘We’re going to bet on this direction.’” Any employee who didn’t like his choices would need to “leave.”

He is more optimistic about the future than ever. “My definition of victory is people’s lives getting wildly better, a crazy sci-fi future that comes true for all of us,” he said. “In terms of my hopes for humanity and the goals I expect all of us to achieve, I’m very ambitious. It’s weird—I have almost no personal ambition.”

Sometimes, he seems to realize it too. “Nobody believes you’re doing this just because it’s fun,” he said. “You’re doing it for power or something.”

Even people close to Altman find it hard to know where his “hope for humanity” ends and where his ambition begins.

His biggest strength has always been his ability to make different groups believe that what he wants and what they need is the same thing.

He took advantage of a unique historical moment, when the public was cautious about hype in the tech industry, and most researchers who could build AGI were afraid to bring it into existence.

Altman’s response was a move no other salesman had perfected: he used apocalyptic language to explain how AGI could destroy us all, and then asked why he shouldn’t be the one to build it.

Maybe it was a calculated masterstroke. Maybe he was simply probing for an advantage.

Either way, it worked.

Now the question is: what are we all betting on?
