Yesterday, The New Yorker published a report I can't stop thinking about. Its reporters interviewed over 100 people connected to OpenAI and gained access to internal documents that had never been made public. What came out of it? A story far more complicated than the boardroom drama of 2023.

It all started with a 70-page document that Ilya Sutskever, OpenAI's chief scientist, compiled in 2023. He gathered Slack messages, communications with HR, internal meeting minutes, all to answer one question: can Sam Altman, the man who may end up controlling the most dangerous technology in history, be trusted? His conclusion on the first page? "Sam demonstrates a consistent pattern of lying."

What caught my attention was one very specific example. In December 2022, during a board meeting, Altman claimed that several GPT-4 features had already undergone safety review. When the board asked to see the documentation, it turned out that two of the most controversial features had never gone before the safety panel at all. Just like that.

But there's more. Dario Amodei, who later founded Anthropic, kept over 200 pages of personal notes during his time at OpenAI, documenting how the company retreated step by step under commercial pressure. One thing particularly worried him: when Microsoft invested in 2019, there was a clause stating that if a competitor found a safer path to AGI, OpenAI would have to stop competing and help them. But Microsoft held veto rights over that clause. In practice, the safety guarantee was a dead letter.

There's a detail that's genuinely frightening. In mid-2023, Altman publicly announced that OpenAI would dedicate 20% of its computing capacity to a "superalignment" team, a commitment potentially worth over $1 billion. But four people who worked there say the compute actually allocated was only 1% to 2%, and on older hardware at that. The team was later disbanded. When journalists asked to interview whoever was responsible for existential-safety research, OpenAI's spokesperson responded: "That's not really a thing that exists."

The latest twist: Sarah Friar, OpenAI's CFO, has serious disagreements with Altman over an IPO. She believes the company isn't ready, given the financial risk of Altman's promise to spend $600 billion on computing capacity. And here's the strange part: she no longer reports directly to Altman. She now reports to Fidji Simo, who took medical leave last week. So we have a company in the middle of an IPO process, a CEO and CFO fundamentally at odds, a CFO who doesn't report to the CEO, and the CFO's boss on leave. Even Microsoft executives have lost patience, saying Altman "distorted facts, broke promises."

An ex-board member offered a striking description of Altman: he has an intense desire to please in every face-to-face interaction, yet at the same time shows an almost sociopathic indifference to the consequences of deceiving people. It's a rare combination, and a perfect one for a salesman.

Here’s the problem: if Altman were CEO of any ordinary tech company, it would just be a good corporate rumor. But OpenAI is developing technology that could reshape global economies, be used to create biochemical weapons, or launch cyberattacks. The former chief scientist and the former head of safety consider him untrustworthy. Partners compare him to SBF. And yet, he wants to take this company public with a valuation above $850 billion.

Gary Marcus, the NYU professor emeritus and longtime AI critic, asked a very direct question after reading all this: do you really feel safe leaving it to Altman to decide whether to release a model that could alter the course of humanity?

OpenAI’s response was basically to question the motives of the sources, without denying anything specifically. Altman did not respond to the direct accusations.

It’s a story about how a group of idealists concerned with AI risks created a nonprofit organization with a clear mission. Then technology advanced, massive capital arrived, capital demanded returns, and the mission began to shift. The safety team was disbanded, questioning individuals were expelled, the nonprofit structure turned into a for-profit entity, and the board that could shut down the company is now full of CEO allies. More than a hundred witnesses used the same description: "not constrained by the truth."