Editor's note, May 18, 2024, 7:30 pm ET: This story has been updated to reflect OpenAI CEO Sam Altman's tweet on Saturday afternoon that the company was in the process of changing its offboarding documents.

For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company's superalignment team, the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

They're not the only ones who've left. Since last November, when OpenAI's board tried to fire CEO Sam Altman only to see him quickly claw his way back to power, at least five more of the company's most safety-conscious employees have either quit or been pushed out.

What's going on here? If you've been following the saga on social media, you might think OpenAI secretly made a huge technological breakthrough. The meme "What did Ilya see?" speculates that Sutskever, the former chief scientist, left because he saw something horrifying, like an AI system that could destroy humanity.

But the real answer may have less to do with pessimism about technology and more to do with pessimism about humans, and one human in particular: Altman. According to sources familiar with the company, safety-minded employees have lost faith in him.

"It's a process of trust collapsing bit by bit, like dominoes falling one by one," a person with inside knowledge of the company told me, speaking on condition of anonymity.

Not many employees are willing to speak about this publicly. That's partly because OpenAI is known for getting its workers to sign offboarding agreements with non-disparagement provisions upon leaving. If you refuse to sign one, you give up your equity in the company, which means you potentially lose out on millions of dollars.
(OpenAI did not respond to a request for comment in time for publication. After publication of my colleague Kelsey Piper's piece on OpenAI's post-employment agreements, OpenAI sent her a statement noting, "We have never canceled any current or former employee's vested equity nor will we if people do not sign a release or nondisparagement agreement when they exit." When Piper asked if this represented a change in policy, as sources close to the company had indicated to her, OpenAI replied: "This statement reflects reality." On Saturday afternoon, a little more than a day after this article published, Altman acknowledged in a tweet that there had been a provision in the company's offboarding documents about "potential equity cancellation" for departing employees, but said the company was in the process of changing that language.)

[Image: https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25454608/Screenshot_2024_05_18_at_6.33.22_PM.png?quality=90&strip=all]

One former employee, however, refused to sign the offboarding agreement so that he would be free to criticize the company. Daniel Kokotajlo, who joined OpenAI in 2022 with hopes of steering it toward safe deployment of AI, worked on the governance team until he quit last month.

"OpenAI is training ever-more-powerful AI systems with the goal of eventually surpassing human intelligence across the board. This could be the best thing that has ever happened to humanity, but it could also be the worst if we don't proceed with care," Kokotajlo told me this week.

OpenAI says it wants to build artificial general intelligence (AGI), a hypothetical system that can perform at human or superhuman levels across many domains.

"I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen," Kokotajlo told me.
"I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit."

And Leike, explaining in a thread on X why he quit as co-leader of the superalignment team, painted a very similar picture Friday. "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," he wrote.

Why OpenAI's safety team grew to distrust Sam Altman

To get a handle on what happened, we need to rewind to last November. That's when Sutskever, working together with the OpenAI board, tried to fire Altman. The board said Altman was "not consistently candid in his communications." Translation: We don't trust him.

The ouster failed spectacularly. Altman and his ally, company president Greg Brockman, threatened to take OpenAI's top talent to Microsoft, effectively destroying OpenAI, unless Altman was reinstated. Faced with that threat, the board gave in. Altman came back more powerful than ever, with new, more supportive board members and a freer hand to run the company.

When you shoot at the king and miss, things tend to get awkward.

Publicly, Sutskever and Altman gave the appearance of a continuing friendship. And when Sutskever announced his departure this week, he said he was heading off to pursue "a project that is very personally meaningful to me." Altman posted on X two minutes later, saying that "this is very sad to me; Ilya is ... a dear friend."

Yet Sutskever has not been seen at the OpenAI office in about six months, ever since the attempted coup. He has been remotely co-leading the superalignment team, tasked with making sure a future AGI would be aligned with the goals of humanity rather than going rogue. It's a nice enough ambition, but one that's divorced from the daily operations of the company, which has been racing to commercialize products under Altman's leadership.
And then there was this tweet, posted shortly after Altman's reinstatement and quickly deleted:

[Image: https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25451798/Screenshot_2024_05_16_at_5.05.54_PM.png?quality=90&strip=all]

So, despite the public-facing camaraderie, there's reason to be skeptical that Sutskever and Altman were friends after the former attempted to oust the latter.

And Altman's reaction to being fired had revealed something about his character: His threat to hollow out OpenAI unless the board rehired him, and his insistence on stacking the board with new members skewed in his favor, showed a determination to hold onto power and avoid future checks on it. Former colleagues and employees came forward to describe him as a manipulator who speaks out of both sides of his mouth, someone who claims, for instance, that he wants to prioritize safety but contradicts that in his behavior.

For example, Altman was fundraising with autocratic regimes like Saudi Arabia so he could spin up a new AI chip-making company, which would give him a huge supply of the coveted resources needed to build cutting-edge AI. That was alarming to safety-minded employees. If Altman truly cared about building and deploying AI in the safest way possible, why did he seem to be in a mad dash to accumulate as many chips as possible, which would only accelerate the technology? For that matter, why was he taking the safety risk of working with regimes that might use AI to supercharge digital surveillance or human rights abuses?

For employees, all this led to a gradual "loss of belief that when OpenAI says it's going to do something or says that it values something, that that is actually true," a source with inside knowledge of the company told me.

That gradual process crescendoed this week. The superalignment team's co-leader, Jan Leike, did not bother to play nice. "I resigned," he posted on X, mere hours after Sutskever announced his departure.
No warm goodbyes. No vote of confidence in the company's leadership.

Other safety-minded former employees quote-tweeted Leike's blunt resignation, appending heart emojis. One of them was Leopold Aschenbrenner, a Sutskever ally and superalignment team member who was fired from OpenAI last month. Media reports noted that he and Pavel Izmailov, another researcher on the same team, were allegedly fired for leaking information. But OpenAI has offered no evidence of a leak. And given the strict confidentiality agreement everyone signs when they first join OpenAI, it would be easy for Altman, a deeply networked Silicon Valley veteran who is an expert at working the press, to portray sharing even the most innocuous of information as "leaking" if he was keen to get rid of Sutskever's allies.

The same month that Aschenbrenner and Izmailov were forced out, another safety researcher, Cullen O'Keefe, also departed the company.

And two weeks ago, yet another safety researcher, William Saunders, wrote a cryptic post on the EA Forum, an online gathering place for members of the effective altruism movement, who have been heavily involved in the cause of AI safety. Saunders summarized the work he's done at OpenAI as part of the superalignment team. Then he wrote: "I resigned from OpenAI on February 15, 2024." A commenter asked the obvious question: Why was Saunders posting this?

"No comment," Saunders replied. Commenters concluded that he is probably bound by a non-disparagement agreement.

Putting all of this together with my conversations with company insiders, what we get is a picture of at least seven people who tried to push OpenAI to greater safety from within, but ultimately lost so much faith in its charismatic leader that their position became untenable.
"I think a lot of people in the company who take safety and social impact seriously think of it as an open question: Is working for a company like OpenAI a good thing to do?" said the person with inside knowledge of the company. "And the answer is only 'yes' to the extent that OpenAI is really going to be thoughtful and responsible about what it's doing."

With the safety team gutted, who will make sure OpenAI's work is safe?

With Leike no longer there to run the superalignment team, OpenAI has replaced him with company co-founder John Schulman. But the team has been hollowed out. And Schulman already has his hands full with his preexisting full-time job ensuring the safety of OpenAI's current products. How much serious, forward-looking safety work can we hope for at OpenAI going forward? Probably not much.

"The whole point of setting up the superalignment team was that there's actually different kinds of safety issues that arise if the company is successful in building AGI," the person with inside knowledge told me. "So, this was a dedicated investment in that future."

Even when the team was functioning at full capacity, that "dedicated investment" was home to a tiny fraction of OpenAI's researchers and was promised only 20 percent of its computing power, perhaps the most important resource at an AI company. Now, that computing power may be siphoned off to other OpenAI teams, and it's unclear if there'll be much focus on avoiding catastrophic risk from future AI models.

To be clear, this does not mean the products OpenAI is releasing now, like the new version of ChatGPT, dubbed GPT-4o, which can have a natural-sounding dialogue with users, are going to destroy humanity. But what's coming down the pike?

"It's important to distinguish between 'Are they currently building and deploying AI systems that are unsafe?' versus 'Are they on track to build and deploy AGI or superintelligence safely?'" the source with inside knowledge said.
"I think the answer to the second question is no."

Leike expressed that same concern in his Friday thread on X. He noted that his team had been struggling to get enough computing power to do its work and generally "sailing against the wind."

[Image: https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25452835/Screenshot_2024_05_17_at_12.11.21_PM.png?quality=90&strip=all]

Most strikingly, Leike said, "I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there."

When one of the world's leading minds in AI safety says the world's leading AI company isn't on the right trajectory, we all have reason to be concerned.

Update, May 18, 7:30 pm ET: This story was published on May 17 and has been updated multiple times, most recently to include Sam Altman's response on social media.