OpenAI has a new safety team — it’s run by Sam Altman
Following the departure of several key AI researchers, OpenAI announced that it’s forming a new safety committee led by CEO Sam Altman.

OpenAI is forming a new safety team, and it’s led by CEO Sam Altman, along with board members Adam D’Angelo and Nicole Seligman. The committee will make recommendations on “critical safety and security decisions for OpenAI projects and operations” — a concern several key AI researchers shared when leaving the company this month.


For its first task, the new team will “evaluate and further develop OpenAI’s processes and safeguards.” It will then present its findings to OpenAI’s board, on which all three of the safety team’s leaders sit. The board will then decide how to implement the safety team’s recommendations.
Its formation follows the departure of OpenAI co-founder and chief scientist Ilya Sutskever, who supported the board’s attempted coup to dethrone Altman last year. He also co-led OpenAI’s Superalignment team, which was created to “steer and control AI systems much smarter than us.”
The Superalignment team’s other co-leader, Jan Leike, announced his departure shortly after Sutskever left. In a post on X, Leike said safety at OpenAI has “taken a backseat to shiny products.” OpenAI has since dissolved the Superalignment team, according to Wired. Last week, OpenAI policy researcher Gretchen Krueger announced her resignation, citing similar safety concerns.
Along with the new safety board, OpenAI announced that it’s testing a new AI model, but it didn’t confirm whether it’s GPT-5.
Earlier this month, OpenAI revealed its new voice for ChatGPT, called Sky, which sounds eerily similar to Scarlett Johansson (something Altman even alluded to on X). However, Johansson then confirmed that she had refused Altman’s offers to provide a voice for ChatGPT. Altman later said that OpenAI “never intended” to make Sky sound like Johansson and that he reached out to Johansson after the company cast the voice actor. The entire affair left AI fans and critics alike concerned.
Other members of OpenAI’s new safety team include head of preparedness Aleksander Madry, safety head Lilian Weng, head of alignment science John Schulman, security head Matt Knight, and chief scientist Jakub Pachocki. But with two board members — and Altman himself — heading up the new safety board, it doesn’t seem like OpenAI is actually addressing its former workers’ concerns.
