
OpenAI is launching an ‘independent’ safety board that can stop its model releases

The company’s Safety and Security Committee will become an ‘independent Board oversight committee.’


Vector illustration of the ChatGPT logo. Image: The Verge
Jay Peters
is a senior reporter covering technology, gaming, and more. He joined The Verge in 2019 after nearly two years at Techmeme.

OpenAI is turning its Safety and Security Committee into an independent “Board oversight committee” with the authority to delay model launches over safety concerns, according to an OpenAI blog post. The committee recommended creating the independent board after a recent 90-day review of OpenAI’s “safety and security-related processes and safeguards.”

The committee, which is chaired by Zico Kolter and includes Adam D’Angelo, Paul Nakasone, and Nicole Seligman, will “be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed,” OpenAI says. OpenAI’s full board of directors will also receive “periodic briefings” on “safety and security matters.”

The members of OpenAI’s safety committee are also members of the company’s broader board of directors, so it’s unclear exactly how independent the committee actually is or how that independence is structured. (CEO Sam Altman was previously on the committee, but isn’t anymore.) We’ve asked OpenAI for comment.

By establishing an independent safety board, it appears OpenAI is taking a somewhat similar approach as Meta’s Oversight Board, which reviews some of Meta’s content policy decisions and can make rulings that Meta has to follow. None of the Oversight Board’s members are on Meta’s board of directors.

The review by OpenAI’s Safety and Security Committee also helped identify “additional opportunities for industry collaboration and information sharing to advance the security of the AI industry.” The company also says it will look for “more ways to share and explain our safety work” and for “more opportunities for independent testing of our systems.”

Update, September 16th: Added that Sam Altman is no longer on the committee.
