OpenAI split its trust and safety team, creating three separate groups taking on AI risk.

The Information reported that OpenAI has abandoned its search for a replacement for trust and safety head Dave Willner, who stepped down in July. Instead, it is replacing the division with three teams: Safety Systems, Superalignment, and Preparedness.

The company said in a blog post that Safety Systems will focus on the safe deployment of advanced AI models and artificial general intelligence; Superalignment will work on aligning AI systems whose intelligence surpasses humans'; and Preparedness will conduct safety assessments for foundation models.
