
Live updates from Elon Musk and Sam Altman’s court battle over the future of OpenAI


About 200 people work on safety at OpenAI.

Kolter laid out OpenAI’s different safety groups: the safety systems team, which works on guardrails and evaluations; the preparedness team, which handles OpenAI’s preparedness framework; the alignment team, which trains models to behave in ways that “align with human values”; the model policy team, which develops the model spec; and other teams focused on investigations. Addressing the controversial dissolution of OpenAI’s superalignment and AGI readiness teams, he said some of that research is now being done by other teams.
