Kolter laid out OpenAI’s safety groups: the safety systems team, which works on guardrails and evaluations; the preparedness team, which manages OpenAI’s preparedness framework; the alignment team, which helps train models in ways that “align with human values”; the model policy team, which develops the model spec; and other teams focused on investigations. Addressing the controversial dissolution of OpenAI’s superalignment and AGI readiness teams, he said some of that research is now being done by other teams.