OpenAI talks about not talking about goblins
References to goblins and gremlins spiked with the release of GPT-5.1’s ‘Nerdy’ personality, and then spread to other models.

OpenAI is opening up about its goblin problem. After a Wired report revealed instructions telling OpenAI’s coding model to “never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures,” the AI startup published an explanation on its website, calling references to the creatures a “strange habit” its models developed as a result of their training.


As outlined in the blog post, OpenAI began noticing metaphors referencing goblins and other creatures starting with its GPT-5.1 model, specifically when users selected the “Nerdy” personality option. OpenAI says the problem worsened with subsequent releases, until it discovered that its reinforcement learning training had rewarded the quirky metaphors under the Nerdy personality, and that newer models were then trained on those outputs.
The rewards were applied only in the Nerdy condition, but reinforcement learning does not guarantee that learned behaviors stay neatly scoped to the condition that produced them. Once a style tic is rewarded, later training can spread or reinforce it elsewhere, especially if those outputs are reused in supervised fine-tuning or preference data.
Though references to goblins and gremlins dropped off after OpenAI discontinued the Nerdy personality in March, they didn’t disappear completely from GPT-5.5 inside its Codex coding tool, because OpenAI had started training that model before finding the “root cause.” As a result, the company had to give Codex very specific instructions not to talk about the creatures. But if you’d prefer to have your AI code with some goblin sprinkled in, OpenAI has shared a way to reverse those instructions.