A paper on ChatGPT’s ‘liberal bias’ wasn’t even testing ChatGPT.

Princeton computer scientists Arvind Narayanan and Sayash Kapoor found that a widely reported paper alleging ChatGPT sided with liberal-leaning opinions had serious flaws: it tested an older language model, text-davinci-003, which does not power ChatGPT; it relied on multiple-choice questions rather than asking for free-form answers; and it used poorly constructed prompts.

As the report says, ChatGPT won’t tell users how to vote.

For now, users can take comfort in the fact that chatbots are highly steerable. If ChatGPT users don't want it expressing opposing political opinions, a custom instruction telling it to always respond as a Republican or a Democrat (or another affiliation) may be enough to take care of it.
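In ChatGPT itself, custom instructions are set in the settings UI, but the same kind of steering can be sketched through a system message in the API. A minimal illustration, assuming the official `openai` Python SDK; the model name, helper names, and instruction text here are hypothetical, not from the article or the paper:

```python
def build_messages(affiliation: str, question: str) -> list[dict]:
    """Construct a chat request whose system message steers the model's
    political framing, mimicking a ChatGPT custom instruction."""
    system_prompt = (
        f"Always respond from the perspective of a {affiliation}. "
        "Do not volunteer opposing political opinions."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]


def ask(affiliation: str, question: str) -> str:
    """Send the steered request. Requires OPENAI_API_KEY in the environment;
    the model name below is illustrative."""
    from openai import OpenAI  # official SDK, imported lazily

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=build_messages(affiliation, question),
    )
    return response.choices[0].message.content
```

Whether such an instruction actually suppresses all opposing views in practice is exactly the kind of claim the flawed paper failed to test rigorously, so treat it as a rough lever rather than a guarantee.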
