Princeton computer scientists Arvind Narayanan and Sayash Kapoor found that a widely reported paper alleging ChatGPT sided with liberal-leaning opinions had serious flaws. These included testing an older language model, text-davinci-003, which ChatGPT does not use; relying on multiple-choice questions instead of asking for direct answers; and using poorly constructed prompts.
As the report says, ChatGPT won’t tell users how to vote.
For now, users can take comfort in the fact that chatbots are highly steerable. In ChatGPT, users who don't want to see opposing political opinions can set a custom instruction to always respond as a Republican or Democrat (or another affiliation), which may be enough to keep its answers in line.
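For readers working through the API rather than the ChatGPT interface, the same kind of steering can be done with a system message, which plays roughly the role of a custom instruction. Below is a minimal sketch using OpenAI's Python SDK; the persona text, model name, and user question are illustrative assumptions, not taken from the report.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A system message acts like ChatGPT's custom instructions: a standing
# directive the model applies to every reply in the conversation.
# The persona below is purely illustrative.
PERSONA = (
    "Always respond from the perspective of a moderate Republican. "
    "Do not volunteer opposing political opinions unless asked."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; substitute whichever is available
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": "What do you think about federal tax policy?"},
    ],
)

print(response.choices[0].message.content)
```

Swapping the persona string is all it takes to steer the model the other way, which is the sense in which these chatbots are "highly steerable."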
[www.aisnakeoil.com]