ChatGPT Would Vote Democrat, New Study Finds — But It’s Full of Flaws
The researchers studied ChatGPT’s political bias, but independent analysis casts doubt on the methodology
--
AI makes headlines every day. Politics makes even more, especially anything with the slightest partisan touch. No wonder the combination of the two attracts us like moths to a flame.
This piece covers a topic I consider crucial if we are to build a healthy relationship between AI and the world. It's not primarily about AI or politics but about a meta-topic: the importance of treating high-stakes areas, where AI can do real good or real harm, with enough care, respect, and intellectual honesty.
It just so happens that, over a sufficiently long timeframe, AI's effect on politics is probably the highest-stakes category of them all.
Before we begin, I want to give a shout-out to Arvind Narayanan and Sayash Kapoor for the amazing and invaluable work they do week after week on AI Snake Oil, demystifying one paper after another, especially those that make attractive but dubious claims.
That’s an unpaid public service. Thank you.
This article is a selection from The Algorithmic Bridge, an educational newsletter whose purpose is to bridge the gap between AI, algorithms, and people. It will help you understand the impact AI has on your life and develop the tools to better navigate the future.
AI’s political bias is a danger to democracy
A new paper published earlier this month on ChatGPT's political preferences claims the chatbot shows a "strong and systematic … bias … clearly inclined to the left side," even though the chatbot denies any such partisanship when asked. From the abstract: