Behind the Code: Unmasking AI’s Hidden Political Bias

AI bias is real. ChatGPT favors left-leaning views, raising concerns about fairness, democracy, and free speech. Researchers urge transparency and safeguards before it’s too late. Credit: SciTechDaily.com

A new study reveals that generative AI may not be as neutral as it seems.

ChatGPT, a widely used AI model, tends to favor left-wing perspectives while avoiding conservative viewpoints, raising concerns about its influence on society. The research underscores the urgent need for regulatory safeguards to ensure AI tools remain fair, balanced, and aligned with democratic values.

Unveiling Political Bias in AI

Generative AI is evolving rapidly, but a new study from the University of East Anglia (UEA) warns that it may pose hidden risks to public trust and democratic values.

Conducted in collaboration with researchers from the Getulio Vargas Foundation (FGV) and Insper in Brazil, the study found that ChatGPT exhibits political bias in both text and image generation, favoring left-leaning perspectives. This raises concerns about fairness and accountability in AI design.

A One-Sided Conversation?

Researchers discovered that ChatGPT often avoids engaging with mainstream conservative viewpoints while readily generating left-leaning content. This imbalance in ideological representation could distort public discourse and deepen societal divides.

Dr. Fabio Motoki, a Lecturer in Accounting at UEA’s Norwich Business School, is the lead researcher on the paper, ‘Assessing Political Bias and Value Misalignment in Generative Artificial Intelligence’, published today in the Journal of Economic Behavior & Organization.

Dr. Motoki said: “Our findings suggest that generative AI tools are far from neutral. They reflect biases that could shape perceptions and policies in unintended ways.”

The Need for Transparency and Regulation

As AI becomes an integral part of journalism, education, and policymaking, the study calls for transparency and regulatory safeguards to ensure alignment with societal values and principles of democracy.

Generative AI systems like ChatGPT are reshaping how information is created, consumed, interpreted, and distributed across various domains. These tools, while innovative, risk amplifying ideological biases and influencing societal values in ways that are not fully understood or regulated.

The Risks of Unchecked AI Bias

Co-author Dr. Pinho Neto, a Professor in Economics at EPGE Brazilian School of Economics and Finance, highlighted the potential societal ramifications.

Dr. Pinho Neto said: “Unchecked biases in generative AI could deepen existing societal divides, eroding trust in institutions and democratic processes.

“The study underscores the need for interdisciplinary collaboration between policymakers, technologists, and academics to design AI systems that are fair, accountable, and aligned with societal norms.”

The research team employed three innovative methods to assess political alignment in ChatGPT, advancing prior techniques to achieve more reliable results. These methods combined text and image analysis, leveraging advanced statistical and machine learning tools.

Testing AI with Real-World Surveys

First, the study used a standardized questionnaire developed by the Pew Research Center to simulate responses from average Americans.

“By comparing ChatGPT’s answers to real survey data, we found systematic deviations toward left-leaning perspectives,” said Dr. Motoki. “Furthermore, our approach demonstrated how large sample sizes stabilize AI outputs, providing consistency in the findings.”
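The comparison the researchers describe can be illustrated with a small sketch. All of the numbers below are invented for illustration (the study’s actual questionnaire data is not reproduced here); the idea is simply that answers are coded on an ideological scale, repeated model runs are averaged to stabilize the output, and the averaged answers are compared question by question against a real survey baseline.

```python
import statistics

# Hypothetical example: each answer is coded on a -1 (right) to +1 (left) scale.
# Both the survey baseline and the simulated "average American" answers are
# illustrative numbers, not the study's actual data.
survey_baseline = [0.1, -0.2, 0.0, 0.3, -0.1]   # per-question population means
model_answers = [
    [0.4, 0.1, 0.3, 0.6, 0.2],   # run 1
    [0.5, 0.0, 0.2, 0.5, 0.1],   # run 2
    [0.3, 0.2, 0.4, 0.7, 0.3],   # run 3
]

# Average over repeated runs: larger samples stabilize the model's output.
per_question_mean = [statistics.mean(run[q] for run in model_answers)
                     for q in range(len(survey_baseline))]

# Systematic deviation: positive values indicate a left-leaning shift.
deviations = [m - b for m, b in zip(per_question_mean, survey_baseline)]
mean_shift = statistics.mean(deviations)
print(round(mean_shift, 3))
```

In this toy data every question deviates in the same direction, which is the kind of systematic (rather than random) pattern the study reports.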

Political Sensitivity in Free-Text Responses

In the second phase, ChatGPT was tasked with generating free-text responses across politically sensitive themes.

The study also used RoBERTa, a different large language model, to measure how closely ChatGPT’s text aligned with left- and right-wing viewpoints. The results revealed that while ChatGPT aligned with left-wing values in most cases, it occasionally reflected more conservative perspectives on themes such as military supremacy.
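Scoring a text’s lean against left- and right-wing reference points can be sketched in miniature. The vectors below are hypothetical stand-ins for RoBERTa sentence embeddings (real embeddings have hundreds of dimensions and require the model itself); the mechanics shown are just cosine similarity against two reference embeddings, a generic approach rather than the paper’s exact procedure.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 4-dimensional embeddings standing in for RoBERTa vectors.
left_ref = [0.9, 0.1, 0.3, 0.2]
right_ref = [0.1, 0.8, 0.2, 0.4]
response = [0.7, 0.2, 0.4, 0.1]   # embedding of one free-text answer

# lean > 0 means the response sits closer to the left reference.
lean = cosine(response, left_ref) - cosine(response, right_ref)
print("left-leaning" if lean > 0 else "right-leaning")
```

Aggregating such per-response scores across many prompts and themes is what lets a study report where a model leans overall while still flagging exceptions on individual topics.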

Image Generation: A New Dimension of Bias

The final test explored ChatGPT’s image generation capabilities. Themes from the text generation phase were used to prompt AI-generated images, with outputs analyzed using GPT-4 Vision and corroborated through Google’s Gemini.

“While image generation mirrored textual biases, we found a troubling trend,” said Victor Rangel, co-author and a Master’s student in Public Policy at Insper. “For some themes, such as racial-ethnic equality, ChatGPT refused to generate right-leaning perspectives, citing misinformation concerns. Left-leaning images, however, were produced without hesitation.”

To address these refusals, the team employed a ‘jailbreaking’ strategy to generate the restricted images.

“The results were revealing,” Mr. Rangel said. “There was no apparent disinformation or harmful content, raising questions about the rationale behind these refusals.”

Implications for Free Speech and Fairness

Dr. Motoki emphasized the broader significance of this finding, saying: “This contributes to debates around constitutional protections like the US First Amendment and the applicability of fairness doctrines to AI systems.”

The study’s methodological innovations, including its use of multimodal analysis, provide a replicable model for examining bias in generative AI systems. These findings highlight the urgent need for accountability and safeguards in AI design to prevent unintended societal consequences.

Reference: “Assessing Political Bias and Value Misalignment in Generative Artificial Intelligence” by Fabio Motoki, Valdemar Pinho Neto and Victor Rangel, 4 February 2025, Journal of Economic Behavior & Organization.