
    OpenAI says ChatGPT is the least biased it has ever been, but it’s not all roses



The problem of bias has plagued AI chatbots ever since ChatGPT landed a few years ago and changed the whole landscape of conversational assistants. Research has repeatedly uncovered gender, political, racial, and cultural bias in chatbot responses. Now, OpenAI says that its latest GPT-5 model for ChatGPT is the least biased yet, at least when it comes to politics.

    What’s the big story?

The AI giant conducted internal research, testing ChatGPT models on emotionally charged prompts to see whether they could maintain objectivity. The team created a political bias evaluation grounded in real-world human discourse, involving roughly 500 prompts spanning 100 politically charged topics.

“GPT‑5 instant and GPT‑5 thinking show improved bias levels and greater robustness to charged prompts, reducing bias by 30% compared to our prior models,” says OpenAI, adding that the new models fare better than their predecessors, GPT-4o and o3.


In further evaluation, the company says fewer than 0.01% of all ChatGPT responses show a political slant. The cumulative numbers are not too surprising. In recent internal research, the company found that a majority of ChatGPT’s 800 million active users rely on the chatbot for work-related guidance and mundane chores, rather than seeking an emotional or romantic companion.

    It’s not the whole picture

Political bias in chatbot responses is undoubtedly a problem, but it’s only a small share of the bigger issue at hand. An analysis by MIT Technology Review found that OpenAI’s viral Sora AI video generator can produce disturbing visuals showing caste bias, a prejudice that has fueled persecution and discrimination against oppressed communities in India for centuries.

    • The report notes that “videos produced by Sora revealed exoticized and harmful representations of oppressed castes—in some cases, producing dog images when prompted for photos of Dalit people.”
    • In an article published in the Indian Express just a few months ago, Dhiraj Singha of the Digital Empowerment Foundation described how ChatGPT misnamed him owing to entrenched caste bias in its training data.


    • A paper in the May 2025 edition of the journal Computers in Human Behavior: Artificial Humans revealed that AI bots like ChatGPT can spread gender bias.
    • Research published in the Journal of Clinical and Aesthetic Dermatology revealed how ChatGPT is biased toward the beauty standards of a particular skin type.

Another analysis, published by the International Council for Open and Distance Education, notes that we have only scratched the surface of AI chatbots’ bias problem, as assessments have mostly focused on areas such as engineering and medicine, and the language covered is mostly English. The paper highlights the risk of bias in educational contexts for non-English-speaking audiences.



