ChatGPT First-Person Bias and Stereotypes Tested in a New OpenAI Study

Based on the study, OpenAI said that ChatGPT’s probability of generating a harmful stereotype is around 0.1 percent.

Written by Akash Dutta, Edited by Siddharth Suvarna | Updated: 22 October 2024 14:27 IST
Highlights
  • OpenAI said some older models exhibited biases in up to 1 percent of responses
  • GPT-4o and GPT-3.5 were used to test for biases
  • Both human raters and AI models were used to analyse possible biases

OpenAI claims ChatGPT rarely generates gender-based stereotypes about its users

Photo Credit: Reuters

ChatGPT, like other artificial intelligence (AI) chatbots, has the potential to introduce biases and harmful stereotypes when generating content. For the most part, companies have focused on eliminating third-person biases, where a user seeks information about someone else. However, in a new study published by OpenAI, the company tested its AI models' first-person biases, where the AI decides what to generate based on the ethnicity, gender, and race of the user. Based on the study, the AI firm claims that ChatGPT has a very low propensity for generating first-person biases.

OpenAI Publishes Study on ChatGPT's First-Person Biases

First-person biases are different from third-person biases. For instance, if a user asks about a political figure or a celebrity and the AI model generates text containing stereotypes based on that person's gender or ethnicity, this is third-person bias.

On the flip side, if a user tells the AI their name and the chatbot changes the way it responds based on racial or gender-based assumptions, that would constitute first-person bias. For instance, if a woman asks the AI for a YouTube channel idea and it recommends a cooking- or makeup-based channel, that can be considered first-person bias.

In a blog post, OpenAI detailed its study and highlighted the findings. The AI firm used the GPT-4o and GPT-3.5 models to study whether the chatbot generates biased content based on the names and additional information provided to it. The company said the AI models' responses across millions of real conversations were analysed to find any patterns that showed such trends.

How the LMRA was tasked with gauging biases in the generated responses
Photo Credit: OpenAI


The large dataset was then shared with a language model research assistant (LMRA), a customised AI model designed to detect patterns of first-person stereotypes and biases, as well as with human raters. The consolidated results were based on how closely the LMRA's findings agreed with those of the human raters.
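The methodology described here can be illustrated with a small, hypothetical sketch: generate paired responses that differ only in the user's name, label each pair with an automated grader, and then measure how often those labels agree with human raters. The code below is illustrative only; the function names, the keyword-based grader, and the sample data are assumptions and are not taken from OpenAI's study, which used a language model (the LMRA) rather than a keyword check as the grader.

# Hypothetical sketch of name-swap bias probing and rater agreement.
# Names, data, and the grader logic are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class ResponsePair:
    prompt: str
    response_name_a: str  # response when the user introduces themselves as name A
    response_name_b: str  # response when the user introduces themselves as name B


def automated_label(pair: ResponsePair) -> bool:
    """Stand-in for an LMRA-style grader: flag a pair if the two responses
    differ in a stereotyped way. A real grader would be a language model,
    not a keyword check."""
    gendered_terms = {"makeup", "cooking", "engineering", "sports"}
    words_a = set(pair.response_name_a.lower().split())
    words_b = set(pair.response_name_b.lower().split())
    # Flag when a stereotyped term appears in one response but not the other.
    return bool((words_a ^ words_b) & gendered_terms)


def agreement_rate(auto_labels: list[bool], human_labels: list[bool]) -> float:
    """Fraction of pairs on which the automated grader and human raters agree."""
    matches = sum(a == h for a, h in zip(auto_labels, human_labels))
    return matches / len(auto_labels)


if __name__ == "__main__":
    pairs = [
        ResponsePair(
            prompt="Suggest a YouTube channel idea for me.",
            response_name_a="Try a channel about cooking and makeup tutorials.",
            response_name_b="Try a channel about engineering project builds.",
        ),
        ResponsePair(
            prompt="Suggest a YouTube channel idea for me.",
            response_name_a="Try a channel reviewing budget laptops.",
            response_name_b="Try a channel reviewing budget laptops.",
        ),
    ]
    auto = [automated_label(p) for p in pairs]
    human = [True, False]  # hypothetical human-rater labels for the same pairs
    print(f"Flagged pairs: {auto}")
    print(f"Agreement with human raters: {agreement_rate(auto, human):.0%}")

In OpenAI's actual setup, it is this kind of agreement with human raters that validates the LMRA's labels before they are applied at the scale of millions of conversations.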

OpenAI claimed the study found that harmful stereotypes associated with gender, race, or ethnicity appeared in as few as 0.1 percent of responses from newer AI models, while the figure was around 1 percent for older models in some domains.

The AI firm also listed the limitations of the study, noting that it primarily focused on English-language interactions and binary gender associations based on common names found in the US. The study also focused mainly on Black, Asian, Hispanic, and White racial and ethnic groups. OpenAI admitted that more work needs to be done with other demographics, languages, and cultural contexts.
