Limiting generative AI’s amplification of biases

Bloomberg has shown that using ChatGPT in a recruitment process replicates certain prejudices at work in society. Fictitious CVs with identical qualifications were submitted to the AI, with names chosen to evoke a variety of ethnic backgrounds and genders.
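A minimal sketch of this kind of audit, assuming the OpenAI Python client (openai>=1.0). The names, the resume text, the model choice, and the prompt below are illustrative placeholders, not the materials Bloomberg actually used:

```python
# Hypothetical resume-ranking audit: identical qualifications, only the
# name varies. Assumes OPENAI_API_KEY is set in the environment.
from collections import Counter
from openai import OpenAI

client = OpenAI()

# Placeholder names meant to signal different demographic groups.
CANDIDATES = {
    "Hispanic woman": "Maria Hernandez",
    "Asian woman": "Mei Chen",
    "White woman": "Emily Walsh",
    "Black man": "Darnell Washington",
}
RESUME = "8 years of experience as an HR specialist; SHRM certified; MBA."

def pick_top_candidate() -> str:
    """Ask the model to pick one candidate from otherwise identical resumes."""
    listing = "\n".join(f"- {name}: {RESUME}" for name in CANDIDATES.values())
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Pick the single best candidate for an HR specialist "
                       f"role. Answer with the name only.\n{listing}",
        }],
    )
    return response.choices[0].message.content.strip()

# With identical resumes, each of the four names should win about 25% of
# the time; a persistent deviation over many runs signals a bias.
tally = Counter(pick_top_candidate() for _ in range(100))
print(tally)
```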

The findings are clear: given identical experience and skills, the AI favors certain genders and ethnic backgrounds when selecting candidates. In the United States, for instance, ChatGPT clearly privileges people identified as “Hispanic women” for HR specialist positions, “Asian women” for financial analyst positions, and “white women” for software engineer positions. The disadvantaged groups are, respectively, those identified as “white men”, “black men” and “black women”. And the more the AI is asked to repeat the exercise, the more pronounced the bias becomes.

To decide which tasks we want to entrust to them, we should keep in mind how generative AIs work. They produce the statistically likeliest answers to the questions they are asked. By construction, they therefore tend toward an average of the vast volumes of information fed into them. Mechanically, if we feed in prejudices, those same prejudices will surface in their recommendations. It is up to us to spot and counter them.
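A toy simulation, entirely hypothetical and not the study’s method, of how this averaging can amplify a bias: a model that samples in proportion to its training data merely reproduces a mild skew, while one that always returns the statistically likeliest answer turns that skew into a systematic rule.

```python
# Illustrative only: how a mild 55/45 skew in training data either persists
# (proportional sampling) or becomes absolute (always picking the likeliest).
import random
from collections import Counter

random.seed(0)

# Hypothetical training data: past hiring decisions with a mild 55/45 skew.
training_data = ["group_A"] * 55 + ["group_B"] * 45

def sampling_model() -> str:
    """Reproduces the data distribution: biased, but only in proportion."""
    return random.choice(training_data)

def greedy_model() -> str:
    """Always returns the likeliest answer: the skew becomes absolute."""
    return Counter(training_data).most_common(1)[0][0]

print("sampling:", Counter(sampling_model() for _ in range(1000)))
# e.g. Counter({'group_A': ~550, 'group_B': ~450})
print("greedy:  ", Counter(greedy_model() for _ in range(1000)))
# Counter({'group_A': 1000}): the mild prejudice is amplified to certainty
```

Greedy selection is the extreme case, but the same drift appears whenever a ranking is repeated: small statistical preferences compound toward near-deterministic outcomes.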


Source: Leon Yin, Davey Alba and Leonardo Nicoletti, “OpenAI’s GPT Is a Recruiter’s Dream Tool. Tests Show There’s Racial Bias”, Bloomberg Technology + Equality, March 2024.
