Researchers uncover covert racial bias within generative AI models

16/03/2024 | The Guardian

A new report from Cornell University has found that generative artificial intelligence (AI) tools, such as OpenAI's ChatGPT and Google's Gemini, demonstrate increasing covert racism as they advance. Researchers from Cornell's technology and linguistics departments revealed that these large language models (LLMs) hold racist stereotypes about African American Vernacular English (AAVE), a dialect spoken by Black Americans. Because companies commonly use such tools to screen job applicants, the finding matters: the study shows that AI models react to less overt markers of race, such as dialect differences.

Read Full Story
Generative-AI, chatbots

What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications; more than 4,350 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.

Freevacy has been shortlisted in the Best Educator category of the PICCASO Privacy Awards, which recognise the people making an outstanding contribution to this dynamic and fast-growing sector.