Study finds AI recruitment tools fail to reduce bias

17/10/2022 | BBC News

Researchers at the University of Cambridge have found that artificial intelligence (AI) tools used in human resources (HR) to reduce bias when hiring new employees do not work. The study found that nearly 25% of the 500 HR professionals surveyed used automated AI tools for recruitment; however, University of Cambridge Centre for Gender Studies researcher Kerry Mackereth said that the tools "can't be trained to only identify job-related characteristics and strip out gender and race from the hiring process."

In related news, an article in iotforall argues that AI ethics goes beyond data privacy and bias. Claire Carroll writes, "It is necessary to demonstrate a commitment to AI ethics and AI alignment that is more than lip service but is truly building helpful and non-harmful AI tools and systems."


What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications, of which more than 4,350 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.

Freevacy has been shortlisted in the Best Educator category of the PICCASO Privacy Awards, which recognise the people making an outstanding contribution to this dynamic and fast-growing sector.