Key AI safety questions following the EDPB stakeholder event

18/11/2024 | Half lawyer, half geek, mostly harmless

An article by author and lawyer Dr Kuan Hon on her personal blog addresses the thorny subject of processing personal data for AI-related purposes. Having recently participated in the EDPB stakeholder event held to gather views in support of the drafting of a consistency opinion on AI models, Dr Hon outlines her detailed responses to the two questions raised.

On the question of whether legitimate interest is a valid lawful basis for processing personal data for AI-related purposes, Dr Hon argues strongly in favour, particularly where such processing helps mitigate bias and discrimination against individuals. Here, Dr Hon shares a personal account of the flaws she has encountered in facial biometrics as a person with East Asian features. While others have faced even more severe repercussions from facial recognition technology, such as wrongful arrests and denials of service, Dr Hon argues that if AI systems were adequately trained on a diverse range of facial features, including those of more non-white individuals, such technologies would be less prone to inaccuracies and the consequent harm of misidentification.

Concerning the EDPB's question about whether AI models "contain" personal data, Dr Hon focuses on the use of a deployed AI system, since that is the main point at which a large language model (LLM) could regurgitate accurate or inaccurate personal data. Rather than debating the technicalities of whether such regurgitation or extraction is possible, however, Dr Hon argues that preventing it is the more worthwhile aim.

In related news, the Information Accountability Foundation (IAF) has released a comprehensive report examining the considerations for conducting legitimate interest assessments (LIAs) related to activities such as AI training. As the complexity of data processing increases with the rise of AI and the widespread use of LLMs, many organisations find that legitimate interests serve as the most appropriate lawful basis for data processing under the GDPR. The report highlights the need for organisations to assess the risks involved in data processing through a balancing exercise that takes into account the rights and interests of all stakeholders, ensuring that the fundamental rights of data subjects are not compromised.

However, the report identifies an ongoing gap between what businesses must demonstrate to validate their use of legitimate interests and what regulators expect in terms of compliance. This gap has fostered a lack of trust in the use of legitimate interests among both businesses and regulators. While the IAF has long advocated a more nuanced approach to balancing stakeholder interests in LIAs, current practices have focused narrowly on individual data protection rights.

