The IAPP has published a list of 12 major privacy risks associated with artificial intelligence, with the aim of giving professionals a shared set of definitions in the absence of concrete regulation. The IAPP notes that these definitions may evolve, as the "taxonomy of AI privacy risks is not static — it's a living framework that must evolve with the AI landscape." The 12 risks are: increased surveillance, identification, aggregation, phrenology and physiognomy, secondary use, exclusion, insecurity, exposure, distortion, disclosure, increased accessibility, and intrusion. Each can exacerbate existing privacy harms or intrude on personal space and solitude.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. Each entry is a brief snippet relating to a single piece of original content or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, from which more than 5,750 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is emailed every Friday.