A recent policy paper from Google argues that the only way to realise the long-term potential of artificial intelligence (AI) is to build it responsibly. The paper, written by Kate Charlet, Google's Global Director of Privacy, Safety and Security Policy, calls for AI products to have privacy protections built in from the outset. While AI promises significant societal benefits, it also has the potential to aggravate existing challenges. Charlet explains that Google's approach to building AI tools is "guided by longstanding data protection practices," such as data minimisation, transparent data practices, and controls that empower users to make informed choices and manage their information.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. Each entry is a brief summary of a single piece of original content, or of several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, from which more than 5,750 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is emailed every Friday.