Privacy risks associated with generative AI systems

28/02/2023 | IAPP

The popularity of generative AI systems, such as ChatGPT, Google's Bard and Microsoft's Bing chatbot, has skyrocketed in recent weeks. Beneath the surface, these systems rely on massive amounts of data to interact with users in detailed, conversational ways. This reliance on data raises concerns about the data collection and privacy practices of generative AI technologies.

In related news, TechCrunch reports OpenAI has announced plans to alter its practices and policies on the use of customer data for training the AI models behind ChatGPT and its Whisper speech-to-text technology. Data submitted by third-party app integrations through the new application programming interfaces (APIs) will not be used for service improvements without user consent, and API user data will only be retained for 30 days.

Read Full Story

What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news all in one place. The information here is a brief snippet relating to a single piece of original content, or to several articles about a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications; more than 5,750 summary articles have been posted to the online archive, dating back to the beginning of 2020. A weekly roundup is available by email every Friday.
