Protecting your privacy when using Gen AI

23/01/2024 | The Wall Street Journal

An article in The Wall Street Journal (£) examines privacy concerns around generative artificial intelligence (Gen AI) chatbots and the vast amounts of data they can access, some of which may include sensitive personal information. Experts suggest that while some risk of exposure always remains, there are ways to limit it. The article outlines measures certain Gen AI providers have implemented to protect user privacy, such as allowing users to turn off chat history storage and to delete past conversations. It concludes, however, that the best way for consumers to protect themselves is to avoid sharing personal information with Gen AI tools and to exercise caution when conversing with any AI.

£ - This article requires a subscription.

Generative AI, chatbots

What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications, of which more than 5,750 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.

Freevacy has been shortlisted in the Best Educator category of the PICCASO Privacy Awards, which recognise the people making an outstanding contribution to this dynamic and fast-growing sector.