EPIC releases report on generative AI harms

23/05/2023 | EPIC

A report by the Electronic Privacy Information Center (EPIC) addresses the potential risks posed by new generative artificial intelligence (AI) tools such as ChatGPT, Midjourney, and DALL-E. While these tools can produce novel, realistic text, images, audio, and video, the rapid integration of generative AI into consumer-facing products has made AI development less transparent and accountable. As a result, consumers face heightened risks of harm, including information manipulation, impersonation, data breaches, intellectual property theft, and discrimination. EPIC's report, Generating Harms: Generative AI's Impact & Paths Forward, offers case studies, examples, and research-backed recommendations to address these concerns and to provide a common understanding of the potential harms generative AI can produce.

generative artificial intelligence, AI, chatbots, foundation models, large language models

What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications, of which more than 5,750 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.
