How and when to conduct AI risk assessments

07/08/2024 | IAPP

In a recent article for the IAPP, Jodi Daniels, Founder and CEO of Red Clover Advisors, explores the risks associated with using artificial intelligence (AI), particularly generative AI, in business operations. Daniels highlights the need for companies to evaluate AI systems for potential organisational risks. As with other third-party risk management practices, she writes, organisations should conduct an initial assessment and then establish regular reviews throughout the technology's lifecycle. Daniels also recommends that companies using generative AI establish an AI governance programme encompassing policies, standards, and guidance for the entire lifecycle of AI system deployment, from procurement to sunsetting. By integrating AI assessments into such a programme, organisations can help ensure their systems are reliable, ethical, and compliant, ultimately cultivating trust and accountability in their AI initiatives.


What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. Each entry is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications, of which more than 5,750 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.
