The Organisation for Economic Co-operation and Development (OECD) has published a policy paper outlining a common framework for reporting artificial intelligence (AI) incidents. The framework establishes a global standard for stakeholders across different jurisdictions and sectors. It allows countries to adopt a uniform reporting methodology while accommodating their unique domestic policies and legal structures.
The new OECD framework comprises 29 criteria and is designed to help policymakers understand AI incidents across a broad range of contexts. It aims to facilitate the identification of high-risk AI systems, the assessment of both current and emerging risks, and the evaluation of AI's impact on individuals and the environment.
Separately, Axios reports that cybersecurity professionals are voicing concerns over the UK and US governments' recent approach of framing AI safety as a security issue.
Meanwhile, an article by IAPP contributors explores why organisations must establish a baseline for their data governance safeguards to navigate the risks associated with AI technology.
