Building trust in AI requires a standardised approach to transparency

01/04/2025 | OECD

In a guest post for the Organisation for Economic Co-operation and Development (OECD), Kamya Jagadish, Product Public Policy at Anthropic, discusses the growing need for transparency in artificial intelligence (AI) systems. As AI becomes more advanced and pervasive, stakeholders are demanding clear information about how systems are developed and tested, and what safety measures are in place.

The article highlights that while transparency is gaining traction through voluntary commitments and emerging regulations, standardised practices are still lacking. Jagadish points to approaches such as Anthropic's Transparency Hub, which aims to make information about AI development accessible, and the OECD's framework for standardised reporting, which seeks to streamline industry reporting requirements while presenting complex AI information in a form a broad audience can understand. The article also discusses the challenges of achieving meaningful transparency and further steps that can be taken towards it.

Read Full Story

What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications, from which more than 3,250 summary articles have been posted to the online archive, dating back to the beginning of 2020. A weekly roundup is available by email every Friday.