New research by the US Government Accountability Office (GAO) examining the commercial development of generative artificial intelligence (Gen AI) technologies has revealed limitations in vulnerability testing. The GAO found that commercial developers use several common practices to facilitate the responsible development and deployment of Gen AI, including accuracy benchmark tests, pre-deployment multi-disciplinary model evaluations, and security red teaming. These quantitative and qualitative evaluation practices aim to ensure that models provide accurate, contextually appropriate results and to prevent harmful outputs.
However, developers also acknowledge that challenges remain in ensuring the safe and trustworthy deployment of Gen AI technologies. In particular, they recognise that their models are not entirely reliable and that, even with mitigation strategies in place, models can still produce incorrect outputs, exhibit bias, or remain vulnerable to attack.