Originally used in Cold War simulations, the term "red teaming" has evolved into a critical stress-testing process for artificial intelligence (AI) systems, large language models (LLMs) and generative AI (gen AI), used to identify vulnerabilities such as problematic outputs and performance issues. However, companies developing or using these models must also safeguard the results and communications of red-team testing to prevent exposure of their testing methods and any vulnerabilities uncovered. Where lawyers are involved, one way to protect this information is through legal professional privilege (LPP). In an article for the IAPP, Luminos.Law's Andrew Eichen, Ekene Chuks-Okeke and Brenda Leong examine how organisations are using red teaming to navigate risks associated with AI.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, from which more than 5,750 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.