UK AI Safety Institute study identifies significant risks in five in-use LLMs

12/06/2024 | UK AI Safety Institute

On May 20, 2024, the UK AI Safety Institute (AISI) released a comprehensive technical analysis of advanced artificial intelligence (AI) systems to assess their potential for harmful applications, such as cyberattacks and the dissemination of dangerous knowledge. The analysis focused on five large language models (LLMs), evaluating their cyber capabilities, chemical and biological knowledge, capacity for autonomous action, and the effectiveness of built-in safeguards.

The findings revealed that while the models excelled at simple cybersecurity challenges and demonstrated expert-level knowledge of chemistry and biology, they struggled with complex tasks and remained susceptible to producing harmful outputs despite safeguards. The evaluation methodology involved grading responses on compliance, correctness, and completion. Future evaluations will be expanded to cover more sophisticated scenarios.


What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications, and more than 5,750 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.
