NIST releases draft guidance and AI safety testing tool

26/07/2024 | NIST

The US National Institute of Standards and Technology (NIST) has released new draft guidance and an open-source AI safety testing platform from the US AI Safety Institute to evaluate and mitigate preventable risks arising from the misuse of generative artificial intelligence (AI) and dual-use foundation models.

In addition, NIST published the final versions of three documents in its AI Risk Management Framework:

  • NIST AI 600-1: AI RMF Generative AI Profile - outlines 12 unique risks posed by Gen AI and includes more than 200 risk management actions;
  • NIST Special Publication (SP) 800-218A: Secure Software Development Practices for Gen AI and Dual-Use Foundation Models - expands on the SSDF to address the risk of Gen AI systems being compromised with malicious training data;
  • NIST AI 100-5: A Plan for Global Engagement on AI Standards - designed to foster worldwide development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.

What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. Each item is a brief snippet relating to a single piece of original content or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications, from which more than 4,350 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.

Freevacy has been shortlisted in the Best Educator category of the PICCASO Privacy Awards, which recognise the people making an outstanding contribution to this dynamic and fast-growing sector.