The National Cyber Security Centre (NCSC) has recently warned about the potential cyber risks that come with using artificial intelligence (AI) large language models (LLMs). The NCSC noted that LLMs such as OpenAI's ChatGPT are being integrated into various products and services for internal and customer use. Many organisations across all sectors are currently exploring the possibility of integrating LLMs into their services or businesses. However, according to the NCSC, LLMs remain something that the academic and the technology communities don't yet fully understand their capabilities, weaknesses, and vulnerabilities.
One significant area of risk that the NCSC highlighted is prompt injection attacks, where attackers manipulate the output of LLMs to launch scams or other cyber attacks. Research suggests that LLMs cannot inherently distinguish between instructions and data that help complete the instructions, which can lead to reputational risk to an organisation.
In a related post, the NCSC explains why cyber security principles are still important when developing or implementing machine learning models.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities helping them stay informed of industry news all in one place. The information here is a brief snippet relating to a single piece of original content or several articles about a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, of which more than 5,750 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.