EU to issue first assessment of high-risk AI products linked to RED

05/07/2024 | EURACTIV

According to a European Commission document obtained by EURACTIV, artificial intelligence (AI)-based cybersecurity and emergency services components in internet-connected devices are expected to be categorised as high-risk under the Artificial Intelligence Act (AI Act).

Under the AI Act, high-risk AI systems must, in addition to complying with applicable sectoral legislation, undergo rigorous testing, risk management, security measures, and documentation. For an AI system to be designated as high-risk, it must satisfy two criteria: first, the system or AI product must fall under existing legislation, and second, it must undergo a third-party assessment to demonstrate compliance with established rules. The document underscores that AI-enabled components related to cybersecurity and emergency services meet the relevant criteria set out in the 2014 Radio Equipment Directive (RED) and therefore warrant categorisation as high-risk systems.


What is this page?

You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.

The Privacy Newsfeed monitors over 300 global publications, from which more than 4,350 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is sent by email every Friday.
