New analysis has revealed that several of the leading artificial intelligence (AI) models are not fully compliant with European regulations, specifically in areas such as cyber resilience and prevention of discriminatory output. The findings stem from a new testing tool that evaluates generative AI models from major tech companies across multiple criteria contained within the Artificial Intelligence Act (AI Act).
The tool scores AI models on a scale from 0 to 1, assessing factors including technical robustness and safety. A leaderboard of the best-performing AI models shows that models from companies including Alibaba, Anthropic, OpenAI, Meta, and Mistral achieved average scores of 0.75 or higher. The Large Language Model (LLM) Checker also highlighted significant deficiencies in some models, underlining the need for these companies to allocate additional resources to meet compliance requirements.