On Wednesday, 6 November 2024, the Department for Science, Innovation and Technology (DSIT) published a detailed report and announced new support measures for businesses to help develop and use trustworthy artificial intelligence (AI) products and services.
In the report, Peter Kyle, Secretary of State for Science, Innovation and Technology, highlights that AI is pivotal to the government’s strategy for stimulating economic growth, enhancing public service delivery, and improving living standards for working individuals nationwide. Kyle expressed his commitment to fostering AI adoption in a safe and responsible manner, ensuring that the benefits of AI technology are widely distributed.
Central to this effort is the establishment of AI assurance, which encompasses the necessary tools and techniques to assess and communicate the reliability of AI systems, thereby setting clear expectations for companies involved in AI. Robust AI assurance is vital for instilling confidence among consumers, businesses, and regulators regarding the functionality of AI systems. Furthermore, Kyle noted that AI assurance represents a substantial economic activity, akin to the UK’s £4 billion cyber security assurance sector.
Previous government research, published in March 2023, identified 3,170 active UK companies providing AI products and services. The latest research indicates that 524 of these companies operate in the AI assurance space, including 84 that specialise in the area, up from 17 in the 2023 study.
AI assurance providers fall into two main areas:
- The first offers consulting, advisory, and training services or tools to help AI developers and deployers implement effective AI assurance strategies.
- The second focuses on providing technical tools to assess AI systems.
A third group, offering AI accreditation services, could emerge in the future.
A key finding of the latest research is that organisations have a limited understanding of AI risks and of how to address those risks through AI assurance.
To address this shortfall in understanding, the government plans to introduce a roadmap for trusted third-party AI assurance later this year. The roadmap will set out the government's vision for establishing a market of high-quality, trusted AI assurance service providers, including the actions required to achieve that aim, such as establishing professional bodies that provide specialist training and uphold minimum professional standards.
Other initiatives may include the introduction of kitemarks to communicate the trustworthiness of AI technologies.
Alongside the report, the government has launched AI Management Essentials (AIME), a new tool designed to give UK businesses a comprehensive resource for identifying and mitigating the risks associated with AI technologies.
A consultation on the AIME tool is open for public feedback, in particular from start-ups and SMEs that develop and/or use AI systems.
The consultation closes on 29 January 2025.
Additional commentary is available in the Financial Times (£).
Legal analysis of the 26-page report is provided by Pinsent Masons.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content, or to several articles about a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, from which more than 5,750 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.