The UK government is reportedly planning new legislation to regulate artificial intelligence (AI). According to the Financial Times (£), the legislation would likely include limits on the creation of large language models and is expected to require companies developing sophisticated models to share their algorithms with the government and provide evidence of safety testing.
The news comes after Sarah Cardell, chief executive of the Competition and Markets Authority (CMA), expressed concerns about potential harms associated with AI, including biased algorithms and the creation of harmful materials.
The Department for Science, Innovation and Technology (DSIT) is said to be "developing its thinking" on what the legislation would cover, although it is unclear when it will be introduced.
£ - This article requires a subscription.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs (data protection officers) and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, from which more than 5,750 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.