France, Germany, and Italy have proposed a self-regulatory code of conduct for foundation models under the EU Artificial Intelligence Act (AI Act). The development comes after negotiations broke down over the three countries' objection to a tiered approach to foundation model regulation. The three EU countries have circulated a non-paper offering hope for a compromise, which will be discussed at the Council of the European Union's next Telecommunications and Information Society Working Party on 21 November.
Meanwhile, EURACTIV reports that MEPs discussed governance of the AI Act on Tuesday. A new compromise text has been proposed that would establish an AI Office to oversee enforcement aspects of the law, with EU countries carrying out its tasks. Additionally, an AI Board would be appointed to ensure consistent application of the law.
In related news, the French data protection authority, the CNIL, has issued eight AI how-to sheets on the creation of datasets for the development of artificial intelligence systems. A consultation on the AI database training guides is open for public comment; responses must be submitted by 15 December.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news all in one place. The information here is a brief snippet relating to a single piece of original content or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, from which more than 5,750 summary articles have been posted to the online archive dating back to the beginning of 2020. A weekly roundup is available by email every Friday.