AI takes centre stage as UK hosts the first global AI safety summit

02/11/2023 | UK Government

Artificial Intelligence (AI) is taking centre stage as the UK hosts the first Global AI Safety Summit at Bletchley Park, the site of Britain's top-secret codebreaking operations during WWII. It is fitting that, in the same week that over 100 world leaders, senior officials, and technology executives gather to discuss AI safety, Collins, publisher of the Collins English Dictionary, has named AI its word of the year. Collins, which defines AI as "the modelling of human mental functions by computer programs," chose the initialism after its usage quadrupled, making it the dominant conversation topic of the year and surpassing other contenders such as greedflation, nepo baby, and deinfluencing.

According to reports, European Commission President Ursula von der Leyen, US Vice President Kamala Harris, UN Secretary-General António Guterres, and Italian Prime Minister Giorgia Meloni are among the most prominent officials attending the event. Additionally, China is sending a tech minister, while Canadian Prime Minister Justin Trudeau is sending his science minister. French President Emmanuel Macron and German Chancellor Olaf Scholz are not planning to attend. The summit is set to receive a boost, however, as Elon Musk is also expected to attend. On Thursday, Musk will join UK Prime Minister Rishi Sunak for a live interview on the social media platform X.

Ahead of the summit, politicians have given speeches about the threats posed by AI to democracy and privacy. 

Secretary of State for Science, Innovation and Technology Michelle Donelan delivered a speech at Guildhall in London on Monday, 30 October. In her address, Donelan highlighted the UK's thriving AI sector, which has become the third-largest in the world despite the country being home to only 1% of the global population. She noted that the UK has seen a 688% increase in AI companies setting up shop there in less than a decade, with UK AI scale-ups raising almost double the funding of those in France, Germany, and the rest of Europe combined. Donelan also emphasised the need to address the risks associated with AI and how the UK intends to tackle them through initiatives such as the Frontier AI Taskforce and the new AI Safety Institute, announced by the Prime Minister last week to lead global efforts to understand and address these risks.

Then, on Tuesday, Minister for Security Tom Tugendhat gave a speech on fraud and AI at the Royal United Services Institute. Tugendhat highlighted how large language models can draft realistic phishing emails that mimic the format used by a recipient's bank.

In the background, also on Tuesday, the UK government announced a £118 million boost to skills funding to future-proof the country's AI skills base. The funding includes postgraduate research centres and scholarships, a new visa scheme, and a push for students to take AI and data courses. The move is aimed at ensuring the UK has a suitably skilled workforce to harness the potential of AI. 

Day one of the summit

During the first day of the summit, US Vice President Kamala Harris delivered a speech (watch a recording of the speech) rejecting the notion of a "false choice" between advancing AI innovation and ensuring public safety. Harris called for action on AI safety, acknowledging the potential existential threats posed by AI while also emphasising that this was a moment of "profound opportunity." She reaffirmed the US's commitment to working with partners to promote better AI safety frameworks globally, stating that this was a chance to "seize the moment."

In a landmark declaration at the end of the first day, the 28 countries convened by the UK reached a world-first agreement to establish a shared understanding of the opportunities and risks posed by frontier AI, along with additional risks around privacy and bias. The Bletchley Declaration on AI safety (read the full policy paper here) fulfils the summit's objectives and calls for a joint global effort to develop and deploy AI in a safe, responsible way.

The declaration aims to identify AI safety risks of shared concern, build a shared scientific and evidence-based understanding of these risks, and sustain that understanding as the technical capabilities of AI continue to advance. It also commits the adopting countries to develop risk-based policies to ensure safety in light of these risks, collaborating where necessary while recognising that the approach taken by individual countries may differ according to national circumstances and legal frameworks. The approach includes a focus on increased transparency by developers of frontier AI capabilities, appropriate evaluation metrics, and safety testing tools, along with developing relevant public sector capability and scientific research.

Earlier in the day, US Secretary of Commerce Gina Raimondo announced the launch of the US Artificial Intelligence Safety Institute (USAISI), which will develop best-in-class standards for safety, security, and testing, and evaluate both known and emerging risks of frontier AI. Raimondo expressed her intention to establish a partnership between the USAISI and the United Kingdom's AI Safety Institute, and she called on the private sector to get involved. According to the Financial Times (£), while UK officials played down the US plans to set up its own institute, one chief executive said the move showed that the US, home to some of the largest technology companies, clearly did not want to cede commercial control to the UK.

Day two of the summit

On day two of the summit, Prime Minister Rishi Sunak held a series of meetings with global leaders and dignitaries. In his meeting with European Commission President Ursula von der Leyen, Sunak discussed the importance of working with the EU to gain a better understanding of the capabilities and risks surrounding frontier AI. In a further meeting with Italian Prime Minister Giorgia Meloni, he discussed the rapid development of AI and the need to manage its risks in order to seize its opportunities. During a third meeting, with UN Secretary-General António Guterres, Sunak welcomed the establishment of the United Nations' AI Advisory Body.

The director of Big Brother Watch, Silkie Carlo, has written an article for The Telegraph (£) (access a free version on BBW's website) in which she argues that while the UK government positions itself as the world's AI safety leader, it misses the mark by ignoring the most urgent risks posed by the technological revolution. Carlo highlights the call by the Minister of State for Crime, Policing and Fire, Chris Philp, for police forces to double their use of AI facial recognition surveillance, raising concerns about the technology being misused or escaping human control. Despite posing real and current threats, AI surveillance appears to have been left off the UK's AI safety agenda.

In the afternoon of the second day of the AI Safety Summit, Prime Minister Rishi Sunak announced the UK AI Safety Institute. The Institute, a new global hub based in the UK, is tasked with testing the safety of emerging types of AI and has received support from leading AI companies and nations. The Frontier AI Taskforce has now transformed into the AI Safety Institute after four months spent building the first team inside a G7 government capable of evaluating the risks of frontier AI models. Ian Hogarth will continue as its Chair, and the Taskforce's External Advisory Board, consisting of industry heavyweights from national security to computer science, will now advise the new global hub. The AI Safety Institute will carefully test new types of frontier AI before and after their release to address the potentially harmful capabilities of AI models, exploring the full range of risks, from social harms such as bias and misinformation to the most unlikely but extreme risks, such as humanity losing control of AI completely. The Institute plans to work closely with the Alan Turing Institute, the national institute for data science and AI, in undertaking this research.

At the end of day two of the summit, several leading AI companies (£) such as OpenAI, Google DeepMind, Anthropic, Amazon, Mistral, Microsoft, and Meta agreed to let governments such as the UK, US, and Singapore test their latest AI models for national security and other risks before releasing them to businesses and consumers. While the "landmark" agreement is not legally binding, the document is a significant achievement. It was also signed by Australia, Canada, the EU, France, Germany, Italy, Japan, and South Korea. China, however, was not a signatory. 

UK Prime Minister Rishi Sunak, the event host, believes that the summit's achievements will tip the balance in favour of humanity. Sunak also said the UK is ahead of any other country in developing the tools and capabilities to keep people safe. When asked whether the UK needed to set out binding regulations, he noted that drafting and enacting legislation takes time.

The Musk Interview

Elon Musk, the CEO of Tesla and SpaceX, was interviewed by Prime Minister Rishi Sunak (watch a recording of the interview) at the AI Safety Summit at Bletchley Park on Thursday. What was originally billed as a live broadcast on X was turned into a recorded conversation by government officials concerned about Musk's erratic temperament.

During the conversation, Sunak played the chat show host, asking Musk about his views on AI technology and the summit. Musk praised Sunak's efforts to regulate AI and called the decision to invite China to the summit "essential." However, he predicted that AI would take all jobs in the future, raising concerns about safety, particularly with humanoid robots. Musk also predicted that people would become friends with their AI-powered machines, which would know them better than they know themselves.

£ - indicates that an article requires a subscription. 

