The AI Safety Summit started vital conversations, but key questions remain
By Matt Redley & James Surallie
Artificial Intelligence (AI) is the topic on everyone’s lips, and last week the conversation got even louder at the UK AI Safety Summit. It was a significant moment, bringing together policymakers, experts and governments from around the world to discuss the risks of so-called ‘frontier AI’ - and how to regulate it.
Yesterday afternoon (8 November), the SEC Newgate AI Network, launched in August 2023, put the summit under the microscope to debrief on its successes and shortcomings.
The group brings together corporates, advisers, investment banks, trade associations and policymakers in the UK, and is made up of representatives from almost 60 organisations - and growing.
Regularly hosting roundtables and public panel events, the network focuses on the business impact of the rise of generative AI and other AI-associated technologies. It also liaises directly with advisers in government and the Labour Party, bridging the Westminster/City divide, and keeps leading journalists updated on its activities.
During the roundtable discussion, political manoeuvring ahead of the summit was highlighted as something that risked derailing momentum. Just days before, US President Joe Biden unveiled a new executive order on artificial intelligence. Then, on day one of the summit, US Vice President Kamala Harris announced the establishment of a US AI safety institute.
Science and Technology Secretary Michelle Donelan stressed that the government was unfazed by the US’s announcements, pointing out that the majority of cutting-edge AI companies, such as OpenAI, are based in the US. Yet it is that exact point which potentially hinders the UK’s leadership ambitions: according to the 2023 State of AI Report, the US produced more than 70 percent of the most cited AI papers over the past three years, and US-based companies and universities account for nine of the top 10 AI research institutions.
While the US sought to showcase how it will lead the world on AI regulation, the UK Government arguably achieved a diplomatic coup by demonstrating how it intends to lead the world on AI safety discussions. The first output was the Bletchley Declaration, which saw 28 nations and the EU broadly commit to working together to tackle the existential risks stemming from AI. While this is a positive step in outlining a shared vision for safety and ethical considerations in AI development and deployment, the non-legally-binding document is a statement of intent and a call to action that lacks detail on timelines, delivery and tangible objectives.
The document set out the principles for addressing so-called ‘frontier AI risk’. These include identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of those risks, and deepening that understanding as capabilities continue to increase. They also include building risk-based policies across countries to manage those risks, collaborating while recognising that approaches may differ according to national circumstances and legal frameworks. Finally, the agreement includes promoting increased transparency by private actors developing frontier AI capabilities.
Other agreements included plans for safety testing of frontier models, both pre- and post-deployment, with a role for governments in that testing, particularly where critical national security, safety and societal harms are at stake. The creation of a UK AI Safety Institute was also unveiled, with the ambition of making it a global hub for testing the safety of emerging types of AI; the government has said the institute will “advance the world’s knowledge of AI safety”.
Even though some may argue that the UK is being left behind on AI regulation, with the EU and US now storming ahead, this summit was arguably more about optics than substance. Its main goal appears to have been to start a global conversation about frontier AI models, and on that measure it certainly succeeded.