From ‘safety’ to ‘science and solutions’… AI policy is pivoting.
Inferences from Minerva Technology Policy Advisors. Vol. 16 - 3 June 2024
Back in Washington for the start of June, after a busy month that included trips to tech conferences in Los Angeles and San Francisco, and to the B7 summit in Rome.
A lot has been going on in the meantime: South Korea hosted the AI Seoul Summit, a low-key, partly virtual, and less widely attended sequel to the inaugural UK AI Safety Summit at Bletchley Park; and French President Emmanuel Macron fixed the date and a notional agenda for the next big meeting in Paris early next year.
So what? Connecting the dots, it’s clear that the global AI policy conversation is undergoing a pivot. It is expanding beyond the safety focus that has dominated diplomatic discussions for the past two years to include how policymakers can support faster adoption, and to incorporate other topics that will be important for balancing AI’s opportunities and risks.
However, there are still some big holes in the agenda, with a broad cross-section of companies and Global South countries, including China, still underrepresented in the conversation.
What happened in Seoul? The leaders of Australia, Canada, the EU, France, Germany, Italy, Japan, South Korea, Singapore, the UK, and the US issued a declaration supporting ongoing work on AI “safety, innovation, and inclusivity.” Digital ministers from a wider group of countries, which included Chile, India, Indonesia, Israel, Kenya, Mexico, the Netherlands, Nigeria, New Zealand, the Philippines, Rwanda, Saudi Arabia, Spain, Switzerland, Turkey, Ukraine, and the UAE in addition to the usual suspects, issued a longer and more detailed statement fleshing out some concrete areas for cooperation.
Sixteen companies that build artificial intelligence systems also endorsed a series of voluntary safety commitments for “frontier AI” originally conceived at the meeting in Bletchley Park last November.
Why does it matter? Continuing a trend from the Bletchley Park meeting, the discussion on AI safety took up most, but not all, of the official statements released at the summit. One particular focus was on building and expanding the work of new national AI safety institutes that are being formed in the US, UK, and elsewhere, and the need for interoperable approaches to AI regulation.
But the Seoul summit communiqué also contained some hints at how the AI policy conversation is evolving beyond safety, and pivoting towards science-based standards and solutions. Other topics that made it into the document included:
Energy and the environment. Digital ministers who met in Seoul gave a short nod to upstream energy consumption issues, which we previously discussed in Inferences. In the ministerial declaration’s section on “innovation,” they noted that developers should “take into consideration…[the] energy and resource consumption” of AI systems. That’s an understatement, given the need to find new sources of energy that can support the widespread adoption of advanced AI without making it even harder to hit climate targets. Yet its inclusion shows how this geopolitically fraught and difficult-to-resolve issue hanging over the future of AI is attracting growing attention on the global stage.
Data. References to data were peppered throughout the Seoul leaders’ statement and the ministerial declaration, after the subject got only a passing mention in the Bletchley Park communiqué. Participants highlighted the need for datasets that can aid AI safety efforts, and the need to factor the use of personal data, copyright, and intellectual property into governance frameworks. With major legal fights brewing over AI developers’ use of copyrighted or otherwise protected information to train AI systems, the latter topic will attract growing attention in capitals and boardrooms. The strategic importance of data is also likely to move up the agenda as governments begin to realize how unique datasets can give companies a competitive edge at a time when most of the world’s freely available information has already been digested by large language models.
What was missing in Seoul? China. Beijing participated in a portion of the original Bletchley Park process in November, but was absent from the Seoul declaration and ministerial statement. US and Chinese officials recently met behind closed doors for bilateral discussions about AI safety in Geneva. The Chinese AI company Zhipu.ai, backed by the Chinese tech giants Alibaba and Tencent and, more recently, Saudi Arabia, was also among the private sector firms that signed on to the voluntary set of “frontier AI” safety commitments in Seoul. Still, the lack of Chinese government endorsement of the meeting’s outcomes was notable. It suggests that the world’s leading economies are struggling to understand how or where China fits into the wider global conversation. Given China’s key role as the only other country on Earth that has produced a tech ecosystem similar to Silicon Valley’s, its absence leaves a major hole in the global safety debate.
We’ll always have Paris. The six-month check-in on the Bletchley process in Seoul was always destined to be a sideshow to the “AI Action Summit,” now scheduled to take place in Paris on the 10th and 11th of February next year.
In a virtual address at the Seoul summit, Macron said the Paris meeting would center on “science, solutions, and standards.” The lack of an explicit mention of “safety” was striking, but fits with what we had been hearing about Macron’s desire to address a broader array of AI policy issues, including policies to speed adoption of beneficial uses of the technology.
In another speech to a group of AI leaders, Macron emphasized an even broader national agenda on AI, focused on: cultivating talent; ensuring adequate energy, computing, and cloud infrastructure to take advantage of AI (with France’s nuclear power industry highlighted as a key asset); strengthening investment in AI technologies and startups; encouraging innovative uses of AI, including a focus on education and workforce skills; and governance.
This is a much more expansive view of AI policy that will likely find a receptive audience, not just in Silicon Valley boardrooms, but also among Global South countries, which remain more focused on finding ways to gain access to AI and other cutting-edge digital capabilities than on the risk agenda that has dominated global discussions to date.
What we’re reading:
Matt Perault and Bruce Mehlman on how AI is creating strange bedfellows in the world of Washington DC technology power politics.
An analysis of what the law says about voice actors and generative AI, following the OpenAI-Scarlett Johansson brouhaha.
The French Artificial Intelligence Commission’s ambition statement.
What we’re looking ahead to:
13 - 15 June: Summit of G7 leaders in Borgo Egnazia.
15 - 18 July: IEEE International Conference on Artificial Intelligence Testing in Shanghai, China.
22 July: Priorities for AI policy and regulation in the UK Forum.
22 - 23 September: UN Summit of the Future.
10 - 11 February 2025: AI Action Summit in Paris, France.