Nobel calling: how diffusion should drive broader gains from AI breakthroughs.
Inferences from Minerva Technology Policy Advisors. Vol. 34, 15 October 2024

Artificial intelligence doubled up on Nobel Prize wins last week. In lesser-reported news, the UK's Science Secretary launched a new Regulatory Innovation Office (RIO) to fast-track public access to breakthrough technologies.
So what? Both stories feed into an ongoing political debate about how to ensure that innovation becomes useful to a broader swath of society.
First, the UK is attempting to bring breakthroughs to the public faster and more safely, with a new office that encourages investment and cuts red tape. The hope is to promote more widespread, practical adoption of emerging technologies. Then there are the recent Nobel Prize wins…
Who won? DeepMind co-founder Demis Hassabis was part of the team that won the Nobel Prize in Chemistry, recognizing his work on the protein structure prediction model AlphaFold (see our previous Inferences on the policy implications of computational biology here). A second Nobel, this time in Physics, went to the AI researcher Geoffrey Hinton and the physicist John Hopfield for their work on neural networks, the approach to computing that underpins much of the current AI boom.
What’s happening? The prizes underscore the growing importance of the private sector in driving innovation at the cutting edge of artificial intelligence. Google can now celebrate its association with two new Nobel laureates, part of a bigger trend towards the private sector being in the driving seat when it comes to making breakthroughs.
This trend has political — and geopolitical — consequences. Concerns about a “race” to master advanced artificial intelligence are fuelling technological competition between nations, notably the US and China, while the idea that AI is the key to unlocking new frontiers of research is prompting governments to tee up billions of dollars of investment in “sovereign” compute capacity to ensure access to the most advanced capabilities.
Why? As training costs for cutting-edge foundation models have soared, breakthrough research in the field has become concentrated in the hands of a small number of (primarily US-based) technology companies. According to Tortoise, the share of cutting-edge artificial intelligence developed exclusively in academia has fallen sharply in the past decade; collaboration with corporations now accounts for almost all the innovation, with 80 percent of state-of-the-art models developed by profit-seeking companies rather than universities or non-profit research institutes.
Is that good or bad? Some skeptics have already dismissed the awards as the latest manifestation of corporate-fuelled hype around artificial intelligence. But the breakthrough in the quality of predictions of how proteins fold — and how smaller molecules (including, potentially, new drug candidates) are likely to interact with them — is legitimately exciting science, even if practical applications will take time to filter through. And while Geoffrey Hinton himself worries about whether the impact of neural networks will be a positive one for humanity, it’s hard to deny that his ideas have opened up new horizons in basic research.
Inference: As privately backed labs push the limits of technological innovation, debates will intensify about how best to ensure that society as a whole benefits from these investments.
This requires thinking not just about how to pursue more research and development for the public good, but also about the diffusion challenge that we discussed in our recent conversation with Jeffrey Ding: for the benefits of AI to be widely shared, R&D breakthroughs will have to translate into practical, productivity-enhancing technologies that solve real problems for people and businesses.
How? Here’s one idea: enter the UK’s new Regulatory Innovation Office (RIO). It is an experiment in selective deregulation, aimed at cutting red tape and allowing businesses to invest in adopting technologies “from AI in healthcare to emergency delivery drones” that can then diffuse through the economy. Its approach is two-pronged: remove friction caused by bureaucracy, and encourage investment in new tech.
From the RIO to ROI. Speaking at the UK-hosted International Investment Summit this week, Prime Minister Keir Starmer touted the technology sector in the UK as a big draw for foreign capital. Also on stage were Ruth Porat, the president of Alphabet (which owns Google and DeepMind Technologies), and Larry Fink of BlackRock. The RIO’s proposed role in “speeding up approvals, providing regulatory certainty and reducing unnecessary delays” is music to their ears.
If you don’t build it, they will not come. Even if the motivation behind this innovation remains financial, the science still has to get done. Case in point: Google DeepMind built AlphaFold 3 in collaboration with Isomorphic Labs, a drug discovery company spun out of the group in 2021 and headed by Hassabis, the newly minted Nobel laureate. A DeepMind scientist quoted in a Nature write-up when the new software was released was explicit about its aims: “We have to strike a balance between making sure that this is accessible and has the impact in the scientific community as well as not compromising Isomorphic’s ability to pursue commercial drug discovery.”
The upshot? Artificial intelligence is changing the world. It is doing so, first and foremost, through the companies that can afford to pioneer it; by optimizing their performance and maximizing their potential to discover and sell stuff. The next Nobel-worthy solution may be to ensure that these private-sector breakthroughs can be best harnessed for the broader public good.
What we’re reading:
This example of a real and tangible AI risk starting to manifest in the marketplace for AI scams.
This dive into Uber CEO Dara Khosrowshahi’s outlook on the AV market.
This paper on online communications and political bias that raises some interesting questions ahead of the US presidential election.
What we’re looking ahead to:
12 - 14 November: IEEE World Technology Summit on AI Infrastructure.
10 - 11 February 2025: AI Action Summit in Paris, France.
2 - 4 June 2025: AI+ Expo and Ash Carter Exchange in Washington, DC.
2025: G7 Leaders’ Summit will be in Kananaskis, Alberta.