The quantum winds of change are gathering...
Handwaving, or real headway? Recent breakthroughs are changing the prospects for useful computers that rely on qubits.
Governments are carefully monitoring the state of quantum science for signs of breakthroughs (or breakdowns) in encryption technologies, biological computation and much more. Last week may have seen a significant one.
What happened? Google unveiled a quantum processor called Willow. It’s being heralded as a game-changing innovation that can:
reduce the error rate of the qubits being used for computation (qubits are the basic unit of quantum computation, analogous to bits in standard computing); and
perform some calculations much faster than classical computers that rely on transistors.
So what? Willow is a sign of progress, not perfection. By focusing on error correction—the key obstacle to mainstreaming quantum computers—Willow is a step towards systems that could do useful work.
How? By efficiently reducing the noise in quantum systems. Arrays of qubits needed to perform complex calculations are fragile and prone to outside interference. For quantum computers to scale, correcting the errors caused by this interference is key.
With Willow, much of the physical qubits’ workload is still dedicated to error correction rather than useful calculation. However, it represents a significant achievement: it produces a stable logical qubit that could, theoretically, be deployed at scale.
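The redundancy-plus-decoding idea behind error correction can be illustrated with a deliberately simplified classical analogy. The snippet below is a classical repetition code with majority-vote decoding, offered only as an intuition pump; it is not how Willow’s quantum error correction actually works.

```python
from collections import Counter

def majority_vote(bits):
    """Decode a repetition-coded bit: the most common value wins."""
    return Counter(bits).most_common(1)[0][0]

# A logical 0 is stored as five redundant copies; noise flips two of
# them, but the majority vote still recovers the original value.
received = [0, 1, 0, 1, 0]
print(majority_vote(received))  # → 0
```

The quantum version is far harder because qubits cannot simply be copied and read out without disturbing them, which is why so many physical qubits are consumed by the correction process.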
Interesting in theory. The idea is that qubits with high fault tolerance—logical qubits—can be deployed in larger and larger numbers, supported by an even greater number of physical qubits. Those logical qubits could also interact to create a system capable of very advanced computing at high speeds. There’s justified excitement about this prospect.
Challenging in practice. As the number of logical qubits increases, so will the physical size of the system, assuming the number of physical qubits required to conduct error correction remains in the same proportion.
Quantum computers are delicate, fine-tuned machines that require very specific (often supercooled) conditions. Housing many of them (like IBM’s Quantum System One below) in a secure environment and keeping systems stable long enough to perform complex calculations remains a serious challenge.
In the picture (which we took during a trip to IBM’s customer showroom floor in London) the quantum processor itself is a thumb-sized chip attached to the bottom of the rig, barely visible. The rest of the machine, which is as tall as a person, is essentially cooling apparatus.
A usefully large, fault-tolerant quantum computer built on the same basic architecture might occupy a skyscraper-sized building.
Beyond these challenges of physical architecture, there’s a question of timing. It has taken nearly a decade to go from a dozen logical qubits to 100. Meanwhile, estimates of the number required for a “useful” quantum machine are around 1 million.
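The scale of the problem is easy to sketch. The 1,000:1 physical-to-logical overhead below is an illustrative assumption for surface-code-style error correction, not a figure from Google’s announcement; the 1 million logical qubit target is the estimate cited above.

```python
# Back-of-envelope scaling sketch. The ~1,000 physical qubits per
# logical qubit is an illustrative assumption, not a published spec.
physical_per_logical = 1_000
logical_needed = 1_000_000  # rough estimate for a "useful" machine
total_physical = physical_per_logical * logical_needed
print(f"{total_physical:,} physical qubits")  # → 1,000,000,000 physical qubits
```

A billion physical qubits, under that assumption, is why the skyscraper comparison above is not entirely facetious.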
Another issue: quantum error correction is fickle, and the worst-performing qubit matters more than the average performance of all the physical qubits in reducing noise. Interference from qubits that do not pull their weight can easily drag down the whole quantum computer’s performance.
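A toy probability calculation shows why a single laggard dominates: if an operation only succeeds when every physical qubit behaves, the overall success probability is the product of the individual ones. The 99.9% and 90% reliability figures below are illustrative assumptions, not measured values, and this is arithmetic, not a quantum simulation.

```python
# All 49 qubits at 99.9% reliability vs. 48 good qubits plus one at 90%.
# Success requires every qubit to behave, so probabilities multiply.
baseline = 0.999 ** 49
with_bad_qubit = (0.999 ** 48) * 0.90
print(round(baseline, 3))        # → 0.952
print(round(with_bad_qubit, 3))  # → 0.858
```

One qubit dropping from 99.9% to 90% costs more than all 48 others combined.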
One squillion years later. Benchmarking against classical computers—a standard part of how the quantum industry markets itself to the world—is also questionable. Google claims that “Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10^25) years.”
But at present, quantum computers and classical computers do not run programs or algorithms that are clearly cross-comparable, or even cross-interpretable.
The work done by a quantum system is hard to express in classical terms, and the benchmarking algorithm in question is not, itself, a useful one.
Why does any of this matter? Cryptographic systems relying on classical encryption techniques have long been threatened by the advent of quantum decryption; breakthroughs like Willow will put more attention on the issue.
The US government is already in the early stages of a push towards quantum-safe encryption, based on algorithms designed to resist attack even by powerful quantum computers. The US standards agency NIST announced post-quantum encryption standards earlier this year, for example.
Decryptageddon. But new quantum-safe encryption techniques will only protect new data as it is created. What about data secured by classical encryption that has already been harvested from the internet or stolen by hackers? Governments have long been aware of the risk that, when quantum computing breaks industry-standard encryption, it will spark a “decryptageddon” as encrypted datasets currently sitting in storage become readable to allies and adversaries alike.
The resulting geopolitical dynamics. Decryptageddon would be a privacy disaster, but it would also have geopolitical implications. Here are some of the possible destabilizing effects:
Adversary nations gain access to new information that allows them to identify operatives, launch blackmail campaigns and otherwise compromise their rivals;
Sensitive information revealed through quantum decryption compromises the safety and reliability of critical energy and transport infrastructure, leading to an elevated risk of sabotage; and
Domestic risks emerge from the publication of corporate information related to public health, climate change or other sensitive issues, causing civil unrest. We think of this as citizens en masse learning how the sausage is really made.
Quantum surprise. Another, less obvious risk vector may come from secret breakthroughs that take others by surprise, or the mere suspicion of secret breakthroughs prompting pre-emptive action. Innovations like Willow will push some national security officials and policymakers to feel that the moment of quantum singularity surprise is drawing closer, even as the transition of military and other security-sensitive technology onto quantum-safe protocols remains incomplete.
What’s the policy fallout? As the US prepares further controls on exports of advanced chips useful for artificial intelligence workloads, limiting their availability to countries that may be supplying China and other adversaries, geopolitical tensions over frontier technology show no signs of easing.
Controls, controls, controls. Expect a programme of controls developed specifically for quantum technologies, in addition to the Commerce rule from earlier this year, targeting equipment for building and maintaining quantum computers, cooling systems, “additive manufacturing items” that produce the complex metal alloys needed in their construction, and the software needed to make them useful.
And spies. A heyday of quantum espionage may already be under way, as governments scramble to understand where the frontier of the technology actually lies.
The upshot? Quantum has a long way to go before it becomes a generalizable, transformative technology. Still, researchers inside large corporations are making progress. The relevant risks don’t begin or end with decryption. Major security concerns and huge economic boons could both flow from faster computing using quantum systems that unlock new scientific progress in areas like biology, applied physics and space exploration.
Nations with novel capabilities in these fields, much like those in cutting-edge artificial intelligence, could harness technologies that manipulate genetic information, or simulate real environments in real-time. The geopolitical implications of such quantum advances are potentially staggering.
What we’re reading:
Reports on a “compositional interpretability” technique that could allow quantum computers to explain generative AI models.
More on the US-centric investment package announced by SoftBank.
Views from Cathrin Schaer on whether Europe’s laggard position in the AI race could prove to be an advantage in the long run.
What we’re looking ahead to:
6 - 7 February 2025: The Inaugural Conference of the International Association for Safe and Ethical AI, Paris, France.
10 - 11 February 2025: AI Action Summit in Paris, France.
11 - 13 February 2025: World Governments Summit 2025, Dubai, United Arab Emirates.
12 February 2025: Chief AI Officer Summit UK, London.
April 2025 (expected): G7 Digital Ministerial, Canada.
2 - 4 June 2025: AI+ Expo and Ash Carter Exchange in Washington, DC.
9 - 11 July 2025: AI for Good Global Summit.