The post-generative paradigm—“natural” systems and the future of AI.
If current large language model engineering hits a plateau, as some scientists suspect it will, there are already alternative architectures waiting in the wings.

What’s happening? In an industry preoccupied with generative models, it’s worth noting that artificial intelligence is a generalisable phenomenon: it emerges from a particular computing substrate, and in many ways the hardware defines the system.
Recent events and statements by top scientists signal a paradigm shift in the way frontier systems are conceived and built.
So what? At the moment, the predominant substrate is classical: it relies on ever-improving hardware (GPUs), large amounts of energy to run that hardware, and vast quantities of (mostly) text-based data annotated by humans.
This status quo may not hold for long. Some leading scientists, including Meta’s Chief AI Scientist, Yann LeCun, question whether current foundation models genuinely demonstrate intelligence, especially on tasks for which they have not been intensively trained.
LeCun argues that intelligence is not text-based, but depends on a kind of “natural” data that classical systems built on huge clusters of GPUs can never ingest.
What next? There are other substrates from which artificial intelligence could be generated. Some of them involve quantum mechanics.
In recent weeks Quantinuum, the world’s largest integrated quantum computing company, has demonstrated that data generated by its H2 quantum computer can be used to train AI models. It has also shown that quantum computers can achieve outcomes similar to those of existing classical computers while using dramatically less energy overall.
Context: As LeCun and others have indicated, serious questions remain about whether the current generation of classical models will ever be able to reason, or whether they merely perform a kind of pattern recognition that is, in some contexts, detailed enough to imitate reasoning. Even so, there is inertia in the innovation machine.
Why? Deepening geopolitical tensions, competitive dynamics, and the huge costs already sunk into the current approach of scaling probabilistic models. Yet for some complex problems, there is no evidence that probabilistic pattern recognition will ever be enough.
So what? There are problems that classical computers cannot tractably approach, and classical machines do not natively represent the “natural” processes which underlie challenges in healthcare, financial market analysis, logistics, and energy systems.
Quantinuum argues that its quantum-computing-based systems will. More broadly, it may be through quantum systems that machines gain access to forms of “natural” data, and eventually, knowledge.
How? By leveraging the unique precision of quantum-generated data. From the workings of the brain to the dynamics of weather events, this precision could enable the computation of natural complexity. Initial research has shown that “shape, color, size, and position can be learned by machines” from quantum data, going beyond tokenizable, mostly text-based data. Here lies the frontier of scientific problem-solving.
More context: Quantinuum has partnered with SoftBank (in a move with no publicly announced link to the Stargate project) with the aim of overcoming the limitations of classical artificial intelligence and realizing a new generation of technologies, built around a “quantum data centre”, that can tackle these problems on their own terms.
What’s new about this? Part of the breakthrough is in the generation of meaningful synthetic data—and more broadly in using quantum computers to unlock data forms that classical computers cannot access or generate.
Quantinuum plans to revolutionize a range of scientific approaches by infusing traditional analysis with quantum-generated synthetic data and simulation capabilities. It aims to develop quantum-ready data centers as the substrate for a new type of pre-training and inference capability, and to build out use cases that can be validated across a range of sectors, from telecommunications to chemical engineering and drug discovery.
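To make the general pattern concrete, the sketch below shows one way measurement outcomes from a quantum circuit could serve as synthetic training data for a classical model. It is a minimal illustration, not Quantinuum’s pipeline: the circuit, the labelling scheme, and the choice of the open-source Qiskit and scikit-learn libraries are assumptions made here for clarity.

```python
# Minimal, purely illustrative sketch of "training a classical model on
# quantum-generated data". This is NOT Quantinuum's method; the circuit,
# labels, and libraries (Qiskit, scikit-learn, NumPy) are assumptions.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.primitives import StatevectorSampler  # simulator-backed sampler
from sklearn.linear_model import LogisticRegression


def sample_bitstrings(theta: float, shots: int = 256) -> np.ndarray:
    """Sample measurement outcomes from a small entangling circuit.

    Returns an array of shape (shots, 3) of 0/1 features, one row per shot.
    """
    qc = QuantumCircuit(3)
    qc.ry(theta, 0)   # rotation angle sets the output distribution
    qc.cx(0, 1)       # entangle qubits so outcomes are correlated
    qc.cx(1, 2)
    qc.measure_all()
    counts = StatevectorSampler().run([qc], shots=shots).result()[0].data.meas.get_counts()
    rows = [[int(b) for b in bits] for bits, n in counts.items() for _ in range(n)]
    return np.array(rows)


# Treat samples from two differently parameterised circuits as two classes of
# synthetic data, then fit an ordinary classical model on them.
X0, X1 = sample_bitstrings(theta=0.3), sample_bitstrings(theta=2.5)
X = np.vstack([X0, X1])
y = np.array([0] * len(X0) + [1] * len(X1))
clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In practice the local sampler would be replaced by runs on actual quantum hardware and the downstream model would be far richer; the point is only that measurement statistics, rather than tokenized text, become the training signal.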
What does it mean for the broader foundation model market? The market is still reverberating from the release of DeepSeek’s R1 reasoning model, and from the swing back to closed models for very specific tasks (see the interfaces, both named “Deep Research”, from OpenAI and Google).
R1 showed that techniques which reduce the cost of running foundation models are much needed, and will be cheered by the market in the long run. Other market leaders are showing that specificity and non-zero-shot learning might have some mileage left in this paradigm.
Classical models are already capable of processing commands to reveal scientifically relevant information that would otherwise have remained hidden, but at present many applications are flawed or useless. There are pirates, opportunists, and snake-oil merchants along the path to more useful classical models.
They threaten to puncture a fast-inflating bubble of confidence that ever-bigger clusters of cutting-edge pre-training hardware will deliver more useful classical models.
The upshot? A shift towards quantum substrates could be a boon for countries like Australia and Canada, which punch above their weight in fundamental quantum science, and, of course, for those with capital markets deep enough to support both incumbent firms and startups in building quantum machines.
A first-order challenge in all ecosystems is building infrastructure that can support a new paradigm. For the US, signs of a shift could reinvigorate debates about the need for export controls on quantum-enabling hardware as well as on “frontier” artificial intelligence systems.
The science of artificial intelligence is revealing a number of alternative paradigms that do not gel with the current approach of scaling through sheer volumes of energy, chips, and human effort. Quantum systems may reveal a paradigm that is deeper, rather than simply faster and larger.