What happens if the bauble of “artificial exuberance” breaks?
If big bets on more sophisticated compute, data availability, and architecture breakthroughs don’t pay off, it will produce political as well as economic fallout.

Last year, Inferences coined the term “artificial exuberance” to describe the hype that was driving multi-billion-dollar investments in startups using artificial intelligence and triggering policy debates about the existential risk it might pose.
Since then, hopes that intelligent machines will rapidly revolutionize business, science, and the workforce have pushed some stocks to all-time highs.
Companies and investors have shown signs of bubble-like behavior, dangling huge paydays in front of specialized researchers and engineers and making massive infrastructure investments reminiscent of the 1990s telecom boom. The potential bubble has been inflated by capital raised at eye-watering valuations (see here, here, and here).
Recent weeks have been a reality check of sorts. One much-referenced MIT study found that 95 percent of firms using generative systems have yet to see any measurable financial return.
Another survey, by the Census Bureau, showed that growth in artificial intelligence investment by US companies with 250 or more employees has slowed in recent months.
Of course, markets care more about the future than the present, and it takes time for companies and institutions to figure out how to take advantage of revolutionary new technologies. Even so, the prices that investors are paying to own a share in leading artificial intelligence labs look puzzling by traditional Wall Street logic.
Richard Windsor, a sharp observer of tech stocks, argues that these valuations only make sense if investors are betting that one of the leading firms or labs will eventually develop software able to outperform humans across all tasks. It follows that investors are still betting on superintelligence.
If they’re wrong, the hangover could be painful. This is because the US stock market looks like a leveraged bet on a rapid path to artificial general intelligence. Back-of-the-envelope math shows that most of the top companies on the S&P 500 are either building artificial intelligence or selling the technology and services that power it. Together, they account for more than 40 percent of the index’s value.
The real economy is also feeling the impact. In July, Paul Kedrosky, an investor who has lived through multiple Silicon Valley boom-and-bust cycles, estimated that capital investments by artificial intelligence companies were tracking at around 1 to 2 percent of US GDP, up from just 0.1 percent in 2022.
If the exponential assumptions driving this spending turn out to be S-curves instead, an eventual slump in spending on new projects could affect industries ranging from semiconductors and construction to network infrastructure and raw materials such as steel and aluminum.
Bubbles burst. Baubles break, leaving pieces that can be re-used. The telecoms boom in the 1990s — which saw companies massively overinvest in infrastructure to power the mobile revolution — was an example of the latter.
A new winter in the artificial intelligence industry would leave behind a lot of valuable assets, including clusters of high-performance GPUs, networked data centers, and practitioners upskilled in everything from reinforcement learning to data annotation and prompt engineering, all of which could eventually be put to productive use.
Overleveraged or unsustainable companies would fade away, however, leaving a smaller number of stalwarts with the resources and reach to tough it out.
The once-deafening concerns about existential risk and excitement about rapid military breakthroughs based on artificial intelligence would fade. Industry consolidation would accelerate. Companies would refocus on the subset of software applications that are actually delivering economic value in the here and now.
The risk of huge market consolidation by surviving firms would be high, creating a few giants able to rise from the ashes and buy up the useful fragments of the bauble at bargain-basement rates.
The political fallout of a breaking artificial intelligence bauble would be seismic. US policymakers across multiple administrations have been factoring in an exponential trajectory of progress in software engineering since ChatGPT made its debut in late 2022.
The diffusion rule put forward by the Biden administration was based on the idea that the US faced an urgent need to stop authoritarian countries from accessing advanced semiconductors and other resources that would be key to producing superintelligence.
More recently, Commerce Secretary Howard Lutnick has argued that a boom in factory automation would help to offset the impact of the Trump administration’s new tariffs and drive a US manufacturing renaissance. Deep cuts to the federal workforce also appear to be predicated, partly, on a takeover by artificial intelligence systems that would enable government employees to do more with much less.
It’s getting easier to imagine a scenario where investors and policymakers wake up to find that their assumptions were overly optimistic. One in which a shattering bauble breaks the illusion of exponential progress.
There may have been a massive underestimation of the time it takes for new technologies to diffuse across economies, and the cost of that diffusion. If that’s the case, the resulting financial and political headache will be a doozy.
What we’re reading:
Minerva’s own Emily Benson on the trajectory of US-EU tech tensions.
Singapore-based Sapient has engineered a new type of system, the hierarchical reasoning model (HRM), which may solve the “brittle task decomposition” problem that Inferences has covered previously. Here’s the paper on it.
Brian Merchant on the role of the AI bauble (as we put it above) in the wider US economy.
What we’re looking ahead to:
9 - 23 Sep 2025: UN General Assembly (UNGA 80), New York.
22 - 23 Oct 2025: G20 Leaders’ Summit, Johannesburg.
10 - 20 Nov 2025: UN Climate Change Conference (COP30).
February 2026: India Global AI Summit (expected).