Welcome to the fragmentation age of frontier technology.
Inferences from Minerva Technology Policy Advisors. Vol. 38 - 12 November 2024

The debate over “open-source” artificial intelligence — its risks and virtues — kicked into overdrive last week.
What happened? Meta made its Llama model available to the US military and defense contractors, days after reports surfaced that Chinese scientists had used it to build a military “dialogue and question-answering” tool for the People's Liberation Army (PLA).
The White House also issued its most substantive statement yet on advancing the United States’ leadership in artificial intelligence, specifically for national security.
So what? The three stories highlight the ongoing securitization of AI and the tension between open innovation on the one hand and security-enhancing export controls on the other. It’s a tension that could define a new age of technological fragmentation around the world, as pressure in Washington grows to prevent China and other “adversary” countries from using US technology and know-how to fuel military breakthroughs.
The context: In the two years since the big leap in capabilities that accompanied ChatGPT, serious bureaucratic energy has been expended trying to balance the desire for openness with the need to identify and address potential national security risks of “frontier” AI systems.
As part of this, the relative risks of open-source AI vs more proprietary systems have been the subject of fierce debate (the term “open-source” itself is contested when it comes to AI, and Llama in particular). Read our previous analysis of the debate about constraining powerful “open-source” models here.
The hawks will say… that powerful AI models with freely available weights, protocols and technical guides, whether or not they are definitively “open-source”, could pose a considerable threat to national security if they contain information that could give bad actors or political adversaries the ability to build new cyber, bio, or other weapons.
These risks could grow as future generations of powerful AI models become capable of performing more complex problem-solving tasks. And unlike proprietary AI models, where access to the underlying system is gated (for example, by requiring users to go through a website or an application programming interface that can be switched off if a serious risk presents itself), no such option exists for open models whose details are published all over the internet.
The doves will say… that attempting to restrict access to open-source software is unlikely to be effective on its own, and may also be illegal, since US courts have ruled that open-source code is protected by the First Amendment. It may also be counterproductive: open-source projects have been a driving force behind internet innovation, and have supported the proprietary market in which the United States was able to build its commanding lead in the digital realm. The next wave of innovation and adoption of artificial intelligence will similarly rely on open-source innovation to drive down the cost of use and make it easier for many different types of companies to build and experiment with productive uses of AI. Projection of US influence around the world could increasingly rely on the wide availability of open AI innovation driven by leading US tech firms.
So far, it’s a draw: A report commissioned last year by the White House on the risks posed by AI models whose weights are “widely available” concluded that it was too soon to tell whether open systems pose unique risks that might justify special controls; it recommended refraining from direct restrictions while teeing up more rigorous monitoring regimes.
The new US national security memorandum makes clear that Washington’s main security concern is “frontier” models with “capabilities to aid offensive cyber operations, accelerate development of biological and/or chemical weapons, autonomously carry out malicious behavior, automate development and deployment of other models with such capabilities,” whether they are merely open-weight or even more openly accessible and adaptable.
Still, claims that China is using more open US models to power military chatbots are likely to raise hackles among security hawks who think that existing US controls don’t go far enough.
The broader context: Four years ago, the RISC-V Foundation (now RISC-V International) — which sets standards for an open-source semiconductor architecture designed to compete with more proprietary designs from UK chip designer Arm — moved from the US to Switzerland to get out ahead of the tightening US technology-control dragnet.
Also in lesser-reported news last week: the Linux Foundation — the non-profit that supports the open-source Linux operating system — removed around a dozen kernel maintainers, all with Russian email addresses, after its lawyers advised that US Office of Foreign Assets Control (OFAC) sanctions against Russia meant specially designated nationals from the country could not be included among its kernel maintainers.
This is another instance of the US tech policy toolkit targeting people and know-how, not just the underlying technologies themselves, following a separate set of US rules that restricted “US persons” from helping China produce advanced chips.
So what? US concerns about blurred lines between China’s civilian and military sectors mean it’s not hard to imagine scenarios where the US attempts to cast the dragnet more widely, imposing similar constraints on Chinese contributors, not just to open-source development but also to academic research and collaboration on AI.
Bear in mind: China’s top chip manufacturers have been collaborating for years to harness the open-source RISC-V architecture, as a hedge against dependence on US-based providers in the face of tougher export controls.
Couple that with recent reports that Chinese technology giant Huawei appears to have gained access to highly advanced chips manufactured by TSMC despite US controls, and there are reasons to doubt the efficacy of the White House’s current approach.
The incoming Trump administration is signaling that it will continue the tough approach to China and take an even more aggressive posture on access to advanced technology, with Green Beret combat veteran and noted China hawk Rep. Mike Waltz tipped for the post of national security adviser. Starting with a wider rupture between the US and China on AI, a new era of tech fragmentation could be on the horizon.
What we’re reading:
More on Waltz’s expected appointment.
A recent report from CSIS on the potential damage done to innovation by US semiconductor controls.
This piece from the OECD on the anatomy of a national AI policy.
What we’re looking ahead to:
12 - 14 November: IEEE World Technology Summit on AI Infrastructure.
3 - 4 December: Global Partnership on Artificial Intelligence Summit 2024, Belgrade, Serbia.
10 - 11 February 2025: AI Action Summit in Paris, France.
2 - 4 June 2025: AI+ Expo and Ash Carter Exchange in Washington, DC.