The strategic fallout from CrowdStrike’s error. Plus: AI governance in Africa
Inferences from Minerva Technology Policy Advisors. Vol. 23 - 23 July 2024
A global outage, affecting 8.5 million computers, crippled important networks in hospitals, banks, airports and many other businesses over the weekend. It wasn’t caused by a security breach, or a malicious attack; it was caused by bad code.
In case you hadn’t read (or weren’t affected)… what happened? Windows computers all over the world suffered system outages after CrowdStrike — an American cybersecurity company that provides threat intelligence and endpoint security — pushed out a flawed software update.
The result was “blue screen of death” failures on devices running infrastructure and businesses; travel and healthcare disruptions; and a serious wake-up call for countries focused on boosting their cyber resilience.
So what? An incident that directly affected fewer than 1 percent of all Windows computers worldwide caused cascading failures that disrupted commerce, travel, and essential services. It’s the kind of incident that could become more disastrous as a greater share of the global economy relies on complex, interconnected, cloud-based IT services.
Flashback: In 2017, weaponized code released by Russian hackers spread rapidly beyond its intended target, crippling IT systems globally and causing tens of billions of dollars in damage. It wiped out the entire IT system of Maersk — the global shipping giant — which had to coordinate movements of ships with phones, pen, and paper for days afterwards.
What’s different this time? The CrowdStrike failure was not a result of hacking by bad actors, but of testing failures. Trusted actors can make mistakes if their own protocols fail, and the knock-on effects of security-privileged updates can be more disruptive than any hack. Of course, bad actors are capitalizing on the chaos; reports have surfaced of hackers sending emails dangling promises to remediate failed systems in order to trick people into installing malware.
Inference: Compounding, real-world issues can emerge from cascading digital ones; this is the reality that national governments must reckon with going forward as the integration of technology into our daily lives continues apace, especially if more sophisticated hacks can take advantage of instability caused by other failures.
What next? Governments need to consider whether the benefits of integration into global-scale digital networks, relying on hyperscalers and their specialist security providers, are worth the inherent risk of mass outages and failures. The answer, almost certainly, is yes. The economic benefits of every company in the world having access to scalable digital infrastructure, maintained by companies that can afford to invest nation-state-level resources into cybersecurity, very likely outweigh the negative consequences of putting a lot of the economy’s eggs in a few hyperscalers’ baskets.
A spokesperson from Microsoft said the company had agreed, in a 2009 antitrust dispute with the EU Commission, to grant third-party developers access to the fundamental programming layers of its operating system. In doing so, Microsoft argues, it increased Windows’ vulnerability to the deployment of faulty code by security vendors like CrowdStrike, over whose testing it lacks sufficient controls to guarantee against failures.
In the aftermath of this latest crash, governments will likely revisit whether their policies are creating the right incentives for firms to invest in the rules and resilience needed to manage these types of disruptions when they arise.
As external pressures on societies mount, in the form of extreme weather events and wildfires, the potential for conflict with China or Russian-backed cyber-militias, and the possibility of future pandemics that may prove more serious and lethal than Covid-19, digital resilience will become even more vital.
MEANWHILE… The African Observatory on Responsible AI, a leading policy and open research body, made recommendations for an “outcome-based” approach to regulation of artificial intelligence on the continent.
Last week, the group also announced receipt of a grant from the International Development Research Centre of Canada and the UK Foreign, Commonwealth & Development Office to continue its work, including certifying policy-makers and other stakeholders in ethics and human rights.
So what? The direction that African countries take in terms of governance around artificial intelligence is going to attract more attention in the global AI policy conversation in coming years.
How? Beyond the obvious point that African countries will be home to the world’s youngest, fastest-growing populations, the continent may also have a huge opportunity if it can figure out how to tap the power of AI tools in sector-based applications like farming, industrial processing, transportation and communications as it aims to close the digital development gap. The Observatory’s recommendations report from earlier this year gives an indication of its policy trajectory.
What does it say? In short: focus on outcomes, and allow sector-based experts to inform regulation. This is an approach more closely aligned with the US, UK, and Japan than it is with the more prescriptive, top-down approach to AI in Europe.
It proposes closing the “AI governance deficit” — that is, helping policy-makers catch up with the fast-changing and complex landscape of AI applications — by empowering existing regulators to understand where artificial intelligence might affect their remit. It also calls for industry experts to have a say in which outcomes should be avoided, or aimed at.
Taking a sector-based approach that includes industry input may rankle some in civil society who prefer a more comprehensive, horizontal, rules-based approach to AI governance. But it reflects a degree of realism about the constraints facing governments on the continent as they attempt to set up guardrails around the use of AI, while also helping the millions of people across Africa who lack reliable, high-speed internet access to plug into essential services.
What we’re reading:
Early appraisals of what a second Trump Administration might mean for AI policy in the US.
Reports from POLITICO of a possible UN AI Forum to rule all AI forums…and predictable pushback on the idea from G7 policymakers.
Richard Windsor’s bullets on SoftBank and Graphcore’s survival strategy.
A great thread on why the business of generative AI hasn’t actually been booming.
What we’re looking ahead to:
11 - 12 September: AI For Defense Summit, in Washington DC.
22 - 23 September: UN Summit of the Future.
23 - 24 September: Intel Vision 2024.
6 - 7 November: FT Live: Future of AI.
10 - 11 February 2025: AI Action Summit in Paris, France.