To protect or defend anything (a person, a home, a bit of critical infrastructure, a country) it helps to think like an attacker (Martin 2019). You might think that leaving an upstairs window open doesn't increase your risk of burglary, until you lose your keys. Needing to get into your house without a key makes you think like a burglar. If it's pretty easy to get to that upstairs window and climb in – perhaps because you have an extendable ladder in an unlocked shed – you'll think twice about leaving the window open when you're out (and you might also invest in a lock for the shed).
The RBOC Network+ project is all about anticipating what capabilities we will need decades from now to protect against potentially catastrophic attacks. Our timeline is to the year 2050. We don't know what will happen in 30 days, let alone 30 years: as the physicist Niels Bohr is supposed to have said, “prediction is very difficult, especially if it's about the future.” We can, of course, speculate as to what might happen, and many people do (and have). The problem with speculation is that there are just too many possibilities in any context over such a long timeline. When Stanley Kubrick and Arthur C. Clarke made 2001: A Space Odyssey in the late 1960s, they were also trying to imagine a future that was three decades away. But more than fifty years later we still don't have general artificial intelligence like HAL 9000, manned spaceflights to Jupiter, or bases on the moon, even if 2001 was impressively accurate in other respects. Blade Runner, released in 1982 (but based on a 1968 novel by Philip K. Dick), is also set a few decades in the future, and now that we have passed 2019 we can say with some confidence that bioengineers have not, yet, succeeded in creating perfect simulacra of human beings. The problem is not that thinking about the future is too difficult. The problem is that it's too easy.
In the security studies field, putting ourselves into the mind of an attacker can help to discipline futures thinking. This can be incorporated into structured techniques like the Delphi Method (which originated in the US defence sector in the early days of the Cold War “to forecast the effect of technology on warfare”). It can be gamified: chess is a good example of a wargame that requires players to think like their opponents, while game theory (which also originated in 1940s America) enables scenarios involving attackers and defenders to be modelled (as the RAND Corporation attempted to do with Cold War nuclear strategy). And it can be supported with mathematical techniques such as Bayesian probability and predictive modelling.
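To make the Bayesian point concrete, here is a minimal sketch (in Python) of how a single observed indicator might shift a defender's belief that an adversary will possess a particular capability by 2050. The prior, the likelihoods and the indicator itself are invented purely for illustration; they are not estimates produced by, or methods adopted in, the RBOC project.

    # Illustrative sketch only: all numbers below are hypothetical.
    def bayes_update(prior, p_indicator_if_capable, p_indicator_if_not):
        """Return P(capable | indicator observed) via Bayes' theorem."""
        evidence = (p_indicator_if_capable * prior
                    + p_indicator_if_not * (1.0 - prior))
        return (p_indicator_if_capable * prior) / evidence

    # Hypothetical inputs: a 10% prior that the actor will have the
    # capability by 2050, and an indicator five times more likely to be
    # observed if they do than if they don't.
    posterior = bayes_update(prior=0.10,
                             p_indicator_if_capable=0.50,
                             p_indicator_if_not=0.10)
    print(f"Updated belief: {posterior:.2f}")  # roughly 0.36

The same update could, in principle, be applied repeatedly as new indicators emerge, which is one way this kind of reasoning about adversary capability can be made explicit and auditable rather than left as gut feeling.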
In the RBOC project we will be developing scenarios which begin with an attack that causes catastrophic impacts in a major British city (we chose Manchester). The adversary's capability is one input into the scenario, but the point is not to predict but – as in science fiction – to explore. It takes us into the realm of the purely hypothetical, but with the aim of judging potential impacts, and how these may be prevented or mitigated.
We have not – yet – deployed any advanced methods to explore adversary capabilities but may do so as the project develops. As we are still in the early stages, we have created and are refining the scenario using a set of simple questions. For the attacker (or 'threat actor', to use the jargon), we ask: Who is the imagined attacker? Why do they want to attack? What capabilities might they have by 2050?
Answers to these will help us with the main questions: how might the threat actor attack us? What will be the impact of various potential attack methods? What vulnerabilities might they be able to exploit?
Different adversaries have different aims and different capabilities, so knowing who is trying to attack you will present you with some likely aims, and their aims will provide strong clues as to which tools they will use for the job.
For the RBOC scenario, we discounted terrorist groups on the basis that they generally lack either the intention or capability to create a widescale catastrophic event. Although there is a rich repertoire of imagined terrorist catastrophe in popular culture, it remains as true today as in the 1970s that “terrorists want a lot of people watching, not a lot of people dead” (Jenkins 1975). The logic of terrorism is that relatively localised events can have outsize political consequences. Even 9/11, an event on an unprecedented scale, did not stop New York or Washington working. As with most terrorist spectaculars, the attacks themselves were highly localised even though the geopolitical impact was disproportionately vast.
Nor have the wildest predictions of that post-9/11 period come to pass. Terrorists have tried and, in some cases, succeeded in deploying chemical weapons (and even some rudimentary biological weapons) but again these have had a fairly limited impact. Even the worst case – the Tokyo subway attacks by the doomsday cult Aum Shinrikyo – caused far fewer deaths (14) than the tried-and-tested improvised explosive devices used in London in 2005 (52) and Manchester in 2017 (22). Al Qaida were very interested in radiological devices in the 2000s but lacked a credible, effective plan to manufacture and deploy them (Salama & Hansell 2005). That's because creating a viable device is hard and these terrorists, like almost all others, have had much more success with enterprising use of what's at hand (commercial fertiliser, black-market assault rifles, knives bought in DIY stores) than with adopting risky novel technologies. Terrorists wielding nukes in suitcases might make for an exciting TV or movie box set but this is not a realistic scenario for security professionals, even those looking thirty years ahead.
States, on the other hand, are capable of conducting catastrophic attacks on their adversaries, although these generally occur in (or as a prelude to) a state of war. Our brief suggested a peacetime scenario and hence a covertly organised stealth attack that either falls short of an act of war or has an impact greater than the threat actor intended.
The most plausible scenario that fitted this set of conditions was an offensive cyber operation mounted by a well-resourced hostile state. Given the hysteria that often accompanies threat prediction in the cyber domain – with “electronic Pearl Harbours” taking the place of terrorists with nukes (Clarke and Knake 2010) – we recognised that certain conditions would need to be met for such an attack to be genuinely catastrophic. First, the attack would need to be extremely well planned (over a period of years rather than months) and well resourced. Second, the UK city would need to present a high degree of vulnerability to the attacker. Third, whether the attacker realised this or not, the interconnections and dependencies within the city's critical services (energy, health, transport, emergency response etc.) would need to be so complex that the attack would act as a trigger for cascading effects that would culminate in catastrophe.
We also took inspiration from the knowledge that, despite talk of cyberwar, modern warfare does not confine itself to a single domain, but operates across the physical and virtual worlds, using kinetic and intangible methods to achieve military, strategic, political and tactical effects (see Moore 2022). We therefore thought it likely that such a well-planned and well-resourced attack would be accompanied by supporting operations designed to shape the operating environment, both to increase the likelihood of success and to amplify its impact. For this reason, we included a proxy threat actor – an organised crime group that is being directed, or at least influenced, by the hostile state.
Finally, recognising the importance of contingency in historical events, we considered that the likelihood of a successful attack would be greater if it coincided with another event or series of events that put pressure on critical services. That suggested a third type of threat actor – a potentially subversive organisation that exploits the catastrophe to promote itself and its vision. And just to make things even trickier, given the trend towards more frequent extreme weather events due to global heating, we decided that an event such as a major winter storm with heavy snowfall or very high winds would be a plausible backdrop. A severe drought with urban wildfires would be equally likely, but given that this was Manchester, we thought a lot of rain would be more fitting.
References
Clarke, Richard A., and Robert Knake. Cyber War: The Next Threat to National Security and What to Do About It. Ecco, 2010.
Jenkins, Brian. "International terrorism: A balance sheet." Survival 17.4 (1975): 158-164.
Martin, Paul. The Rules of Security: Staying Safe in a Risky World. Oxford University Press, 2019.
Moore, Daniel. Offensive Cyber Operations: Understanding Intangible Warfare. Oxford University Press, 2022.
Salama, Sammy, and Lydia Hansell. "Does intent equal capability? Al-Qaeda and weapons of mass destruction." Nonproliferation Review 12.3 (2005): 615-653.