Modeling Terrorism

The attack of September 11, 2001, showed that terrorism is capable of inflicting damage ($40 billion+) and loss of life (3,000+) that are multiples of those from the worst U.S. natural disasters (Hurricane Andrew at $20 billion+ and 40-60 lives; the Northridge earthquake at $12.5 billion and approximately 25 lives) (Bassett and Schroeder [1998]; Mooney [2001]; Pawlowski [2001]). In the wake of this unprecedented disaster, insurers and reinsurers have been excluding terrorism risk from their offerings, with grave consequences for commercial property owners and lenders (Mooney [2001]). Most insurance analysts and actuaries would agree with Munich Re’s Christian Kluge: “There is no mathematical model for terrorism” (Fromme [2001]). But the need for one is clear.

Terrorism risk shares features with other forms of catastrophe risk, including a time series of historical events, yet goes beyond them with an extra layer of impenetrability. Defensive studies of terrorism risk resemble risk analyses of complex engineering systems (nuclear power plants, satellite launches, etc.): a particular scenario can be analyzed in terms of the probability of failure of critical subsystems. However, unlike natural disasters, terrorism features human intelligence, and unlike industrial disasters, it features human intent. To quantify the risk, much as with the celebrated “three doors” puzzle, probability alone is not enough. Methods from operations research, including game theory and search theory, as well as certain specialized areas of statistics, may well be needed to construct an adequate modeling framework.

An Illustrative Model

To explore the possible application of game theory to the modeling of terrorism risk, let us consider the following simplified model:

  • There is a set of targets indexed by the letter i, numbered from 1 to N. Each target i has a value Vᵢ.
  • An attacker, with total resources Aₜ, must choose a target and how much resource Aᵢ to assign to it.
  • A defender, with total resources Dₜ, must decide how to allocate resources Dᵢ among the targets.
  • The total destruction of target i occurs with probability given by a function p(Vᵢ, Aᵢ, Dᵢ).
  • The attacker wants to maximize, and the defender wants to minimize, the expected loss EL which is given by the formula:

EL = ∑ᵢ Vᵢ∙p(Vᵢ, Aᵢ, Dᵢ)                                     (1)

We can be justified in using expected value as the criterion if we consider the values Vᵢ to represent utilities—and as long as both sides have the same utilities.

In the parlance of game theory, we would say this is a zero-sum game with payoff EL to the attacker. The attacker’s strategy options consist of a choice of target and the assignment of a resource to it.  The defender’s strategy options consist of the simultaneous assignment of resources to all N targets.
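As a concrete rendering of Equation (1), here is a minimal Python sketch. The probability function below is a placeholder chosen only so the example runs; the article’s specific functional form appears later in the numerical example. The vectors V, A, and D are likewise hypothetical.

```python
import numpy as np

def expected_loss(V, A, D, p):
    """Equation (1): EL = sum_i V_i * p(V_i, A_i, D_i)."""
    V, A, D = map(np.asarray, (V, A, D))
    return float(np.sum(V * p(V, A, D)))

# Placeholder success-probability function, for illustration only.
placeholder_p = lambda V, A, D: np.exp(-D) * A / (A + np.sqrt(V))

V = np.array([1.0, 0.5, 0.25])   # hypothetical target values
A = np.array([0.3, 0.0, 0.0])    # attacker concentrates on target 1
D = np.array([2.0, 1.0, 0.0])    # defender spreads resources
print(expected_loss(V, A, D, placeholder_p))   # payoff to the attacker
```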

Before going on to consider specific functional forms for the probability function, is there anything meaningful to be said? Indeed, there is.

First, if the probability of a successful attack goes down with an increase of applied defensive resources (and this ought to be true under any plausible formula for p), then the defender should use all the defense resources. There is no reason to hold back, because a quantity of unused resource could be applied to reduce the success probability and hence the EL of at least one target.

Second, the minimax criterion reveals a solution for the defender. Not knowing how the attacker will choose targets, the defender should choose a strategy that results in the lowest possible worst-case EL, regardless of which target the attacker selects. That implies the resulting EL among (defended) targets will be equal (and the EL among undefended targets will be less). Why? Imagine one target defended in such a way that its EL (if attacked) is greater than another. Then it pays to shift defense resources from the lower-EL target to the higher one, until they have equal EL s (lower than the original high-EL). The more valuable targets should thus be equalized in terms of their expected losses. Less valuable targets may be left undefended, because an attack there, even if successful with 100% certainty, will result in a loss that is less than the EL of the defended targets. Call this equilibrium expected loss EL°.

Third, this leaves the attacker with a set of, say, M defended targets, where the best he can do is achieve that same EL=EL° among any one of them. For the undefended targets, he can only achieve their value V, which is less than EL°. His best strategy (a mixed strategy) is to choose one of the M targets at random—but with what relative probabilities?

If the attacker can observe that the defender has allocated resources in the optimal fashion, then he is truly indifferent as to the choice of target, and the assignment of probabilities is indeterminate. The usual game-theoretic formulation, on the other hand, assumes both sides must make their strategic choices with no knowledge of the other side’s choice. If the defender might not be allocating resources optimally, then the attacker should use probabilities that protect him from doing any worse than EL° on average. This implies that each target’s selection probability should be inversely proportional to the marginal effectiveness of defense (the rate of change of EL with respect to changes in D) at that target. If p varied (approximately) linearly with D, for example, then selection probability would be (approximately) inversely proportional to target value: qᵢ =k/ Vᵢ.

Fourth, given the above, and if we knew what that equilibrium  EL° value was and could work out the pattern of attacker selection probabilities qᵢ, we could then derive the overall probability distribution of loss from an attack. Each of the M defended targets has a qᵢ  probability of being attacked, and each such target i, if attacked, has a probability of losing its value Vᵢ equal to pᵢ = EL°/ Vᵢ.

Note, nowhere did we need to refer to the central limit theorem, independent increments, extreme value theory, or any other such probabilistic assumptions and tools that are often applied to the study of nature or complex processes. The conclusions flowed from our assumptions about human intentions and rational behavior.

A Numerical Example

In this section, we use a particular function p(Vᵢ, Aᵢ, Dᵢ) which allows us to engage in specific computations and carry out a numerical example. The function is:

p(Vᵢ, Aᵢ, Dᵢ) = exp[-AᵢDᵢ/√Vᵢ]∙[Aᵢ²/(Aᵢ²+Vᵢ)]

This consists of two terms multiplied together. The first term represents the probability of a planned attack escaping detection. The second represents the probability of the attack succeeding in its technical execution, given that it goes undetected.
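In Python the function reads as below; this is a direct transcription of the formula, with no numerical safeguards (for example, against V = 0).

```python
import numpy as np

def p_success(V, A, D):
    """Probability of total destruction of a target with value V, attacked with
    resources A and defended with resources D:
    (probability of escaping detection) * (probability of technical success)."""
    escape_detection = np.exp(-A * D / np.sqrt(V))
    technical_success = A**2 / (A**2 + V)
    return escape_detection * technical_success
```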

Note that in the extreme of no defense (D = 0), the probability of escaping detection (represented by the first term) is 100%. The probability of success is then a matter of having assigned attack resources sufficiently greater than the square root of V to bring the second term near 100% as well. An assignment of the full budget Aₜ of attack resources is optimal. However, if there is some defense assigned, this may not be the case. Past a certain point, attack resources become counterproductive because they raise the probability of detection faster than they raise the probability of technical success. This is illustrated in Figure 1 for a target with value of 1.

FIGURE 1. Increasing Attack Resources Improves Success Probability When There Are No Defenses but Can Be Counterproductive in the Face of Defenses

For a given value V and level of defense D, there is an optimal attack resource, call it A°, which may or may not be greater than the attack budget Aₜ. In assigning defense resources, the defender needs to know what that attack budget is in order to compute the equilibrium EL°. However, if the attacker has very large resources, or has the potential for increasing them, it may suffice to assume unlimited attack resources, Aₜ = ∞. In that case, only the optimal attack resource levels A° matter. We may consider this a “conservative” approach to defense resource assignment.
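The optimal attack level A° for a given target value and defense level can be found by a one-dimensional search; the sketch below uses a plain grid search whose bounds and density are arbitrary choices. It reproduces the qualitative behavior of Figure 1: with no defense the optimum sits at the top of the search range, while with defense the optimum is interior.

```python
import numpy as np

def p_success(V, A, D):
    return np.exp(-A * D / np.sqrt(V)) * A**2 / (A**2 + V)

def optimal_attack(V, D, grid=np.geomspace(1e-4, 100.0, 20_000)):
    """Attack level maximizing p_success against one target,
    assuming the attacker's budget does not bind."""
    probs = p_success(V, grid, D)
    k = int(np.argmax(probs))
    return float(grid[k]), float(probs[k])

# With no defense, more attack resource always helps (the optimum sits at the
# top of the search grid); with defense, the optimum is interior because extra
# resources raise the detection risk faster than the technical odds of success.
print(optimal_attack(1.0, 0.0))   # A° at the grid boundary, p near 1
print(optimal_attack(1.0, 5.0))   # interior A° around 0.35, p around 0.02
```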

Consider the following example: There are 20 targets, each of which has value 1.5 times the one preceding it, with the biggest target having a value of 1. The attacker has essentially unlimited resources and the defender has 20 units to be allocated among the targets.

What are the optimal defense and attack strategies, the resulting equilibrium expected loss, and the resulting probability distribution of losses?

Figure 2 depicts the target values as the diagonal line of “*” symbols with dots in between them, with the biggest target being #20 on the right-hand side, having value 1 (topmost value in the vertical scale).

At this point, we need to justify why a pure strategy, and not a mixed strategy, is optimal for defense. In this numerical example, it follows from the convexity of p in D. Any mixed strategy for defense is dominated by its “average” pure strategy; therefore no mixed strategy can be optimal. The optimal defense strategy is depicted in the square boxes connected by solid lines. To fit on this scale, the defense numbers were divided by 10. Thus, the defense allocation for target #20 is really 5.2, not 0.52. Notice that only the most valuable 10 targets (#11-20) are defended. The smaller targets are left with zero defense allocation.
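One way to compute such an allocation numerically, under the conservative assumption of unlimited attack resources: for a trial loss level L, find (by bisection) the defense each target needs so that its EL against the best attack is no more than L; then bisect on L itself until the total defense required matches the budget of 20. The tolerances and search bounds below are arbitrary, and this sketch should reproduce the figures quoted in the text only approximately.

```python
import numpy as np

def p_success(V, A, D):
    return np.exp(-A * D / np.sqrt(V)) * A**2 / (A**2 + V)

A_GRID = np.geomspace(1e-4, 100.0, 5000)   # search grid for the attack level

def el_if_attacked(V, d):
    """V * max_A p(V, A, d): expected loss of a target defended at level d,
    assuming the attacker uses the best attack level against it."""
    return V * float(np.max(p_success(V, A_GRID, d)))

def defense_needed(V, L, d_hi=1000.0, tol=1e-6):
    """Smallest defense bringing the target's EL (if attacked) down to L."""
    if V <= L:
        return 0.0                          # not worth defending
    lo, hi = 0.0, d_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if el_if_attacked(V, mid) > L else (lo, mid)
    return hi

def equalizing_allocation(values, D_total):
    """Bisect on the common loss level EL° so total defense matches D_total."""
    lo, hi = 1e-9, max(values)
    for _ in range(50):
        L = 0.5 * (lo + hi)
        need = sum(defense_needed(V, L) for V in values)
        lo, hi = (L, hi) if need > D_total else (lo, L)
    return hi, [defense_needed(V, hi) for V in values]

values = [1.5 ** (i - 20) for i in range(1, 21)]    # 20 targets, largest = 1
EL_star, defense = equalizing_allocation(values, D_total=20.0)
print(round(EL_star, 4))            # close to the 0.018 quoted in the text
print(sum(d > 0 for d in defense))  # number of defended targets (about 10)
```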

Optimal attack resources are traced by the X’s in the upper right. They are mostly between 0.3 and 0.35, being constrained not by total attack resources but by the need to escape detection by defense.

The resulting equilibrium EL is 0.018, which is maintained for all defended targets. This is shown in the trace of diamond symbols in the lower half of the exhibit. Undefended targets have EL less than the equilibrium EL° because their values, which they can lose with 100% probability, are lower. That is why the EL curve coincides with the values for targets #1-10.

Optimal attack probabilities are approximately proportional to the square root of the value of the target. These are shown in Figure 2 as dashed lines.

FIGURE 2. Target Attack-Defense Example (explained in text)

As outlined previously, for defended targets, the probability of an attack being successful is EL°/V. Combining these results, if an attack is attempted, the probability distribution of losses is as presented in Figure 3. There is an overall 89.3% probability that the attack is not successful. The remaining 10.7% is not spread evenly over the defended target values, however; it is more likely to be a successful attack on a smaller target. The probability of the largest loss (a value of 1 from the largest target) is only 0.36%.
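Continuing from the allocation sketch above (this reuses values, defense, and EL_star computed there), the loss distribution can be assembled directly: attack-selection probabilities taken approximately proportional to √V over the defended targets, success probability EL°/V for each defended target, and the remaining mass at zero loss. The exact percentages quoted in the text should be reproduced only approximately.

```python
import numpy as np

def loss_distribution(values, defense, EL_star):
    """P(loss = 0) and P(loss = V_i) for a single attempted attack."""
    V = np.asarray(values, dtype=float)
    defended = np.asarray(defense) > 0
    q = np.where(defended, np.sqrt(V), 0.0)
    q /= q.sum()                         # attack-selection probabilities
    p_loss = np.where(defended, EL_star / V, 1.0)
    prob_loss_V = q * p_loss             # P(attack hits target i and succeeds)
    return 1.0 - prob_loss_V.sum(), prob_loss_V

prob_no_loss, prob_loss_V = loss_distribution(values, defense, EL_star)
print(prob_no_loss)      # in the vicinity of the ~89% quoted in the text
print(prob_loss_V[-1])   # chance of losing the largest target, roughly 0.4%
```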

FIGURE 3. Probability Distribution of Losses

From Illustration to Usefulness

The previous exposition was intended to give a sense of what kind of reasoning and analysis would be required for a risk model appropriate to the terrorism hazard. In order to make such a model useful for actuarial purposes, an enormous amount of work still needs to be done.

First, the model as presented is really only a severity model. It says, if an attack is attempted, then losses will occur with such and such probability. It says nothing about how often an attack will be attempted, which is the role of a frequency model. Gordon Woo [2002] discusses in depth how the characteristics of a terrorist organization, in particular its organizational structure, influence the frequency of planned attacks.
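To indicate where a frequency model would plug in, here is a minimal frequency-severity simulation sketch. The Poisson frequency assumption, the annual rate, and the three-target severity inputs are placeholders for illustration only; as the text (and Woo [2002]) make clear, attack frequency depends on organizational characteristics this sketch does not capture, and in practice q and p_loss would come from the game-theoretic model above.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def simulate_annual_losses(freq_rate, values, q, p_loss, n_years=50_000):
    """Monte Carlo of annual aggregate loss: a Poisson number of attempted
    attacks per year (placeholder assumption), each attack selecting a target
    with probabilities q and succeeding with probability p_loss for that target."""
    annual = np.zeros(n_years)
    for year, n in enumerate(rng.poisson(freq_rate, size=n_years)):
        for _ in range(n):
            i = rng.choice(len(values), p=q)     # which target is attacked
            if rng.random() < p_loss[i]:         # did the attack succeed?
                annual[year] += values[i]
    return annual

# Hypothetical severity inputs, for illustration only.
values = [1.0, 0.5, 0.25]
q      = [0.5, 0.3, 0.2]
p_loss = [0.02, 0.04, 0.07]
losses = simulate_annual_losses(freq_rate=0.5, values=values, q=q, p_loss=p_loss)
print(losses.mean(), np.quantile(losses, 0.99))
```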

How well does the model represent reality? This is the central concern for any model. While remarkably simple models are often very useful (think of lattice models in finance), the goal is to capture some essential truth about the situation being modeled. Here are some issues that need to be addressed in this regard:

  • Does it make sense to consider defense resources as being assigned to targets? In reality, considerable counterterrorism resources are devoted to intelligence gathering and other non-target-specific activities. This model completely ignores them. Is that acceptable?
  • To be applied, a model needs to have its components operationally defined. Specifically, how are we to measure the value of a target? Dollars are the usual measure of value, but perhaps lives or even media air time are more important to the attacker. Assigning “utilities” is even more problematic. How to measure attack and defense resources? Again, a monetary unit is a candidate, but number of people devoted to the task (“FTE” in human resources jargon) might be as good or better.
  • Where does one acquire a realistic list of targets and values? The major catastrophe modeling firms have inventories of commercial and residential building stock in the U.S., as well as infrastructure information (bridges, tunnels, port facilities, etc.). These huge databases might be able to supply a realistic distribution of target values. An alternative is to estimate the distribution from aggregate statistics, say by a multifractal allocation technique (Lantsman et al. [1999]; Major and Lantsman [2001]). This could avoid a large amount of computational effort.
  • The model’s defense resource constraint must be interpreted as total societal resources devoted to guarding and protecting valuable properties. This goes well beyond what can be read in the newspaper as the latest appropriation from Congress for military homeland defense, because it includes state and local civilian police forces as well as private-sector security resources.
  • An attack budget is the most speculative item, requiring, for greatest accuracy, information that might be classified and therefore unavailable to the private-sector analyst.
  • What if the attacker and defender do not have the same utilities? Then we are out of the realm of zero-sum games and need to look to the Nash equilibrium for a solution. This could complicate the analysis considerably.
  • What if the optimal defense cannot be executed perfectly, or even approximately? What is the attacker’s optimal strategy if defense weaknesses are known? If they can only be known after resources are expended searching for them? Analytical approaches to these questions could become quite complex. Paul Kleindorfer suggested an intelligent-agent or cellular-automaton model where attackers move through a grid seeking out attack opportunities. Such an approach could take us out of the realm of analytical models altogether and into full-blown simulation. Weaver et al. [2001] discuss a highly detailed simulation approach that incorporates realistic terrorist scenarios and the use of non-zero-sum game theory.
  • The model considers one attack at a time, applying the attacker resource constraint to each attempt. In reality, the attacker’s resources are dynamic, being acquired and expended over time. Even in this simple model, it often turns out to be to the attacker’s advantage to launch two less-well-funded attacks simultaneously rather than one optimally funded attack (a numeric check appears in the sketch following this list). This expands the attacker’s options and complicates the analysis.
  • Given a believable structure to the model, how is one to fit parameters? Clearly, analysis of historical data will play a role. Even more so than in hurricane and earthquake studies, however, the need for keen insight and understanding (read: expertise), guiding the selection and adjustment of data, is acute.
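To illustrate the point about simultaneous attacks: with the probability function used earlier, a hypothetical attacker with a budget of 1 facing two identical targets (value 1, defense 2 each) does better splitting the budget across both than concentrating an optimally sized single attack. The target values, defense levels, and budget here are made up purely for the demonstration.

```python
import numpy as np

def p_success(V, A, D):
    return np.exp(-A * D / np.sqrt(V)) * A**2 / (A**2 + V)

V, D, budget = 1.0, 2.0, 1.0                 # hypothetical numbers
A_grid = np.linspace(1e-4, budget, 10_000)   # attack levels affordable within the budget

# Best single attack, using the best attack level affordable within the budget.
best_single = float(np.max(V * p_success(V, A_grid, D)))

# Two simultaneous half-budget attacks on the two identical targets.
two_attacks = 2 * V * p_success(V, budget / 2, D)

print(best_single)   # ~0.08
print(two_attacks)   # ~0.15, nearly double the concentrated attack
```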

Even more so than in the case of hurricane and earthquake modeling, it cannot be overemphasized that building a model such as the one outlined above is an exercise in futility at best (and self-delusion at worst) without adequate input from terrorism experts. As Woo [2002] puts it, “Any probabilistic framework for quantifying terrorism risk, however logically designed, will ultimately have to involve a measure of expert judgment.”

Conclusion

We saw how terrorism risk differs in kind from natural and man-made (accidental) catastrophes because of the elements of intelligence and intent. As a consequence, in modeling terrorism risk, probability alone is not enough. Analysis techniques borrowed from wartime operations research, especially game theory, are at least as valuable as the trusted standbys of convolution and Poisson distributions.

A highly simplified model was presented, revealing some counterintuitive maxims about defending targets, and showing a direct, if perhaps surprising, route to the probability distribution of losses.

Numerous issues standing between the illustrative model and a truly usable terrorism risk model were outlined. Access to terrorism expertise is a crucial ingredient. Despite the drastic simplification involved, a model in the spirit of the one presented here has the potential to offer useful insights to the insurance profession.

References

Bassett, D., and A. Schroeder. (1998). The Climate Canary Report. U.S. Department of Energy. (Available at http://www.supramics.com/climate/statement.html)
Fromme, H. (2001). “Munich Re in Favour of German Terror Pool.” Alexander Forbes Group. (Available at http://www.alexanderforbes.co.uk/afuknews/PressArchive/10182001MunichReInFavourOfGermanTerrorPool.htm)
Lantsman, Y., J.A. Major, and J.J. Mangano. (1999). “On the Multifractal Distribution of Insured Property.” Guy Carpenter & Company, Inc. (To appear in Fractals, World Scientific Publishers.)
Major, J.A., and Y. Lantsman. (2001). “Actuarial Applications of Multifractal Modeling Part I: Introduction and Spatial Applications.” CAS Forum, Casualty Actuarial Society, Winter.
Mooney, S. (2001). “Are Terrorism Risks Really Uninsurable?” National Underwriter, October 22, 2001. (Available at http://www.guycarp.com/publications/nu/2001/1022.html)
Pawlowski, D. (2001). “Life Insurance Industry Loss Estimates from Sept. 11 Events.” Fitch IBCA. (Available at http://www.knowledgedigest.com/Special_Content/Articles_on_9_11_and_Insurance/articles_on_9_11_and_insurance.html)
Weaver, R., et al. (2001). “Modeling and Simulating Terrorist Decision-making: A ‘Performance Moderator Function’ Approach to Generating Virtual Opponents.” University of Pennsylvania. (Available at http://www.seas.upenn.edu:8080/~barryg/terrist.pdf)
Woo, G. (2002). “Quantifying Insurance Terrorism Risk.” In M. Lane, ed., Alternative Risk Strategies. London: Risk Books, pp. 301-318.


Authored by:
Jeffrey Strickland, Ph.D.

Jeffrey Strickland, Ph.D., is the author of “Predictive Analytics Using R” and a Senior Analytics Scientist with Clarity Solution Group. He has performed predictive modeling, simulation, and analysis for the Department of Defense, NASA, the Missile Defense Agency, and the financial and insurance industries for over 20 years. Jeff is a Certified Modeling and Simulation Professional (CMSP) and an Associate Systems Engineering Professional. He has published nearly 200 blog posts on LinkedIn, is a frequently invited guest speaker, and is the author of 20 books, including:

  • Discrete Event simulation using ExtendSim
  • Crime Analysis and Mapping
  • Missile Flight Simulation
  • Mathematical modeling of Warfare and Combat Phenomenon
  • Predictive Modeling and Analytics
  • Using Math to Defeat the Enemy
  • Verification and Validation for Modeling and Simulation
  • Simulation Conceptual Modeling
  • System Engineering Process and Practices
  • Weird Scientist: the Creators of Quantum Physics
  • Albert Einstein: No one expected me to lay golden eggs
  • The Men of Manhattan: the Creators of the Nuclear Era
  • Fundamentals of Combat Modeling
