Articles

A One-Eyed Man in the Kingdom of the Blind: Predicting the Unpredictable

“Almost nobody’s competent, Paul. It’s enough to make you cry to see how bad most people are at their jobs. If you can do a half-assed job of anything, you’re a one-eyed man in the kingdom of the blind.” –Kurt Vonnegut, Player Piano

Sometimes models are poor predictors

I don’t really build models (I’m just kidding); rather, I use my crystal ball and get away with it in the “kingdom of the blind.” During Operation Desert Storm (back when I was a young Army officer), we used several predictive models for things like battle outcomes, casualty rates, and so on. Some of these yielded really bad predictions.

The well-respected Concepts Evaluation Model (CEM) was a piston model that we used to determine how our forces would push Iraqi units out of Kuwait. It was not very accurate. Another model, the much-respected Extended Air Defense Simulation (EADSIM), was used to calculate the effectiveness of our daily airstrikes against targets. EADSIM was not calibrated for the kind of forces we would fight during Desert Storm, and its results had to be tweaked a little every day until the model behaved like reality.

Neither of these model projects was embarrassed by this; the experience was even written up in a Military Operations Research Society (MORS) publication, Warfare Modeling (Smith, R., Military Simulation Techniques & Technology, 2006).

The Corps Battle Simulation (CBS) was used to estimate the attrition that US and Iraqi forces would suffer. It overestimated US casualties by several orders of magnitude, because its underlying assumptions were based on a Soviet-bloc opposing force and could not be applied to the kind of combat experienced in Desert Storm.

Sometimes you do not need a model

I recently met with two different customers who wanted models; both wanted propensity-to-purchase models. After discussing their requirements, it turned out they had data for the phenomena of interest but had not used it to determine whether their product appealed to their customers. They had data on 7MM customers they could use for this purpose, but they were focused on 1.7MM for whom they did not have this data. They did not need a predictive model. Instead, they needed to study their present situation (descriptive analysis, as in the sketch below) and collect additional data if the information value turned out to be low.
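To make that descriptive step concrete, here is a minimal sketch (my own illustration, not the customers’ actual analysis) of how the group with known outcomes could be summarized before any modeling. The data, segment labels, and column names are all hypothetical stand-ins.

```python
import pandas as pd

# Hypothetical stand-in for the group with known outcomes (the "7MM" in the
# text): one row per customer, with a segment label and a purchase flag.
df = pd.DataFrame({
    "segment":   ["A", "A", "B", "B", "B", "C", "C", "C", "C", "C"],
    "purchased": [1,   0,   1,   1,   0,   0,   0,   1,   0,   0],
})

# Descriptive, not predictive: how appealing is the product, and to whom?
overall = df["purchased"].mean()
by_segment = df.groupby("segment")["purchased"].agg(["mean", "count"])

print(f"overall purchase rate: {overall:.0%}")
print(by_segment.sort_values("mean", ascending=False))
```

If a summary like this shows the product barely sells anywhere, no propensity model will fix that; if it sells well in some segments, that is the place to start collecting data on the other 1.7MM.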

Sometimes you should not try to model

Here are some things you should not try to model: death events, birth events, and divorce events. Imagine you have predictive models for death and birth events, and suppose a customer identified by one of those models is contacted by an agent or representative:

“Sir, I understand you are going to die next May. Are all of your insurance needs being met?”

“Sir I understand that your wife is expecting a new baby in the next six months…”

These are events we should react to, but should not try to get ahead of. If you are selling life insurance, for example, focus on touches around the customer’s birthday rather than on life events such as divorce. You will probably catch most life events by contacting customers during their birth month.

“Sir, we wanted to wish you a happy birthday and make sure that we are meeting all of your life insurance needs.”

Sometimes models are useful

You may have a model that informs you of a customer’s propensity to purchase a new mortgage. You might take the results and contact the ten people with the top propensity scores, only to find that a third of them are renting and have no intention of buying a new home, and hence no need for a new mortgage. Does that mean your model is wrong? Yes and no! I like to use this quote from the late George Box:

“All models are wrong, but some are useful.”

It is true: every predictive model I build has flaws! Models are abstractions of reality, based upon simplifying assumptions. We could not accurately predict a divorce event, because there are too many factors we do not have data on (or at least should not have), like whether a husband is having an affair, whether one spouse is abusive to the other, and so on. A customer may have scored high in a new-mortgage model because they are moving and fit the profile of customers who may be in the market for a mortgage; the factors behind that profile might include high income, older age, career field, and so on. Predictions are not 100 percent accurate. As Figure 1 suggests, the model will give you false negatives and false positives. Hopefully, we have followed a modeling process that reduces these errors, but by the nature of statistics, error (Type I and Type II) will occur even if the rest of our modeling process is perfect.

Figure 1. Propensity Model Performance – Cumulative Response
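To make the false-positive/false-negative point concrete, here is a minimal sketch (not from the original article) that scores a synthetic audience, tabulates the confusion matrix, and computes the cumulative response by decile, the kind of curve shown in Figure 1. The base rate, score function, and threshold are all made up.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: true purchase outcomes and noisy propensity scores.
# Because the scores are noisy, some buyers score low (false negatives)
# and some non-buyers score high (false positives) -- the Type II and
# Type I errors discussed above.
n = 10_000
bought = rng.random(n) < 0.05                         # ~5% base response rate
score = np.clip(bought * 0.3 + rng.random(n), 0, 1)   # noisy propensity score

threshold = 0.8
contacted = score >= threshold

tp = np.sum(contacted & bought)      # responders we reached
fp = np.sum(contacted & ~bought)     # Type I errors: contacted, won't buy
fn = np.sum(~contacted & bought)     # Type II errors: missed buyers
tn = np.sum(~contacted & ~bought)
print(f"TP={tp} FP={fp} FN={fn} TN={tn}")

# Cumulative response: sort by score and see what fraction of all
# responders falls in each top slice (the shape of the Figure 1 curve).
order = np.argsort(-score)
cum_responders = np.cumsum(bought[order]) / bought.sum()
for decile in range(1, 11):
    k = int(n * decile / 10)
    print(f"top {decile * 10:3d}%: {cum_responders[k - 1]:.0%} of responders")
```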

A model is useful where it predicts responses better than random selection would. We might pick up an additional 2 percent over random selection, and that 2 percent could be good (statistically significant) where we have a very large customer (or prospect) base, but poor if we are talking about 2 percent of 100 customers (or prospects).
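As a rough illustration of why audience size matters, the sketch below (my own, with made-up response rates) runs a standard two-proportion z-test on the same 2-point lift at two audience sizes.

```python
from math import sqrt
from statistics import NormalDist

def lift_p_value(base_rate: float, model_rate: float,
                 n_base: int, n_model: int) -> float:
    """One-sided p-value from a pooled two-proportion z-test
    (is the model's response rate really above random selection?)."""
    p_pool = (base_rate * n_base + model_rate * n_model) / (n_base + n_model)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_base + 1 / n_model))
    z = (model_rate - base_rate) / se
    return 1 - NormalDist().cdf(z)

# The same 2-point lift (5% random vs. 7% model) at two audience sizes:
print(lift_p_value(0.05, 0.07, 500_000, 500_000))  # huge base: p ~ 0, significant
print(lift_p_value(0.05, 0.07, 100, 100))          # 100 prospects: p ~ 0.28, not significant
```

The lift is identical in both calls; only the sample size changes the verdict, which is exactly the point about large versus small customer bases.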

Conclusion

As modelers (or model project managers), we have to manage customer expectations up front, and as consumers of models, we have to know what to expect. A propensity model, for example, takes a large audience and pours it through a funnel (see Figure 2) with a narrow opening for customers with a high propensity for some action X (e.g., purchasing).

Figure 2. Propensity model funnel
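In code, the funnel amounts to scoring the whole audience and keeping only the narrow top slice. A minimal sketch, assuming scores already exist (here they are randomly generated stand-ins for real model output, and the 5-percent cutoff is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical audience: 1,000,000 prospects with propensity scores.
# In practice the scores would come from a trained model, not a RNG.
scores = rng.beta(2, 8, size=1_000_000)   # skewed: most people score low

# The funnel: wide at the top (everyone goes in), narrow at the bottom
# (only prospects above a high-propensity cutoff get the campaign).
cutoff = np.quantile(scores, 0.95)        # keep the top 5%
campaign_audience = np.flatnonzero(scores >= cutoff)

print(f"audience in:  {scores.size:,}")
print(f"audience out: {campaign_audience.size:,} (score >= {cutoff:.2f})")
```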


Authored by:
Jeffrey Strickland, Ph.D.

Jeffrey Strickland, Ph.D., is the author of “Predictive Analytics Using R” and a Senior Analytics Scientist with Clarity Solution Group. He has performed predictive modeling, simulation, and analysis for the Department of Defense, NASA, the Missile Defense Agency, and the financial and insurance industries for over 20 years. Jeff is a Certified Modeling and Simulation Professional (CMSP) and an Associate Systems Engineering Professional. He has published nearly 200 blogs on LinkedIn, is a frequently invited guest speaker, and is the author of 20 books, including:

  • Discrete Event Simulation Using ExtendSim
  • Crime Analysis and Mapping
  • Missile Flight Simulation
  • Mathematical Modeling of Warfare and Combat Phenomenon
  • Predictive Modeling and Analytics
  • Using Math to Defeat the Enemy
  • Verification and Validation for Modeling and Simulation
  • Simulation Conceptual Modeling
  • System Engineering Process and Practices
  • Weird Scientist: The Creators of Quantum Physics
  • Albert Einstein: No One Expected Me to Lay Golden Eggs
  • The Men of Manhattan: The Creators of the Nuclear Era
  • Fundamentals of Combat Modeling

Connect with Jeffrey Strickland
Contact Jeffrey Strickland

