By Isaiah McGowan

There’s an argument against quantitative risk analysis that comes early in discussions about FAIR (Factor Analysis of Information Risk). It goes like this: “We don’t have enough data to do quantitative risk analysis!”

This is a straw man argument. The first rebuttal I think of is: “Do you have enough data to do qualitative risk analysis without FAIR?” Most people answer yes.

Here’s the rub: You will use the same data for FAIR analyses as you currently use in other qualitative approaches to risk measurement.

Qualitative approaches versus FAIR is about utility, NOT DATA

FAIR provides a model for applying data in a semi-prescriptive context, resulting in risk measured in dollars and cents (in other words, business terms). It's not that FAIR requires more data than other approaches; it offers substantially higher utility than qualitative measurement models because it is tailor-made for probabilistic modeling techniques.

At RiskLens, where our consultants perform risk analyses every day, we leverage Monte Carlo simulations and aggregation models that are only possible because FAIR lends itself to mathematical operations, something other measurement approaches cannot support. Its utility as a model therefore stands head and shoulders above the rest.
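
To make that concrete, here is a minimal sketch of the kind of Monte Carlo simulation FAIR enables. It is illustrative only, not RiskLens code: the min / most likely / max estimates are hypothetical, and a triangular distribution stands in for the PERT distribution commonly used in FAIR tooling.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
years = 10_000  # number of simulated years

# Hypothetical calibrated estimates (min, most likely, max) for
# Loss Event Frequency, in events per year.
lef = rng.triangular(0.1, 0.5, 2.0, size=years)
events = rng.poisson(lef)  # events realized in each simulated year

# Hypothetical Loss Magnitude per event, in dollars (min, most likely, max)
lm_min, lm_mode, lm_max = 50_000, 200_000, 1_000_000

# Annualized loss exposure: sum of per-event losses in each simulated year
annual_loss = np.array([
    rng.triangular(lm_min, lm_mode, lm_max, size=n).sum() for n in events
])

print(f"Mean annualized loss exposure: ${annual_loss.mean():,.0f}")
print(f"90th percentile:               ${np.percentile(annual_loss, 90):,.0f}")
```

A full FAIR analysis decomposes these factors further (Loss Event Frequency into Threat Event Frequency and Vulnerability, Loss Magnitude into primary and secondary loss), but the sketch shows the point: once estimates are expressed as distributions, the math produces loss exposure in dollars, which no qualitative scale can do.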

If FAIR isn’t driving our struggles over data, what is?

Over at the FAIR Institute blog, Jack Jones wrote a phenomenal post highlighting the two models necessary to perform sound risk analysis:

  1. The first model is the model of the problem space. We call it the Scope, and it describes the components of the analysis and how they relate. Think: Threats against Assets yield Loss Events (like data breaches or DDoS outages).
  2. The second model is the measurement model, which in our case is FAIR. It is structured to take in data and produce a measurement of the problem space.

The first model is necessary for successfully measuring any risk. Without appropriately outlining how a risk would play out and what components are involved, we stand no chance of measuring the risk in any meaningful way, regardless of the measurement model we employ. This model of the problem is the mechanism for determining what data you need for your analysis.
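
For illustration, the scope might be captured as a simple structure naming the asset, the threat acting against it, and the loss event that results. The fields below are one hypothetical way to record it, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AnalysisScope:
    """A purely illustrative record of the model of the problem space."""
    asset: str       # what is at risk
    threat: str      # who or what acts against the asset
    loss_event: str  # how the loss materializes
    effect: str      # confidentiality, integrity, or availability

scope = AnalysisScope(
    asset="Customer database",
    threat="External cybercriminals",
    loss_event="Breach of customer records",
    effect="Confidentiality",
)
```

Writing the scope down this explicitly tells you exactly which data to hunt for: how often this threat acts against this asset, and what it costs when it succeeds.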

Why do people struggle finding data for FAIR analyses?

In our experience training thousands of analysts, one of the early stumbling blocks in applying FAIR is failing to articulate the model of the problem. It's not the FAIR model itself; it's the way we structure our world that creates tremendous inertia in finding data. The scope of your analysis is the jockey cracking the whip, directing analysts to where the data lives. Mucking up the scope of an analysis is the tell-tale sign that data-gathering efforts will go awry.

By applying proper scoping techniques, like those defined in our training course and employed in RiskLens software, analysts are better equipped to identify the intelligent questions necessary for drawing out the data.