There are lots of different methodologies for assessing or breaking down a risk. The most common is a two-factor approach where the likelihood and potential impact of an event combine to create a risk.
Likelihood (of the thing) × Impact (how the thing affects you) = risk
This is what’s used for The World’s Simplest Risk Model.
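As a minimal sketch of how a two-factor score could be calculated (this is my illustration, not part of TWSRM itself; the 1–4 scale and the choice to multiply the two values are assumptions):

```python
# Hypothetical sketch of a two-factor risk score.
# The 1-4 scale and multiplicative combination are illustrative assumptions.

def two_factor_risk(likelihood: int, impact: int) -> int:
    """Combine likelihood and impact (each scored 1-4) into one risk score."""
    for score in (likelihood, impact):
        if not 1 <= score <= 4:
            raise ValueError("scores run from 1 to 4 in this sketch")
    return likelihood * impact

print(two_factor_risk(3, 4))  # 12: likely event, severe impact
print(two_factor_risk(1, 2))  # 2: unlikely event, minor impact
```

Nothing here accounts for how exposed you actually are to the event, which is exactly the gap discussed next.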
However, this two-dimensional approach leaves out one thing. Sometimes there are barriers between the event or threat and you. So in addition to the event and its potential impact, there’s the concept of vulnerability or exposure: factors that make you more or less susceptible to the event. These can be passive factors – e.g. physical distance – or active measures you’ve taken to reduce your vulnerability to an event.
An example of an active measure would be a firewall protecting your IT system. The threat (data loss from a hacking attack) is the same, as is the impact if the attack were to succeed. The difference is that we can now factor in the protective measures you've taken to reduce your vulnerability to the threat, and therefore reduce your risk.
This is why you sometimes see two-stage assessments with the ‘raw’ risk on one side – the basic calculation – and the treated risk on the other. The treated side shows the risk after these barriers or controls have been taken into consideration.
I don’t like this approach for several reasons but here are the two main ones.
- It’s slower because you have to run the assessment twice, once with controls and once without.
- It’s often confusing because it can be hard to tell whether a control is an existing control or a possible one. So I don’t immediately know if the value for the treated risk reflects the risk as it is now or as it could be.
So I like to think about risk in 3-D and pull all three elements into your assessment. Even though this might seem more complicated up front, it’s much easier in the long run. It’s also really useful when you start to think about risk treatments.
I realize that this seems like a contradiction to The World’s Simplest Risk Model (TWSRM) which only uses a 2×2 grid.
So am I saying that TWSRM doesn’t work? No, not really.
TWSRM lays out the absolute simplest approach to risk I could come up with. Unfortunately, the trade-off was that, to make it so simple, there had to be some compromises. It’s simple and works but isn’t the optimum model. But if you’ve never had anything to do with risk management, The World’s Simplest Risk Model is a good place to start.
3-D risk takes this concept to the next level and is vastly more powerful, but TWSRM still works.
The best model for risk, in my opinion, is to consider all three components at the same time, not in two stages. This means that your risk definition is:
Risk is a combination of the threat (the event), the vulnerability (how insulated you are from the event), and the impact (the potential effect of the event).
You can think of this mathematically too:
risk = threat * vulnerability * impact
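As a sketch, the three-factor calculation and its zero-value shortcut might look like this (the 0–4 scales are my own illustrative assumption, not a prescribed standard):

```python
# Hypothetical sketch of the three-factor (3-D) risk score.
# The 0-4 scales are illustrative assumptions.

def three_factor_risk(threat: int, vulnerability: int, impact: int) -> int:
    """risk = threat * vulnerability * impact; any zero zeroes the risk."""
    for score in (threat, vulnerability, impact):
        if not 0 <= score <= 4:
            raise ValueError("scores run from 0 to 4 in this sketch")
    return threat * vulnerability * impact

# Same threat and impact; better controls lower vulnerability and the risk.
print(three_factor_risk(4, 3, 4))   # 48
print(three_factor_risk(4, 1, 4))   # 16
print(three_factor_risk(4, 0, 4))   # 0 -> discount the risk immediately
```

Because the factors multiply, fully insulating yourself from the event (vulnerability of zero) drives the whole risk to zero in a single pass.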
Now, you have everything in one place and a much better sense of all the components that make up the risk. When it comes to addressing the risk, this improved understanding lets you really see the factors that are generating the risk. You can think of these as three levers or dials which can be moved to adjust the risk up or down using the A4T options.
The key benefit to me is that you’re dealing with everything at once, not conducting multiple calculations. This is faster and more efficient for three reasons:
- Firstly, if a value is zero, you can discount the risk immediately. So if you have such good controls in place that there’s effectively no risk, you will see that right away instead of having to complete the second step to reach that conclusion.
- Secondly, you have fewer calculations to conduct. So you can run through the assessment much more efficiently.
- Thirdly, when it’s time to discuss how you are going to address the risk, thinking about three levers that you can adjust is a very useful visualization. This helps the group better understand the options available and select the best mix of A4T options they want to use in their risk strategy.
Here’s a simple example of the two models being used for the same situation.
An exposed live 220V cable (the threat) will cause me significant damage (impact) if I touch it. Using the 2-D approach, I assess the raw risk and see that this is a high risk – I’m going to get frazzled if I touch the wire. But if I have some controls in place, such as insulated gloves I can use to handle the wire, I need to repeat the calculation to determine what the residual risk looks like.
But if I just use the 3-D model, I can add in the controls right away. I consider the threat (electrocution) and the impact (shock or possible cardiac arrest) as I do in the 2-D model, but I also add in the controls. When I factor in vulnerability (which is reduced because I use insulated gloves), I can assess the risk in one step.
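A minimal sketch of the cable example in both models (the scores are illustrative assumptions; I've treated vulnerability as a 0.0–1.0 exposure factor so the two models land on the same answer):

```python
# Hypothetical scoring of the exposed-cable example.
# Assumptions: 1-5 scales for likelihood/threat/impact,
# vulnerability as a 0.0-1.0 exposure factor.

# 2-D model: two passes needed.
raw_risk = 5 * 5        # likelihood * impact, no controls -> 25
treated_risk = 1 * 5    # second pass: gloves make harmful contact unlikely -> 5

# 3-D model: one pass, controls folded into vulnerability.
threat, vulnerability, impact = 5, 0.2, 5   # gloves cut exposure to 0.2
risk_3d = threat * vulnerability * impact   # -> 5.0, same answer in one step

print(raw_risk, treated_risk, risk_3d)
```

Same conclusion either way, but the 3-D version gets there in a single calculation with the controls visible inside it.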
A two-factor model is still sufficient in some circumstances, especially if you are just getting started, but I’d recommend you start thinking about risk in 3-D as soon as possible to speed up your process and start having richer, more detailed discussions about how to address your risks.
So the threat, vulnerability, impact model is faster, more efficient and gives you better data to help with decision-making. That’s a win, win, win in my opinion (and still KISS).