The effectiveness of risk matrices has long been debated, with some questioning their value – so let’s assess other risk methods, including multi-criteria approaches, writes Slawomir Pijanowski, risk management expert and consultant for Atos Consulting

Despite many books, articles, and papers on the limitations and errors of scoring risk by multiplying probability and impact (Douglas W. Hubbard’s The Failure of Risk Management is my favourite), there is plenty of evidence that this kind of risk scoring is still in widespread use.

It is still implemented worldwide – in risk management, business continuity management, ERM, IT, IT security, operational risk and the related information systems – as part of calculating a risk level.

The notion that it is merely a “simplified method” in fact complicates the assessment of risk and blurs its perception… which is risky, isn’t it?

Representing risk as the multiplication of probability and impact means that ‘low probability and high impact’ can be scored the same as ‘high probability and low impact’.

On a 5 x 5 matrix, where 1 is low impact or low probability and 5 is high impact or high probability, the two cases produce the same score: 1 x 5 = 5 x 1 = 5.

Yet one of those risks may mean disaster or bankruptcy, while the other is “just a typical cost of doing business”. You can colour the cells differently – but the scoring remains the “key methodology”.
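To make the arithmetic concrete, here is a minimal sketch in Python. The two risks, and their probability and loss figures, are invented purely for illustration, not drawn from any real risk register:

```python
# Illustrative only: two very different risks collapse to the same 5x5 matrix score.

def matrix_score(probability_band: int, impact_band: int) -> int:
    """Classic risk-matrix scoring: multiply the two ordinal bands (1-5)."""
    return probability_band * impact_band

# Hypothetical risks, with rough quantitative estimates alongside for comparison.
risks = {
    "data-centre fire":       {"p_band": 1, "i_band": 5, "annual_prob": 0.01, "loss": 50_000_000},
    "minor invoicing errors": {"p_band": 5, "i_band": 1, "annual_prob": 0.95, "loss": 20_000},
}

for name, r in risks.items():
    score = matrix_score(r["p_band"], r["i_band"])
    expected_loss = r["annual_prob"] * r["loss"]
    print(f"{name}: matrix score = {score}, rough expected annual loss = {expected_loss:,.0f}")

# Both risks print a matrix score of 5, although their rough expected annual losses
# (500,000 versus 19,000) differ by a factor of more than 25.
```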

There are many questions that challenge this scoring method, and I’ve listed them below:

  • How can you rank, differentiate or prioritise risks based on this method?
  • What is the probability that you will make good or bad decisions based on this scale?
  • If you end up selecting risks manually anyway, why do you need this scoring at all?
  • Will you treat the information drawn from this rating as valuable or supportive when making important decisions?
  • If you are not going to use the information from this scoring during decision making, why are you wasting time on it, and why are you allocating corporate resources to it?

There’s another argument against the use of probability scales and risk matrices, especially with regard to terms such as ‘almost certain’, ‘likely’, ‘possible’, ‘unlikely’ and ‘rare’. They mix frequency with probability, and the word ‘possible’ arguably covers all of the above (everything except ‘impossible’, which is not on the scale).
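As a purely illustrative aside, the ambiguity is easy to show with two hypothetical calibrations of the same verbal scale; the numeric ranges below are invented for the example, not taken from any standard:

```python
# Illustrative only: hypothetical probability ranges two assessors might attach to
# the same verbal labels. The figures are invented to show the overlap problem.
assessor_a = {"rare": (0.00, 0.05), "unlikely": (0.05, 0.20), "possible": (0.20, 0.50),
              "likely": (0.50, 0.80), "almost certain": (0.80, 1.00)}
assessor_b = {"rare": (0.00, 0.10), "unlikely": (0.10, 0.40), "possible": (0.10, 0.90),
              "likely": (0.60, 0.90), "almost certain": (0.90, 1.00)}

for label in assessor_a:
    lo = max(assessor_a[label][0], assessor_b[label][0])
    hi = min(assessor_a[label][1], assessor_b[label][1])
    spread = max(assessor_a[label][1], assessor_b[label][1]) - min(assessor_a[label][0], assessor_b[label][0])
    print(f"{label}: shared range {lo:.2f}-{hi:.2f}, combined spread {spread:.2f}")

# 'Possible' spans nearly the whole unit interval once both readings are combined,
# so two people can use the same word while meaning very different probabilities.
```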

Is this a massive “anchoring heuristic” type of cognitive bias, or simply a failure of human rationality? People are afraid to use a more correct approach even when it is presented to them, mainly because: “we received this methodology from head office, so we are not in a position to correct it”.

Assuming we had answers to the above difficulties, how much would you pay for a solution that correctly supports risk-based decision making?

The risk management paradigm change

And here is the real challenge for any follower of discussions around the ‘risk management paradigm change’. We know from Sharpe’s CAPM that there is a “risk premium” for investing in risky assets, and from Prof. Kaplan’s “execution premium” for the persistent execution of strategy. Both say you should value, and expect, additional return for such “additional” features: a riskier asset, or a strategy that is sound and well translated into operational processes, activities and projects – in short, designed for execution.
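For readers who want the reference point, Sharpe’s CAPM relationship can be written as a one-line calculation; the figures below are illustrative only:

```python
# Standard CAPM: expected return = risk-free rate + beta * (market return - risk-free rate).
# The figures are illustrative only.
risk_free_rate = 0.03          # e.g. a government bond yield
expected_market_return = 0.08  # expected return of the market portfolio
beta = 1.4                     # sensitivity of the asset to market movements

risk_premium = beta * (expected_market_return - risk_free_rate)
expected_return = risk_free_rate + risk_premium

print(f"risk premium: {risk_premium:.1%}, expected return: {expected_return:.1%}")
# -> risk premium: 7.0%, expected return: 10.0%
```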

But the question remains: to what extent can we expect and value a “risk-based decision-making premium”, or simply a “more efficient risk management premium”?

Are we really able to assess the impact of the bad decisions made by relying on risk matrices, even when we know how ambiguous they are in terms of probability?

To answer this question, we need to ask another one, which addresses the silent assumption dividing the old approach from the new: “Are all key decisions within the company actually made on the basis of risk matrices?”

If not, then we must be very careful about assessing decision impact, because in some areas it is simply not possible to do. For example, in engineering, when you construct or design a system, device or installation, you are not always able to imagine all the side effects, or you are surprised by an inconsistency in the algorithm or program that leads to device failure. Take the simple examples of mobile phone power supply failures or car tyre blow-outs.

It is more feasible in areas such as gaming, auctions, diplomacy and competition, where the impact of your decision is reflected quite quickly in the reaction of your competitors or of game changers – particularly those you do not yet perceive as competitors. Even a lack of reaction provides meaningful information.

With this in mind, stakeholders will ask: “So what if risk matrices are wrong?” and “Tell me how they impact decisions in terms of performance.”

There is plenty of evidence that key decisions should rest on multiple criteria rather than on risk matrices alone. But we must also be careful not to exaggerate the problem here. Simply look at the list below:

Decisions on:

  • Approving a new strategy, updating it, or stopping it and changing it entirely – is a risk matrix the only tool used?
  • Launching programmes and investment projects that support strategy execution – is a risk matrix the only tool used?
  • Recruiting key managers, experts and employees whose competences and attitude fit the new values, vision and strategy – is a risk matrix even present?
  • Leveraging with external financing (debt) – the same question as above.
  • Restructuring (reducing headcount)
  • Selling redundant assets, or making savings in lean years
  • Mergers and acquisitions, or defending against hostile takeovers
  • Switching from normal management to crisis management mode
  • Buying other key resources and technologies
  • Self-insurance versus risk financing in the traditional insurance market

A multi-criteria approach

For the decisions above, multiple criteria should be used in decision making, not a simple pair of criteria – probability and impact. I have never seen a risk matrix used alone to make key company decisions, which is why calling risk matrices ‘the problem’ is, in the context of these decisions, something of an exaggeration.
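As a minimal sketch of what “multi-criteria” can mean in practice, here is a simple weighted-sum scoring of two hypothetical decision options. The criteria, weights and scores are assumptions chosen for illustration, not a model this article prescribes:

```python
# A minimal weighted-sum multi-criteria sketch. Criteria, weights and option scores
# are purely illustrative assumptions.

criteria_weights = {
    "expected_financial_impact": 0.30,
    "strategic_fit":             0.25,
    "reversibility":             0.15,
    "time_to_realise_benefits":  0.15,
    "regulatory_exposure":       0.15,
}

def multi_criteria_score(option_scores: dict) -> float:
    """Weighted sum of normalised criterion scores (each on a 0-1 scale)."""
    return sum(criteria_weights[c] * option_scores[c] for c in criteria_weights)

# Two hypothetical decision options scored against the criteria.
option_a = {"expected_financial_impact": 0.8, "strategic_fit": 0.9, "reversibility": 0.3,
            "time_to_realise_benefits": 0.5, "regulatory_exposure": 0.6}
option_b = {"expected_financial_impact": 0.6, "strategic_fit": 0.4, "reversibility": 0.9,
            "time_to_realise_benefits": 0.8, "regulatory_exposure": 0.9}

print(f"option A: {multi_criteria_score(option_a):.3f}, option B: {multi_criteria_score(option_b):.3f}")
# A real application would also test how sensitive the ranking is to the chosen weights.
```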

Thus, before we declare the need for “ERM 4.0” – or any other marketing buzzword for more efficient, effective, value-adding risk management in the future – we must identify the relevant criteria, classify them and apply them to risk assessment. They should be driven by appropriate pricing models that show the relationship between the additional gain from better risk-based decisions and the cost of maintaining the model, in the areas where it is genuinely applicable.

In short: how much would you pay for better decisions – for a model that tells you how much additional gain is possible? Reducing uncertainty as ‘added value’ is not as compelling as ‘competitive advantage’. We must realise that uncertainty and risk models should generate information that leaves space for creativity, because the best way to win the future is to create it, through competitive advantage and a perception of uniqueness (see the change in user-interface ergonomics introduced by the first Apple iPhone).

One way to determine whether a given method is better is to compare the computations of various risk assessment methods, to test their sensitivity to changes in the initial assumptions, and to check how much information – or noise (ambiguity or omissions) – each one adds to the decision.
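One rough way to run such a comparison is sketched below under purely illustrative assumptions (invented risks, a crude probability-to-band mapping, a ±50% perturbation of the probabilities): perturb the inputs many times and count how often two methods disagree about the ranking.

```python
import random

# Illustrative only: perturb the probability assumptions and check how often the
# ranking from a 5x5 matrix score and from a quantitative expected loss disagree.
# Bands, perturbation size and loss figures are assumptions made up for this sketch.

risks = {
    "supplier failure": {"prob": 0.03, "loss": 8_000_000, "impact_band": 5},
    "payroll error":    {"prob": 0.60, "loss":    50_000, "impact_band": 1},
    "system outage":    {"prob": 0.20, "loss":   900_000, "impact_band": 3},
}

def prob_band(p):
    """Crude mapping of a probability to a 1-5 band (an assumption of this sketch)."""
    return min(5, int(p * 5) + 1)

def matrix_ranking(rs):
    return sorted(rs, key=lambda k: prob_band(rs[k]["prob"]) * rs[k]["impact_band"], reverse=True)

def expected_loss_ranking(rs):
    return sorted(rs, key=lambda k: rs[k]["prob"] * rs[k]["loss"], reverse=True)

random.seed(1)
trials, disagreements = 1000, 0
for _ in range(trials):
    # Perturb each probability by up to +/-50% to simulate uncertain assumptions.
    perturbed = {k: {**v, "prob": min(1.0, v["prob"] * random.uniform(0.5, 1.5))}
                 for k, v in risks.items()}
    if matrix_ranking(perturbed) != expected_loss_ranking(perturbed):
        disagreements += 1

print(f"Rankings disagree in {disagreements / trials:.0%} of perturbed scenarios")
```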

Moreover, when comparing methods carefully, you quickly see that each method on the table carries a few silent assumptions, and those assumptions differ from method to method.

On the other hand, every method faces the problem of measuring decision impact, which is not always justified or even possible. Sometimes, in maximum-entropy choices, you cannot tell whether your choice was good or bad; you must wait a while and see. You can also attack the methodological aspects of what is presented as scientifically solid. Attacking probability theory itself is hard, but if you can show that humans do not make decisions based on probability but on other factors (see Lotfi Zadeh’s fuzzy logic and Heinz von Foerster’s constructivism), then whether you treat probability data as objective or subjective becomes a separate story – as does applying memoryless Monte Carlo simulation in practice, where long-term memory matters a great deal.
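To illustrate that last point, here is a toy contrast between memoryless, independent Monte Carlo draws and an autocorrelated (AR(1)-style) loss process in which bad years tend to cluster. All parameters are illustrative assumptions, not calibrated to any real portfolio:

```python
import random

# Toy contrast: memoryless annual loss draws versus a process with memory.
# All parameters are illustrative assumptions.
random.seed(7)
YEARS, MEAN_LOSS, SIGMA, PHI = 20, 1_000_000, 300_000, 0.7  # PHI: persistence of shocks

# Memoryless: each year's loss is drawn independently of every other year.
iid_losses = [random.gauss(MEAN_LOSS, SIGMA) for _ in range(YEARS)]

# With memory: this year's deviation partly carries over from last year (AR(1)-style).
ar_losses, shock = [], 0.0
for _ in range(YEARS):
    shock = PHI * shock + random.gauss(0, SIGMA)
    ar_losses.append(MEAN_LOSS + shock)

print(f"Memoryless worst year:     {max(iid_losses):,.0f}")
print(f"Autocorrelated worst year: {max(ar_losses):,.0f}")
# The autocorrelated path tends to produce clustered bad years, which a memoryless
# simulation with the same annual volatility will gloss over.
```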
