If we accept that there is cognitive bias in decision making, how can we as risk professionals account for it and help our senior executives make better, bias-free decisions? Risk Academy's Alex Sidorenko discusses.

The earliest psychometric research was performed by the psychologists Daniel Kahneman (who later won a Nobel prize in economics, shared with Vernon Smith, “for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty” (Kahneman, 2002)) and his friend and colleague Amos Tversky. They performed a series of gambling experiments to understand how people evaluated probabilities. Their major finding was that people use a number of heuristics to evaluate information. These heuristics are usually useful shortcuts for thinking, but they may lead to inaccurate judgments in complex business situations of high uncertainty – in which case they become cognitive biases.

Fifteen years later, these findings would become hugely significant to risk practitioners across the world. Which raises a question: why did it take so long?

Implications for risk practitioners

The significant role that risk perception and research into cognitive biases play in risk management has finally been acknowledged by both ISO 31000:2018 and COSO ERM 2017. Some of the implications include:

  • Decision makers tend to miss significant risks (professional deformation – only seeing familiar risks; overconfidence – refusing to consider negative scenarios; post-purchase rationalisation – refusing to accept new information; confirmation bias – filtering information according to their own beliefs; normalcy bias – refusing to consider alternatives; and many others). People tend to miss important risks both individually and as a group, and additional biases like groupthink affect the ability of risk managers to get meaningful risk information during workshops.
  • Decision makers significantly overestimate or underestimate the probability and potential impact risks may have on a decision or an objective. In fact, cognitive biases, together with generally low statistical literacy, make people's estimates of impact and probability borderline useless if not deceitful. Asking people to rate, rank or otherwise qualitatively assess risks is no better than guessing.
  • Decision makers tend to ignore or dismiss risks even once it is established that they have a significant impact on a decision or objective. People have a whole set of biases that prevent them from taking meaningful action. For example, sometimes we prefer to implement risk mitigations that solve the immediate problem only to increase the overall risk exposure in the long run. Some people also tend to think that inaction is better than action, which often leads to much larger losses.
  • Irrationality and the effect of cognitive biases increase significantly on an empty stomach. Low blood glucose prevents our brain from switching from System 1 to System 2 thinking, making any kind of risk discussion just before lunch or at the end of the day close to useless.

Overall, research into cognitive biases suggests that people are often irrational when making decisions under uncertainty, which significantly reduces the value of the information risk managers receive from management. If expert opinions, rankings and ratings are the only or main source of information for the risk manager, the results of the risk analysis are guaranteed to be inaccurate.

More information about the effect cognitive biases have on risk analysis at work and in our day-to-day lives is available in these good risk management books: https://riskacademy.blog/2017/01/14/my-favourite-risk-management-books

Recommended solutions

Apparently, small doses of electricity applied to Wernicke's area of the brain significantly reduce the effect of cognitive biases on our decision making. OK, that's obviously a joke. I mean, the research is real, but it's highly unlikely we will be allowed to electrocute people before risk workshops, so here are some real solutions:

  • Stop using risk management techniques that rely primarily on human input. Ranking risks in terms of likelihood, consequence, velocity, viscosity and whatever else your external auditor will come up with next, mapping risks on a risk matrix and similar techniques are guaranteed to produce inaccurate and misleading results, so don't use them for any significant decision.
  • Use mathematical methods for risk analysis that minimise the need for subjective human input. One way to overcome cognitive biases is to use scenario analysis or simulations when performing risk analysis, instead of traditional qualitative assessments. Quantitative risk analysis helps to present an independent opinion on strategic objectives, assess the likelihood of achieving them and the impact risks may have on their achievement. But more importantly, quantitative risk analysis helps overcome cognitive biases and significantly reduces subjectivity. Some level of subjectivity still remains, as expert opinions may be required for some range and distribution estimates; however, quantitative risk techniques still significantly outperform qualitative risk assessments. Here is an interesting study Douglas Hubbard quotes in his book How to Measure Anything in Cybersecurity Risk: for over 100 unmanned space probe missions, NASA has been applying both a soft “risk score” and more sophisticated Monte Carlo simulations to assess the risks of cost and schedule overruns and mission failures. The cost and schedule estimates from Monte Carlo simulations, on average, have less than half the error of the traditional estimates (a minimal simulation sketch follows this list).
  • Better still, use mathematical methods that don't rely on subjective human input at all. Mark Powel, an expert in mathematical risk analysis methods, says: “In maths, we use models for risk analysis but almost always there are terms or variables for which we just do not know what number to use. Most people guess these numbers and hope for the best. Instead, there are three methods that can be used to develop an uncertainty model for these numbers that maximize objectivity and eliminate subjective human input in our risk analysis. These methods are to find the uncertainty model that minimises the Fisher information (the measure of how much information the model adds to our risk analysis) (Jeffreys, 1939), find the model that maximizes the information entropy (entropy is a measure of disorder, i.e., the amount of disorder added to our risk analysis) (Lindley and Savage, 1971), and to find the model that maximizes the Expected Value of Perfect Information (the less information the model adds to risk analysis, the larger the EVPI) (Bernardo and Smith, 1995). Fortunately, all three of these diverse approaches give us the same objective uncertainty model for the same problem. Also, fortunately, these objective models have all been tabulated in textbooks for many risk problems we are likely to encounter so we don’t have to do all the math by hand.” I agree with Mark and highly recommend risk managers look into these methods (a small maximum-entropy illustration follows this list).
  • If you ever have to use management input/guesses, calibrate the experts before asking for information and provide plenty of sugar. More information on management calibration for the purposes of risk analysis is provided in Douglas Hubbard's books (a simple calibration-scoring sketch follows this list). More information on the effect sugar has on our ability to make decisions under uncertainty is provided in Daniel Kahneman's and Gerd Gigerenzer's books.
  • Probably the hardest recommendation of all: change the decision-making process. Consider applying the decision quality framework developed by Professor Howard Raiffa of Harvard University and Professor Ronald A. Howard of Stanford University and made popular by Carl Spetzler in his book Decision Quality.
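
To make the Monte Carlo point above concrete, here is a minimal sketch of what such a simulation could look like in Python. It is not NASA's model: the cost components, distributions and the 20% supplier-failure event are illustrative assumptions only.

```python
# Minimal Monte Carlo sketch of project cost risk (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of simulated scenarios

# Hypothetical cost components, each with its own uncertainty range (in $m).
design  = rng.triangular(left=8,  mode=10, right=15, size=N)
build   = rng.triangular(left=20, mode=25, right=40, size=N)
testing = rng.triangular(left=5,  mode=7,  right=12, size=N)

# Hypothetical discrete risk event: 20% chance of a supplier failure adding $5-15m.
event_hits = rng.random(N) < 0.20
event_cost = np.where(event_hits, rng.uniform(5, 15, size=N), 0.0)

total = design + build + testing + event_cost
budget = 50  # approved budget, $m

print(f"Mean total cost:       {total.mean():.1f}")
print(f"80th percentile (P80): {np.percentile(total, 80):.1f}")
print(f"Probability of overrunning the {budget}m budget: {np.mean(total > budget):.0%}")
```

The output is a distribution of outcomes rather than a colour on a heat map: instead of debating whether the risk is "amber" or "red", management sees the probability of missing the budget and the funding level needed for, say, 80% confidence.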
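
As a small illustration of the maximum-entropy idea in Mark's quote (my sketch, not his code): if all we can honestly state about a quantity is its range, the maximum-entropy model is the uniform distribution; if all we know is the mean of a non-negative quantity, it is the exponential distribution. Any more "opinionated" choice has lower entropy, i.e. it smuggles extra information into the analysis that we do not actually have.

```python
# Illustration of the maximum-entropy principle for choosing an uncertainty model.
from scipy import stats

# Case 1: only the range [0, 10] is known.
uniform   = stats.uniform(loc=0, scale=10)                   # max-entropy choice for a known range
truncnorm = stats.truncnorm(a=-2.5, b=2.5, loc=5, scale=2)   # an "opinionated" alternative on [0, 10]

print("Known range [0, 10]:")
print(f"  uniform entropy:          {uniform.entropy():.3f}")
print(f"  truncated-normal entropy: {truncnorm.entropy():.3f}  (lower -> extra assumed information)")

# Case 2: only the mean (= 4) of a non-negative quantity is known.
expon = stats.expon(scale=4)       # max-entropy choice for a known mean on [0, inf)
gamma = stats.gamma(a=4, scale=1)  # same mean, but assumes a particular shape

print("Known mean of 4 on [0, inf):")
print(f"  exponential entropy: {expon.entropy():.3f}")
print(f"  gamma entropy:       {gamma.entropy():.3f}  (lower -> extra, unjustified assumptions)")
```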
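
If expert input is unavoidable, calibration can at least be tested before the input is used. A common exercise described in Hubbard's books is to ask people for 90% confidence intervals on questions with known answers and check whether roughly 90% of the true values fall inside. Below is a minimal sketch of the scoring step; the intervals are made up for illustration.

```python
# Score a simple 90% confidence-interval calibration exercise (made-up intervals).
# A well-calibrated expert should capture roughly 90% of the true answers.

true_answers = {
    "Length of the Nile, km": 6650,
    "Year ISO 31000 was first published": 2009,
    "Boiling point of water at sea level, C": 100,
    "Number of EU member states in 2018": 28,
    "Height of Mount Everest, m": 8849,
}

# Hypothetical 90% intervals supplied by one workshop participant (low, high).
expert_intervals = {
    "Length of the Nile, km": (4000, 6000),
    "Year ISO 31000 was first published": (2005, 2012),
    "Boiling point of water at sea level, C": (95, 105),
    "Number of EU member states in 2018": (25, 30),
    "Height of Mount Everest, m": (8000, 9000),
}

hits = sum(low <= true_answers[q] <= high for q, (low, high) in expert_intervals.items())
hit_rate = hits / len(expert_intervals)

print(f"Hit rate: {hit_rate:.0%} (target for 90% intervals is ~90%)")
if hit_rate < 0.9:
    print("Overconfident: intervals are too narrow; widen the ranges before using them in a model.")
```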

The history of risk perception

The study of risk perception originated from the fact that experts and laypeople often disagreed about the riskiness of various technologies and natural hazards.

The mid-1960s saw the rapid rise of nuclear technologies and the promise of clean and safe energy. However, public perception shifted against this new technology. Fears of both long-term damage to the environment and immediate disasters creating radioactive wastelands turned the public against it. The scientific and governmental communities asked why public perception was against the use of nuclear energy when all the scientific experts were declaring how safe it really was. The problem, as perceived by the experts, was a difference between scientific facts and an exaggerated public perception of the dangers (Douglas, 1985).

Researchers tried to understand how people process information and make decisions under uncertainty. Early findings indicated that people use cognitive heuristics in sorting and simplifying information which leads to biases in comprehension. Later findings identified numerous factors responsible for influencing individual perceptions of risk, which included dread, newness, stigma, and other factors (Tversky & Kahneman, 1974).

Research also detected that risk perceptions are influenced by the emotional state of the perceiver (Bodenhausen, 1993). According to valence theory, positive emotions lead to optimistic risk perceptions whereas negative emotions incite a more pessimistic view of risk (Lerner, 2000).

A word of warning about cognitive biases

Besides the cognitive biases inherent in how people think and behave under uncertainty, there are more pragmatic factors that influence the way we make decisions, including poor motivation and remuneration structures, conflicts of interest, ethics, corruption, poor compliance regimes, lack of internal controls and so on. All of this makes any significant decision making based purely on expert opinions and perceptions highly subjective and unreliable.

Cognitive biases themselves are not set in stone. When scientists tried to replicate many of the experiments performed by researchers in the 1970s, the results were inconclusive or even contradictory, suggesting that some of the cognitive bias findings we rely on today may be inaccurate or exaggerated.

In a recent critical review of loss aversion (one of the most significant contributions of psychology to behavioural economics, according to Kahneman), published in the Journal of Consumer Psychology, D. Gal and D. Rucker of Northwestern University argue that loss aversion is potentially a fallacy. According to the authors, there is no general cognitive bias that leads people to avoid losses more vigorously than to pursue gains. Contrary to claims based on loss aversion, price increases (i.e., losses for consumers) do not impact consumer behaviour more than price decreases (i.e., gains for consumers), and messages that frame an appeal in terms of a loss (e.g., “you will lose out by not buying our product”) are no more persuasive than messages that frame it in terms of a gain (e.g., “you will gain by buying our product”). Is this study the beginning of the end for cognitive biases, or will it itself be found inconclusive in five years' time? Only time will tell. I can only vouch for myself: understanding and using cognitive biases explained a lot in my role as the Head of Risk at one of the large sovereign funds and made my job much, much easier.

Another famous risk practitioner and author, Nassim Nicholas Taleb, argued when I met him in New York in June 2018 that cognitive biases may explain individual behaviour under sometimes sterile conditions; however, they should not be used to justify or explain the behaviour of complex systems like societies. I tend to agree.
