AI brings the promise of progress and threat of disorder, as the launch of ChatGPT demonstrates

At the end of last year, ChatGPT – an AI-driven chatbot – was launched. Its ability to answer almost any question in lucid detail saw it reach over 100 million users by January, making it one of the fastest-growing consumer applications ever.

One savvy student even used the programme to write his assignment, then used a 3D printer to write out the answers for him. The arrival of ChatGPT has since prompted colleges and universities to revisit their plagiarism policies.

The arrival and rapid adoption of ChatGPT was a microcosm of AI as a concept – innovative functionality with uncertain consequences.

The global AI market size was estimated at $120 billion in 2022 and it is expected to hit $1,597 billion by 2030, with Asia-Pacific the fastest growing region, according to Precedence Research.

For risk managers, this creates the conundrum of early adoption: wait too long and you get left behind; go too early and unforeseen issues derail you.

So should risk managers be utilising AI?

How risk managers can deploy AI

AI is broadly defined as the ability of a computer, or a computer-controlled robot, to perform tasks that typically require human intelligence. Its utility lies in problem-solving and the ability to automate processes.

“AI can assist in risk management in many ways,” said Gabriella Ezeani, senior consultant for technology at FTI Consulting.

“In a variety of industries where predictive and forward-looking analysis is essential, AI can be used to analyse unstructured data to identify patterns and predict a variety of future scenarios, allowing organisations to take action to mitigate risk.”

Ezeani said that AI in risk management is often used for fraud detection in industries such as banking and insurance. In this case, unsupervised learning algorithms can be trained to analyse data and uncover fraud trends and patterns.
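To make that concrete, the sketch below shows one common unsupervised approach – an isolation forest flagging anomalous transactions for human review. The feature set, simulated data and contamination rate are assumptions made for illustration, not details from any real deployment:

```python
# A minimal sketch of unsupervised fraud screening with an isolation
# forest. The features (amount, hour, merchant risk score) and the
# simulated data are illustrative assumptions, not a real schema.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated transactions: [amount, hour_of_day, merchant_risk_score]
normal = rng.normal(loc=[50, 14, 0.2], scale=[20, 4, 0.1], size=(1000, 3))
fraud = rng.normal(loc=[900, 3, 0.8], scale=[200, 1, 0.1], size=(10, 3))
transactions = np.vstack([normal, fraud])

# Train without labels: the forest learns what 'normal' looks like and
# marks points that are easy to isolate as anomalies (-1).
model = IsolationForest(contamination=0.01, random_state=0)
flags = model.fit_predict(transactions)

suspicious = np.where(flags == -1)[0]
print(f"{len(suspicious)} transactions flagged for human review")
```

Because no fraud labels are needed, this kind of model can surface previously unseen patterns – which is precisely why it suits the fraud-detection use cases Ezeani describes.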

“The use of AI is not limited to identifying risks; it can also reduce the subjectivity of decision-making in risk management,” said Ezeani.

“AI models can be trained to discover or highlight information in a unique way that can be fed back into the decision-making process. When used in conjunction with human capabilities, it can open up new avenues for the development of risk strategies.”

Kurt Lee, risk manager at Seoul-based Daol Investment & Securities, who also holds a Master’s in AI from Yonsei University, said: “AI can be used to assist in forecasting the outlook of credit ratings.

“It can be trained to learn the relationships between companies’ fundamental data and their credit rating transitions. Many papers have been published on this technique.”
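A highly simplified sketch of the technique Lee describes might look like the following, with invented fundamentals and a made-up labelling rule standing in for real credit data and rating histories:

```python
# A toy sketch of the credit rating idea: a classifier trained on
# company fundamentals to predict rating transitions. The features,
# labelling rule and thresholds are all invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical fundamentals: leverage, interest coverage, revenue growth
X = np.column_stack([
    rng.uniform(0.0, 3.0, n),   # debt / equity
    rng.uniform(0.5, 12.0, n),  # EBIT / interest expense
    rng.normal(0.05, 0.1, n),   # year-on-year revenue growth
])

# Toy rule: highly levered, low-coverage firms tend to be downgraded
# (0 = downgrade, 1 = unchanged, 2 = upgrade).
score = -X[:, 0] + 0.3 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0, 0.5, n)
y = np.digitize(score, bins=[-0.5, 1.5])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```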

Advantages of using AI for risk

Steve Nunez, head of data & AI at Zuhlke Asia, said that the most commonly cited advantages of AI for risk management centre on consistency and unbiased risk decisions.

“AI systems as typically deployed are explainable and work well in regulated environments. They are also ‘blind’ to race, religion, and so on, leading to more unbiased decisions,” said Nunez.

“The major advantage of the new machine learning systems is using unstructured data. There is a lot of work in natural language processing (NLP) and especially knowledge graphs that help in uncovering fraud.”
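As a rough illustration of the knowledge-graph idea Nunez mentions, the sketch below links accounts to the identifying attributes they share; clusters of accounts joined by a single phone number or address are a classic fraud-ring signal. The records and the three-account threshold are assumptions for illustration:

```python
# A tiny knowledge-graph sketch: accounts linked to the attributes they
# share. The records and the 3-account threshold are illustrative
# assumptions; real systems work at far larger scale.
import networkx as nx

G = nx.Graph()
records = [
    ("acct_1", "phone", "+65-111"),
    ("acct_2", "phone", "+65-111"),
    ("acct_3", "phone", "+65-111"),
    ("acct_3", "address", "12 Orchard Rd"),
    ("acct_4", "address", "12 Orchard Rd"),
    ("acct_5", "phone", "+65-999"),  # stands alone, likely legitimate
]
for acct, attr, value in records:
    G.add_edge(acct, f"{attr}:{value}")

# Clusters of many accounts joined by a handful of shared attributes
# are worth escalating for human review.
for component in nx.connected_components(G):
    accounts = {n for n in component if n.startswith("acct_")}
    if len(accounts) >= 3:
        print("possible ring:", sorted(accounts))
```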

Ezeani said that AI has been touted as a way of identifying and mitigating potential risks more quickly, more efficiently and at lower cost.

“AI can be scalable to meet the demands of handling large and complex amounts of unstructured data, and if trained effectively, can also deliver consistent risk outputs,” she said.

For Lee, AI modelling is generally known to outperform complex mathematical models. Because most AI models do not require prior knowledge to build, it is possible to design simple models and measure their performance.

Deploying AI in risk – uncertain consequences

While such advantages and efficiencies appeal to risk managers, AI deployment is a risk in itself.

“AI has great capability in guessing the pseudo-optimal solution for pairs of datasets similar to those used for its training,” continued Lee. “However, models are known to be weak at forecasting from outliers, such as data outside the range the AI has previously seen. This can result in strange outputs that require human attention.”
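Lee’s point is easy to demonstrate. In the toy sketch below, a tree-based model trained on inputs between 0 and 10 recovers the relationship well within that range but fails badly outside it; the data and model choice are illustrative assumptions:

```python
# A quick demonstration of the out-of-range weakness: a random forest
# trained on x in [0, 10] cannot extrapolate the true relation y = 2x.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 10, size=(500, 1))
y_train = 2.0 * X_train.ravel() + rng.normal(0, 0.5, 500)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Inside the training range the model is accurate...
print(model.predict([[5.0]]))   # roughly 10, close to 2 * 5
# ...but outside it, predictions flatten at the edge of the seen data.
print(model.predict([[50.0]]))  # roughly 20, far from 2 * 50 = 100
```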

For Ezeani, deploying AI for risk management raises myriad risks and challenges for organisations. The first is algorithmic bias and discrimination. “Algorithms may reflect and reinforce existing human biases and discriminatory patterns by making inferences from the data on which they are trained.

“In the context of risk management, these risks are exacerbated because the AI system used at scale can systematise these biases.”

Secondly, for Ezeani, there can be issues around ‘false positives’: “AI systems, like most technologies, are prone to false positives and can make mistakes that can be particularly damaging, as AI systems can create a negative feedback loop that prevents future false positives from being detected.

“If the model in question is difficult to understand or, more accurately, for a human to interpret, such errors may be difficult to detect.”

Are you compliant?

Ezeani added that a third issue is governance: deploying AI within an organisation requires a coherent and collaborative approach to AI governance.

“To do this, organisations must truly grapple with new risk paradigms that they may have never encountered before, and also ensure that these decisions are in compliance with relevant laws, including data protection, human rights and new AI regulations that are taking shape around the world.” 

For Nunez, AI risks can be divided into four categories:

  • Attacks: where ‘hackers’ manipulate the input data, affecting the outputs of a machine learning model (see the sketch after this list)
  • Failures: for example, when a medical system suggests an ineffective treatment
  • Abuse: AI being used for improper purposes
  • User error: inadequate training of users of AI
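To illustrate the first of these categories, the toy sketch below nudges an input against a linear model’s weights until its decision flips – the basic mechanism behind many evasion attacks. Everything in it (the data, model and step size) is invented purely to show the mechanism:

```python
# A toy 'evasion' attack on a linear fraud classifier: nudging an input
# against the model's weight vector flips its decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 2))
y = (X @ np.array([2.0, -1.0]) > 0).astype(int)  # toy 'fraud' label

clf = LogisticRegression(max_iter=1000).fit(X, y)
w = clf.coef_[0]

x = np.array([1.0, -1.0])                # a transaction flagged as fraud
print(clf.predict([x]))                  # [1] -> flagged
x_adv = x - 1.5 * w / np.linalg.norm(w)  # small step against the weights
print(clf.predict([x_adv]))              # [0] -> evades detection
```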

“Any of these risks could result in legal, reputational or monetary damage,” said Nunez.

“Upcoming risks include non-compliance with regulatory regimes, where the penalties can be severe. For example, the EU’s AI Act proposes penalties of up to €30 million or 6% of total worldwide annual turnover. Unfortunately, I think many companies are going to be taken by surprise when these regulations come into force.”

This leaves risk managers speculating whether AI will prove friend or foe. It is an area rich with opportunity but fraught with danger, and risk managers are being asked to tread a careful path to AI adoption, keeping their eyes open at every stage.