Liabilities could arise from the unintended consequences of decisions made by algorithms and artificial intelligence 

An increasingly digitised society faces rapidly emerging risks as decisions made by algorithms and artificial intelligence play a larger role in everyday life, according to a report produced by Zurich Insurance Group and Microsoft.

It points out that unleashing the power of data and artificial intelligence creates “endless business opportunities to ultimately improve the quality of our lives.” But with those opportunities come a “broad spectrum of risks encompassing not only regulatory compliance, but also liability and reputational risk if algorithmic decision-making triggers unintended and potentially harmful consequences.”

Managing AI algorithmic risk is particularly important because few insurance products currently cover it. The insurance industry is only beginning to understand the risk and to develop coverage that addresses it, hampered by a lack of loss experience data and of models that can estimate the frequency and severity of potential losses.

“Moreover,” the report notes, “because of the interconnectedness of business, losses associated with AI risks may spread fast across the world, increasing substantially the accumulation of risk and raising insurability issues due to the lack of risk diversification.”

The report details the emerging concept of AI algorithmic risk and suggests ways risk managers and insurers can manage it. The analysis includes an in-depth look at relevant cases in the areas of product liability, professional indemnity and medical malpractice that can provide guidance in minimising the exposure and the potential harm to customers and to an organisation’s reputation.

As the use of AI becomes ubiquitous in sectors such as transportation and manufacturing, safety will be paramount for human-machine interactions. “Product defects could even result from communication errors between two machines or between machine and interface,” notes the report.

Using aerospace and autonomous vehicle manufacturers as examples of where product liability risk could arise, the report suggests a risk management approach should include:

  • Conducting a systematic risk analysis and examination of the proposed AI system to uncover potential failures
  • Establishing performance metrics, based on the sensitivity and use of the system, to ensure targets are achieved
  • Building correction mechanisms and/or fallback options that detect and correct underperformance or allow human intervention to rectify issues (a brief sketch follows this list)
  • Maintaining a version control system to document the development and history of the AI system
  • Adhering to best practices for the responsible use of technology
  • Adopting standards and/or certification for the AI system, which can provide assurance of technical performance and adherence to ethical standards
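
To make the performance-metric and fallback recommendations more concrete, here is a minimal Python sketch. It is not drawn from the report itself: the `CONFIDENCE_FLOOR` threshold, the `predict` callable and the `Decision` record are all hypothetical names, and a real system would define its metrics around the sensitivity of the specific use case.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical threshold: decisions below this confidence are not automated.
CONFIDENCE_FLOOR = 0.90


@dataclass
class Decision:
    """An automated decision plus the audit trail the report recommends."""
    outcome: Optional[str]   # None when the case is deferred to a human
    confidence: float
    model_version: str       # supports the version-control recommendation
    deferred_to_human: bool


def decide(predict: Callable[[dict], tuple[str, float]],
           model_version: str,
           features: dict) -> Decision:
    """Run the model, but fall back to human review on low confidence.

    `predict` is a hypothetical model callable returning
    (outcome, confidence) for a dict of input features.
    """
    outcome, confidence = predict(features)
    if confidence < CONFIDENCE_FLOOR:
        # Correction mechanism: underperformance is detected and the case
        # is escalated to a human rather than acted on automatically.
        return Decision(None, confidence, model_version, deferred_to_human=True)
    return Decision(outcome, confidence, model_version, deferred_to_human=False)
```

Recording `model_version` alongside every decision ties each outcome back to a documented model release, which is what the version control point above is meant to enable.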