An accident in Arizona involving an autonomous vehicle has reportedly led to the death of a pedestrian

News of a collision involving an autonomous vehicle (AV) that led to the death of a pedestrian poses questions for firms considering the applications and risks of AVs.

The accident in Tempe, Arizona involved an Uber self-driving car with a human back-up driver present in the vehicle.

Uber has suspended testing in Tempe as well as in Pittsburgh, San Francisco and Toronto.

“Insurers and the firms developing autonomous vehicles will be keen to understand the full implications of this accident as more details emerge,” said Nigel Brook, partner with law firm Clyde & Co.

“However, the reality is that the development of this technology will continue apace. An event like this, while tragic, is not unanticipated,” said Brook.

Jenifer Baxter, head of engineering at the Institution of Mechanical Engineers, commented: “This tragic event serves to draw attention to the challenges of incorporating autonomous vehicles into an incumbent system operating with manned vehicles, cyclists, pedestrians and other road users.

“Uber are right to suspend their trials of autonomous vehicles until the cause of this accident is fully understood,” said Baxter.

Partially automated vehicles fall into two categories: Level 2, in which the vehicle automates some driving tasks but the driver retains overall control; and Level 3, in which all aspects of driving can be handled autonomously, with provision for a human to intervene at all times.

“A safety driver was present in the Uber vehicle but seemingly couldn’t prevent this accident,” said Brook.

“If the system hands back control to the human driver at short notice, how readily can they react? This isn’t so much about the technology; it’s about how quickly someone can re-engage with their surroundings and avoid any potential hazards,” he said.

Baxter noted that in 2016 the Institution of Mechanical Engineers published a case study on AVs, which raised the need to address societal questions, including putting the right regulatory framework in place, before highly and fully automated cars are legally accepted on roads.

“Engineers will need to create an environment where connected autonomous vehicles can operate safely with or without an operator during the transition period to a fully autonomous vehicle system. This transition period could last for several decades,” said Baxter.

AVs rely on collecting and analysing large quantities of data, which could help investigators establish the cause of the recent accident.

“Ultimately, this trove of data will improve safety and help detect fraudulent insurance claims, which will benefit every motorist,” said Brook.

The shift towards AVs is expected to move the risks involved in driving away from driver error and towards product liability.

A lawsuit has already been filed over a collision between a self-driving car and a motorcyclist that occurred last year, Clyde & Co highlighted.

In 2017, motorcyclist Oscar Nilsson was travelling behind a self-driving Chevy Bolt in San Francisco when the vehicle began to switch lanes.

Nilsson drove forward into the space, but the vehicle abandoned its lane change, colliding with him and knocking him to the ground.

On 22 January 2018, Nilsson sued General Motors in what is the first known lawsuit involving a self-driving vehicle.

The lawsuit alleges that there was a safety driver in the Chevy Bolt at the time of the accident, but that “he kept his hands off the Self-Driving Vehicle’s steering wheel.”

The lawsuit further alleges that the safety driver initially commanded the vehicle to change lanes to the left, but that the AV veered back into Nilsson’s lane, hitting his motorcycle and knocking him to the ground.

“Let’s be clear: no one has claimed these vehicles are accident-proof,” said Brook.

“What we do know is that ultimately, they should be considerably safer than human drivers,” he added.

For a recent special report on the risks involved in the rise of AVs, undertaken by StrategicRISK in association with AIG, click here.