Questions over autonomy
Self-driving cars are starting to revolutionise how we move from one place to another. They are also raising major safety and responsibility issues.
In an article in the Journal of Information Technology, Professor Michael Myers of the University of Auckland and co-authors examine trustworthiness and the allocation of responsibility in autonomous driving, focusing on the ethical and legal challenges around safety.
“Our research indicates there are contradictions in how responsibility is assigned for supposedly safe autonomous systems,” says Myers, of the university’s business school.
“These contradictions are linked, and they reveal ongoing confusion about how responsibility is shared among the parties involved.”
In an early case demonstrating the uncertainty around responsibility in the event of an accident, the United States National Transportation Safety Board (NTSB) initially found human error to blame for a 2016 Tesla crash.
The board later revised its decision and criticised the carmaker for allowing its Autopilot feature to be activated on roads the system hadn’t been designed for.
Autonomous driving systems mix a high level of socio-technical complexity with significant risks, says Myers, highlighting the massive recall of more than two million Teslas in December 2023.
“It’s ironic that one of the things automation is meant to deliver, letting people relax instead of driving, still requires the driver to actively monitor the system when it isn’t fully autonomous. It’s clear drivers aren’t always doing that, and this creates significant safety concerns.”
Monitoring an autonomous system requires a human who understands how it operates, says Myers, yet the NTSB has found humans “notoriously inefficient” at this kind of monitoring.
“We’re also finding that this kind of technology often leads to deskilling and, if an issue arises, a person may no longer have the skills needed to react.”
Many vehicles come preloaded with software that is updated regularly. The researchers say that although manufacturers continue to promote automation, people often have no choice about the level of automation installed in a vehicle and little knowledge of how it operates. Yet if there’s an accident, the driver tends to be blamed.
Motorists, however, might not know whether they’re fully in control in an emergency, say the authors, and as self-driving technology develops, the question of who’s liable in the event of a crash needs far greater attention.
Because automated driving systems interact with the external environment, they can’t be tested against every situation they might encounter, says Myers. As a result, they can behave unpredictably in extreme weather, around wildlife, or on roads the car is unfamiliar with.
“We are rushing headlong into automation without understanding all the consequences,” says Myers. “Our project demonstrates the need for research that critically examines the social, political and technical aspects of autonomous driving systems, especially in relation to safety, responsibility and trust.”