Last updated May 3, 2019 at 9:48 am
Billions of dollars are being put toward autonomous vehicle research and development. However, there is concern about how humans and other animals will respond.
Globally, road crashes kill 1.3 million people a year and injure nearly 50 million more. Autonomous vehicles (AVs) have been identified as a potential solution to this issue if they can learn to identify and avoid situations leading to crashes.
Unlike human drivers, these vehicles won’t get tired, drive drunk, look at their phone, or speed. What’s more, AVs will reduce congestion and pollution, increase access to public transport, be cheaper, improve mobility for people with disabilities, and make transport fun again. Right?
Rightly or wrongly, billions of dollars are being poured into autonomous vehicle research and development to pursue this autopia. However, barely any resources or thought are being given to the question of how humans will ultimately respond to the AV fleet. In a city full of autonomous cars, how might our behaviour and use of city streets change?
In one scenario, people could act on the knowledge that these vehicles will stop any time someone chooses to step in front of them, bringing traffic to a halt.
Humans (and animals) will adapt
One of humans’ great strengths is our adaptability. We quickly learn to manipulate and exploit our environment. A future road environment saturated with autonomous vehicles will be no different.
For example, think about why you don’t walk out in front of traffic or drive through stop signs. Because other cars could injure or kill you, right?
But autonomous vehicles promise something new. They are being designed to “act flawlessly”.
There are two elements to this: the first is not making mistakes, and the second is compensating for the occasional errors and misjudgements that fallible humans make. Autonomous vehicles promise alignment with Asimov’s First Law of Robotics:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Now imagine crossing a road or highway in a city saturated by autonomous cars where the threat of being run over disappears. You (or any other mildly intelligent animal) might quickly learn that oncoming traffic poses no threat at all. Replicated thousands of times across a dense inner city, this could produce gridlock among safety-conscious autonomous vehicles, but virtual freedom of movement for humans – maybe even heralding a return to pedestrian rights of yesteryear.
A simple example of how this might happen comes from game theory. Take two scenarios at an intersection where pedestrians and vehicles negotiate priority to cross first. Each receives known “pay-offs” for behaviour in the context of the other’s action. The higher the comparative pay-off for either party, the more likely the action.
In the left-hand scenario below, the Nash equilibrium (the combination of strategies from which neither party can gain by unilaterally changing its own action) sits in the lower left quadrant, where the pedestrian has a small incentive to “stay” to avoid being injured by the manually driven car, and the driver has a strong incentive to “go”.
However, in the scenario on the right, the autonomous vehicle is designed to act flawlessly and so poses no threat to the pedestrian at all. While this might be great for safety, the pedestrian can now adopt a strategy of “go” at all times, forcing the AV to stay put.
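The shift described above can be sketched in a few lines of Python. The payoff numbers below are illustrative assumptions (the article’s actual figures are not reproduced here): they are chosen only so that “go” dominates for a human driver (a collision costs the driver far less than the pedestrian), while an AV programmed never to collide treats hitting a pedestrian as catastrophically costly.

```python
from itertools import product

def pure_nash(ped_payoff, veh_payoff, actions=("go", "stay")):
    """Return pure-strategy Nash equilibria of a 2x2 pedestrian/vehicle game.

    ped_payoff[(p, v)] and veh_payoff[(p, v)] give each party's payoff
    when the pedestrian plays p and the vehicle plays v. A cell is an
    equilibrium when neither party can do better by switching alone.
    """
    equilibria = []
    for p, v in product(actions, actions):
        ped_best = all(ped_payoff[(p, v)] >= ped_payoff[(q, v)] for q in actions)
        veh_best = all(veh_payoff[(p, v)] >= veh_payoff[(p, w)] for w in actions)
        if ped_best and veh_best:
            equilibria.append((p, v))
    return equilibria

# Manual driver: the pedestrian risks serious injury in a collision,
# while the driver's cost of a crash is assumed comparatively small.
manual_ped = {("go", "go"): -100, ("go", "stay"): 2,
              ("stay", "go"): 1, ("stay", "stay"): 0}
manual_veh = {("go", "go"): 1, ("go", "stay"): 0,
              ("stay", "go"): 3, ("stay", "stay"): -1}

# AV: it "acts flawlessly", so a collision is ruled out by design
# (modelled as a huge penalty for the AV), and the pedestrian faces
# no injury risk from stepping out.
av_ped = {("go", "go"): 2, ("go", "stay"): 2,
          ("stay", "go"): 1, ("stay", "stay"): 0}
av_veh = {("go", "go"): -100, ("go", "stay"): 0,
          ("stay", "go"): 3, ("stay", "stay"): -1}

print(pure_nash(manual_ped, manual_veh))  # [('stay', 'go')]
print(pure_nash(av_ped, av_veh))          # [('go', 'stay')]
```

With a human driver the unique equilibrium is “pedestrian stays, driver goes”; once the collision threat disappears, it flips to “pedestrian goes, AV stays” — exactly the strategy shift the article describes.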
Can this potential problem be overcome?
One solution might be to program algorithms into vehicles that make them occasionally, purposefully, run into people, animals or other vehicles. Although this would maintain a level of fear and caution in the population, legally and morally it is hard to see how this would be acceptable.
Another option could be infrastructure separating autonomous vehicles from vulnerable road users, such as pedestrians and cyclists. But the cost and reduction in amenity this would create would be enormous. Further, this type of solution could be applied now, negating much of the need for AV software and technology development in the first place.
A final, duplicitous idea is to simply turn off the safety systems that cause so-called “erratic vehicle behaviour” (i.e., slowing down to avoid hitting people). This is reported to have occurred when a self-driving Uber struck and killed a pedestrian in Arizona last year. However, if this is the solution, you then have to ask, “What is the transport problem autonomous vehicles are actually trying to solve?”
It won’t happen overnight
In the scenarios above, most of the fleet consists of autonomous vehicles, and humans adapt to their consistently safe behaviour. However, the complete transition to autonomous vehicles will not occur overnight and might create new crash situations that are, so far, poorly understood.
For example, we are developing simulations of interactions between vulnerable road users and a mixed fleet of autonomous vehicles and human-driven cars. These models show how inconsistencies between the behaviour of manual and autonomous vehicle types could even lead to more crashes during the transition.
The other day we built a small #ABM of #autonomous #cars being introduced into the fleet among current 'manual' cars, interacting with #pedestrians and #cyclists. Take home message is that even though AVs might 'act flawlessly', crashes may still increase as humans adapt :/ pic.twitter.com/juNzOLA63W
— var = Jason_Thompson (@Agent_Jase) April 11, 2019
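A toy simulation can show why crashes might rise mid-transition. This is a minimal sketch, not the authors’ model: all rules and parameters below (a manual driver yielding 20% of the time, a pedestrian who cannot tell vehicle types apart and learns a single “cars yield” belief from experience) are assumptions made for illustration.

```python
import random

def simulate(av_share, steps=10_000, seed=42):
    """Toy mixed-fleet model. AVs always yield to a pedestrian; manual
    drivers yield only 20% of the time. The pedestrian can't distinguish
    the two, so they maintain one running estimate of how often cars
    yield and step out with that probability. A crash occurs when the
    pedestrian steps out in front of a car that does not yield.
    """
    rng = random.Random(seed)
    yield_estimate, observations = 0.5, 1.0  # running belief that cars yield
    crashes = 0
    for _ in range(steps):
        car_is_av = rng.random() < av_share
        car_yields = car_is_av or rng.random() < 0.2
        if rng.random() < yield_estimate:  # pedestrian steps out
            if not car_yields:
                crashes += 1
        # update the belief with what the car actually did
        observations += 1
        yield_estimate += (car_yields - yield_estimate) / observations

    return crashes

for share in (0.0, 0.5, 1.0):
    print(f"AV share {share:.0%}: {simulate(share)} crashes")
```

Because the pedestrian’s belief converges to the fleet-wide yielding rate, the per-encounter crash risk is roughly (belief) × (1 − belief) — highest when the fleet is mixed. With no AVs pedestrians stay cautious, and with a full AV fleet no car ever hits anyone, but partway through the transition the emboldened pedestrian still regularly meets manual drivers, echoing the take-home message in the tweet above.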
The future for AVs under threat?
As AV technology rolls on, and the marketing hype surrounding it continues to draw attention and burn up investment dollars, it should be remembered that humans and animals will keep behaving as we always have: continually adapting to, and exploiting weaknesses in, our environment.
Part of the promise of autonomous vehicles is their proposed safety through deference to human life. But, if the point of transport systems is to enable efficient movement of people and goods for the benefit of society, this strength of AVs might prove to be their ultimate weakness as a viable mass transport mode.