Technology Won’t Stop Self-Driving Cars Coming to the Gulf. But Morality Might

Faisal Al Yafai

When Google Glass debuted in 2012, it was hyped as the first step toward a hands-free future of easily accessible information. But within three years, Google had shelved the product. What killed Google Glass was not technology, but morality. The public hated the idea of being filmed without their knowledge. And countries like the Gulf states, which have laws against taking images in public without consent, were expected to ban it. With the public mood strongly against the technology, software developers didn't bother investing in applications for it, and it died a quiet death.

Social and moral issues can sometimes put the brakes on technological innovation. The same could yet happen with self-driving cars, which, despite acres of publicity and hype, and despite trials in several countries, including the Gulf states, still exist in a technological, legal and moral gray area.

A car crash in Arizona has thrown these issues into sharp relief. A self-driving Uber car killed a woman as she walked in the street, the first fatal crash involving an autonomous vehicle and a pedestrian. While not all the facts are yet known, two aspects are salient. The first is that a human driver was present but did not take control before the collision. The second is that, according to police, the vehicle did not slow down as it approached the victim, suggesting that neither the on-board artificial intelligence (AI) system nor the human recognized the danger.

Artificial intelligence will soon be all around us, but self-driving cars will be its first big test: the first time robots with the power to kill humans exist among us in such large numbers. Whether they are accepted and adopted will turn not on questions of technology alone, but on questions of morality that have been debated for thousands of years.

At the root of the technology is an old philosophical problem: how do you choose who lives and who dies? In recent years, and specifically with regard to self-driving vehicles, the question has been framed as the “Trolley Problem.” It runs like this: a trolley is hurtling down the track toward five people. If it hits them, it will certainly kill them. You can pull a lever that will redirect the trolley, but only onto another track, where it will kill one person instead. What do you do?

The question is particularly relevant because the scenario forces you to resolve two problems at once. Not only must you decide whether five human lives are worth more than one, but also whether it matters that you will be morally culpable for the death of that one person. Do nothing and five die, or actively kill one.

This moral quandary is multiplied many times over in the real world of driving, where the variations add up: is it better for the car to drive into a school bus or a coach carrying older people? Must the car's AI protect the driver at the expense of a pedestrian? In these situations, humans make split-second decisions that they often cannot recall or explain. But machines must be taught which decisions to make, and it is possible to imagine a future courtroom in which the reasoning of the machine is laid bare for its victims to hear.

Nor is it enough for these issues to be understood implicitly, or to be contained in detailed terms and conditions. As the current furor over data mining of Facebook profiles highlights, the public can agree to terms and conditions without fully understanding their implications. Once those implications are spelled out, public consent can be withdrawn. The same could happen with the AI algorithms that govern self-driving cars.

Already, some governments have started to set guidelines. Germany, for instance, has adopted guidelines stating that self-driving vehicles cannot prioritize one human life over another based on age, gender, race or disability. The US has yet to do so, although individual states have their own rules. In the Gulf, the UAE has tasked a government agency with drawing up regulations for self-driving vehicles, but these are still at the drafting stage.

Worse for advocates of autonomous vehicles, the best arguments in favor of self-driving cars lack emotional weight. The issue is not merely how many people self-driving vehicles will kill, but how many will not be killed because of their use. Media reports were quick to point out that, on the same day the self-driving car killed one person, human drivers would statistically have killed 16 people across the US. Replace all human drivers with self-driving vehicles and 16 more people would have lived that day.

In the Gulf countries, that figure is higher. Saudi Arabia, for example, has a road traffic death rate of 27.4 per 100,000 people, the highest in the G20. The UAE does best in the Gulf: at 10.9, it is at roughly the same level as the US, but both still lag behind the best performers, Sweden and Britain, at around 2.9 deaths per 100,000. But such statistics are difficult for the public to grasp. And there is also the visceral question of justice: families who have lost loved ones want to know who is guilty. They want someone to face trial and for justice to be served. Self-driving vehicles remove the possibility of assigning blame in that way. An algorithm can't go to jail.

When self-driving cars are finally ready to share road space with messy and unpredictable human drivers in large numbers, there will be accidents. How those accidents come about and what decisions the machines made will ultimately determine whether the public can accept them. The future of self-driving cars will be decided less by precise algorithms on a shiny screen and more by vague questions from dusty philosophical books.

Faisal Al Yafai is currently writing a book on the Middle East and is a frequent commentator on international TV news networks. He has worked for news outlets such as The Guardian and the BBC, and reported on the Middle East, Eastern Europe, Asia and Africa.
