Autonomous vehicles: How do you define “safe driving” in terms a machine can understand?

Writing the robotic rules of the road

WHEN people learn to drive, they subconsciously absorb what are colloquially known as the “rules of the road”. When is it safe to go around a double-parked vehicle? When pulling out of a side street into traffic, what is the smallest gap you should try to fit into, and how much should oncoming traffic be expected to brake? The rules, of course, are no such thing: they are ambiguous, open to interpretation and rely heavily on common sense. The rules can be broken in an emergency, or to avoid an accident. As a result, when accidents happen, it is not always clear who is at fault.

All this poses a big problem for people building autonomous vehicles (AVs). They want such vehicles to be able to share the roads smoothly with human drivers and to behave in predictable ways. Above all they want everyone to be safe. That means formalising the rules of the road in a precise way that machines can understand. The problem, says Karl Iagnemma of nuTonomy, an AV firm that was spun out of the Massachusetts Institute of Technology, is that every company is doing this in a different way. That is why some in the industry think the time has come to devise a standardised set of rules for how AVs should behave in different situations.

Can safe-driving rules really be defined mathematically? It sounds crazy; but if it could be done, it would provide welcome clarity for both engineers and regulators. A clear set of rules would free carmakers from having to make implicit ethical choices about how vehicles should behave in a given situation; they would just have to implement the rules. In the event of an accident, suggests Amnon Shashua of Mobileye, a provider of AV technology, an AV company would not be liable if its vehicle followed the rules. But if a sensor failure or software bug meant that the rules were broken, the company would then be liable. There would still be plenty of scope for innovation around sensor design and control systems. But the robotic rules of the road would be clearly defined.

Dr Shashua and his colleagues published a first attempt to devise such rules in a paper that came out late last year. Their framework, called “Responsibility-Sensitive Safety”, lays down mathematical rules for various events, such as lane-changing, pulling out into traffic and driving cautiously when pedestrians or other vehicles are partially occluded. The framework covers all 37 pre-crash scenarios in the accident database maintained by NHTSA, America’s car-safety regulator. Dr Shashua would like it to be adopted as the basis of an open industry standard. In the meantime, his company is already using these ideas in the autonomous-driving platform it is developing with BMW, Fiat Chrysler and several parts-makers.
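To give a flavour of what a mathematically defined rule looks like, here is a sketch in Python of the Responsibility-Sensitive Safety paper's minimum safe following distance: the rear car is assumed to accelerate at its worst for a short response time and then brake only gently, while the front car brakes as hard as possible. The numerical parameter values below are illustrative assumptions for this sketch, not Mobileye's.

```python
def rss_safe_gap(v_rear, v_front, rho=1.0,
                 a_max_accel=3.0, a_min_brake=4.0, a_max_brake=8.0):
    """Minimum safe longitudinal gap, in metres, between a rear vehicle
    (speed v_rear, in m/s) and a front vehicle (speed v_front) travelling
    the same way, per the RSS formulation: assume the rear car accelerates
    at a_max_accel for a response time rho, then brakes at only a_min_brake,
    while the front car brakes at a_max_brake. Parameter values here are
    illustrative assumptions, not Mobileye's."""
    v_worst = v_rear + rho * a_max_accel          # rear car's speed after the response time
    gap = (v_rear * rho
           + 0.5 * a_max_accel * rho ** 2        # distance covered while still accelerating
           + v_worst ** 2 / (2 * a_min_brake)    # rear car's worst-case stopping distance
           - v_front ** 2 / (2 * a_max_brake))   # front car's shortest stopping distance
    return max(gap, 0.0)                         # a non-positive gap means any distance is safe
```

With both cars at 20 m/s (72km/h), these assumed parameters demand a gap of about 63 metres; if the front car is travelling much faster than the rear one, the required gap falls to zero.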

Last month Voyage, another new AV company, made a similar proposal, called “Open Autonomous Safety”. It also defines the correct, safe behaviour for vehicles in a range of circumstances, including pedestrians being in the road, nearby vehicles reversing and arrival at a four-way stop. In addition, Voyage has made its internal safety procedures, materials and test code all “open source”, with the aim of providing “a foundational safety resource in the industry”.

This is all a good start, says Dr Iagnemma, whose own company is also planning an announcement in this area. Bryant Walker Smith, a law professor at the University of South Carolina who studies driverless-car regulations, similarly welcomes the proposals from Mobileye and Voyage, but warns that it is too soon for regulators to “calcify dynamic conversations that are fundamentally technical in nature”. It will take years rather than months for the industry to cohere around a standard, Dr Iagnemma predicts. But he is optimistic that this will happen eventually, because discussions are already under way and because many people working in the field of autonomous vehicles are recent recruits from academia, who consider sharing and open-sourcing to be second nature.

One area where sharing would speed up the development of a safety standard is so-called “edge cases”—rare events that tax the capabilities of autonomous systems, such as unexpected behaviour by other drivers, debris on the road, plastic bags blowing in front of a vehicle and so on. Because such events occur infrequently, and computers lack the common sense to decide how to respond, training AVs to cope with edge cases is hard. But by sharing with each other data from edge cases that have actually happened, AV firms can test their systems in simulators to see how they would respond, and adjust them where needed, benefiting from each other’s experience. Normally, companies might be reluctant to help competitors in this way, notes Dr Iagnemma, but with AVs, “an accident affects the whole industry, and is bad for all of us”.

That is because the road-safety debate about autonomous vehicles is driven by emotion, not logic. “If we’re willing to say we’re happy with humans killing themselves on roads, we don’t have a principled basis to regulate AVs,” says Mr Walker Smith, who thinks much more could be done with human drivers to improve road safety: reducing and enforcing speed limits, for example. But the truth is that AVs will always be held to higher safety standards than human drivers.

Just how much higher? A study published last year by the RAND Corporation, a think-tank, did the number-crunching. It found that deploying AVs even when they are only 10% safer than human drivers would save far more lives in the long run (more than 500,000 over 30 years in America alone) than waiting until they are, say, 90% safer. But such stark utilitarianism sits poorly with how most people view the world, because AVs would still cause a lot of deaths. Indeed, Dr Shashua thinks a good target to aim for would be 99.9% safer—in other words, 1,000 times better than human beings. That would be such an obvious improvement that it would be difficult to argue against it. The wider point, though, is that even if it turns out to be possible to build AVs governed by mathematically rigorous rules of the road, the industry’s progress would still be subject to the vagaries of human nature.
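The intuition behind RAND's finding is that AVs improve faster once they are on the road, accumulating real-world miles. A toy sketch of that qualitative argument follows; every number in it is an invented assumption for illustration, not a figure from the RAND study.

```python
# Toy version of the RAND argument. All numbers are invented assumptions.
BASELINE = 37_000   # rough annual US road deaths (assumption)
YEARS = 30

def cumulative_deaths(deploy_threshold, on_road=0.85, off_road=0.97):
    """Total road deaths over YEARS if AVs launch once their fatality risk,
    relative to human drivers, falls to deploy_threshold. Risk improves
    faster on the road (shrinking by on_road per year) than in the lab
    (off_road per year), and bottoms out at 0.1, i.e. 90% safer."""
    risk, total = 0.9, 0.0                        # AVs start out 10% safer
    for _ in range(YEARS):
        deployed = risk <= deploy_threshold
        total += BASELINE * (risk if deployed else 1.0)
        risk = max(0.1, risk * (on_road if deployed else off_road))
    return total

early = cumulative_deaths(deploy_threshold=0.9)   # launch at 10% safer
late = cumulative_deaths(deploy_threshold=0.1)    # hold out for 90% safer
```

Under these made-up improvement rates, deploying early averts several hundred thousand more deaths over the 30 years, because the slow-improving "wait" scenario never reaches its launch threshold at all.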
This article appeared in the Science and technology section of the print edition under the headline “Robotic rules of the road”


Copyright © The Economist Newspaper Limited 2018. All rights reserved.

