Technology Assessments and Social Experiments

Chapter 13 Case Study: Autonomous Cars as Social Experiment?

The proliferation of artificial intelligence (AI) in society is often a social experiment. This is because AI is generally employed to perform tasks that humans normally perform, and these tasks often occur in, or have effects on, the general public. Self-driving cars are perhaps the most obvious case. The Trump administration is planning to remove safety rules that require features such as steering wheels and brakes in cars. The National Highway Traffic Safety Administration (NHTSA) wants to enable companies like Uber and Waymo (owned by Alphabet, the parent company of Google) to field fully autonomous cars as soon as possible, and many designs have no human backup controls. Department of Transportation Secretary Elaine Chao argues that driverless cars have the potential to lower fatalities on the nation’s highways. Waymo intends to launch a driverless taxi service in Arizona soon, but for the time being those vehicles will be equipped with steering wheels and pedals as backup. Not all experts are enthusiastic about fast-tracking driverless cars, according to a Reuters article. The Center for Auto Safety thinks the NHTSA needs more evidence that driverless cars are safe “before involuntarily involving human beings in their testing.”

Autonomous cars make decisions that affect not only their drivers but also other vehicles on the road and pedestrians. An Uber self-driving car killed a pedestrian after the car’s sensors failed to identify her as a human. She was detected a full six seconds before the crash but was classified as an unknown object. The system should have required the vehicle to slow down when it was confused, but it did not realize it needed to slow down until just 1.6 seconds before impact. Experts worry that improvements to sensors can only go so far; human drivers possess a wealth of understanding that AI does not. For example, one expert, Raj Rajkumar of the Robotics Institute at Carnegie Mellon, told Technology Review that when a human sees a toy in the road, the human generally recognizes that a child could be nearby, out of sight, and might suddenly appear to retrieve the toy. It would not be easy to teach computers to interpret all the objects they detect and draw all the relevant conclusions. Nor could human engineers easily compile a list of all the significant items a computer might need to be taught to identify and all the possible relevant circumstances such objects might signify.

Another expert at CMU, Herman Herman, points out that the safety and predictability of the technology change when it is scaled up. When there are just a few autonomous cars on the road, they might be quite safe. But if most cars were autonomous, would they still be safe? Concerns remain about the bandwidth required for that many autonomous cars, about sensors interfering with one another in close proximity, and about the consequences of software crashes. Another issue is that autonomous cars enter a road governed as much by social norms and the game-theoretic decisions of human agents as by traffic laws. Will autonomous cars be able to predict human driver behavior the way human drivers can?
Or will human drivers learn to take advantage of autonomous cars’ social blindness and out-game them, for instance by not yielding the right of way on the understanding that the autonomous car will always stop to avoid an accident? Allowing autonomous cars on the freeway is indeed a social experiment that could change not only who drives but also the social norms governing the road. It is an experiment that few of us have consented to join.

Should the NHTSA view the introduction of autonomous cars as a social experiment? Why or why not? If autonomous cars are a social experiment, what should be done about the problem of consent? Finally, if viewed as a social experiment, what would count as success or failure? More lives saved? More human autonomy? A ratio of the two?

Case study by Robert Reed
