Frontiers of Engineering: Reports on Leading-Edge Engineering from the 2019 Symposium (2020)

Why Everyone Has It Wrong About the Ethics of Autonomous Vehicles

JOHN BASL
Northeastern University

JEFF BEHRENDS
Harvard University

Autonomous vehicles (AVs) raise a host of ethical challenges, including how they should interact with human drivers in mixed-traffic environments, how responsibility should be assigned when an AV crashes or causes a crash, and how to manage the social and economic impacts of AVs that displace human workers. However, public and academic discussion of the ethics of AVs has been dominated by the question of how to program AVs to manage accident scenarios, and in particular whether and how to draw on so-called “trolley cases” to help resolve this issue. Some in the debate are optimistic that trolley cases are especially useful when addressing accident scenarios, while others are pessimistic, insisting that such cases are of little to no value.

We summarize the debate between the optimists and pessimists, articulate why both sides have failed to recognize the appropriate relationship between trolley cases and AV design, and explain how to better draw on the resources of philosophy to resolve issues in the ethics of AV design and development.

AV ACCIDENT SCENARIOS AND TROLLEY CASES

Autonomous vehicles will inevitably find themselves in scenarios in which an accident that causes harm (to pedestrians, passengers, etc.) is unavoidable. Whereas human drivers have very limited ability to navigate such circumstances with any sort of control, AVs might be in a position to “decide” how to distribute the harms. Because AVs offer some ability to exercise control over how harms are distributed, it has seemed to many that it is essential to think carefully about how to program AVs for accident scenarios. The question is how to do so.


It has not escaped notice that some accident scenarios bear a resemblance to what are known in philosophy as “trolley cases.” These are imagined scenarios in which a runaway trolley will result in the death of some number of individuals unless a choice is made to divert or otherwise alter the trolley’s course, resulting in some other number of deaths.

In the classic trolley case, the trolley is headed down a track and will kill five people who cannot escape. A bystander has the ability to pull a switch and divert the trolley onto another track. However, on this track there is one person who cannot escape and will die if the trolley is diverted. A similar-seeming scenario involves an AV that is traveling down a street when suddenly a group of pedestrians runs into the street. The only way to avoid hitting them is to take a turn that will result in the death of a pedestrian on the sidewalk.

In another version of the trolley case, a trolley cannot stop and will kill five people unless an object of sufficient weight is pushed in front of it. A bystander has the option of pushing a large person off a bridge and onto the tracks in a way that would stop the trolley before it kills the five. Again, a case involving an AV might have a similar structure: Perhaps an empty AV has gone out of control and will hit five pedestrians unless another AV with a single passenger drives itself into the first AV.

TROLLEY OPTIMISM

Trolley Optimism is the view that trolley cases can and should inform how AVs are programmed to behave in these sorts of accident scenarios. The general proposal is that various kinds of trolley cases can be constructed, a verdict reached about what action or behavior is appropriate in each case, and that verdict then applied to AVs, programming them to behave in a way that mirrors the correct decision in the analogous trolley case (Hübner and White 2018; Lin 2013; Wallach and Allen 2009).

While trolley cases may be born of philosophy, Trolley Optimism is not confined to philosophy departments (see Achenbach 2015; Doctorow 2015; Hao 2018; Marshall 2018; Worstall 2014). Consider the Massachusetts Institute of Technology’s (MIT’s) Moral Machine project, which has a variety of components. One is a website that presents visitors with different accident scenarios and asks how the visitor thinks the car ought to behave in that scenario. The scenarios involve many variables, testing visitors’ judgments about, for example, how to trade off people and animals, men and women, the elderly and children, and those who obey walk signals and those who don’t.

While some might see the Moral Machine project as simply a tool for collecting sociological data, others think that the data, in aggregate, should be used to decide how AVs should be programmed to behave in accident scenarios (Noothigattu et al. 2017). Whereas philosophers might endorse a type of Trolley Optimism that aims to determine the correct thing to do in a trolley case and program AV behavior in accident scenarios accordingly, the Moral Machine’s democratic variant leaves it up to the people.
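
To make the “democratic variant” vivid, here is a minimal sketch of what aggregating such verdicts could look like (a toy majority-vote aggregator in Python with invented scenario names; the actual proposal of Noothigattu et al. 2017 is a considerably more sophisticated voting-based system):

    from collections import Counter

    # Hypothetical survey responses: for each dilemma, the option each
    # respondent judged preferable. Scenario names and options are
    # illustrative, not drawn from the Moral Machine dataset.
    responses = {
        "swerve_or_stay": ["swerve", "stay", "swerve", "swerve", "stay"],
        "spare_child_or_adult": ["child", "child", "adult"],
    }

    def aggregate_policy(responses):
        """For each scenario, adopt the option most respondents chose."""
        return {scenario: Counter(votes).most_common(1)[0][0]
                for scenario, votes in responses.items()}

    print(aggregate_policy(responses))
    # {'swerve_or_stay': 'swerve', 'spare_child_or_adult': 'child'}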

SOME QUESTIONABLE GROUNDS FOR PESSIMISM

Trolley Pessimism is the view that it is a mistake to draw on trolley cases to think about how to program AVs to behave in specific accident scenarios. Different forms of Trolley Pessimism can be distinguished on the basis of what mistake they identify.

Challenges to the Validity of Thought Experiments

One basis for Trolley Pessimism is a distaste for using thought experiments to arrive at conclusions. Sometimes, this is grounded in the idea that thought experiments that philosophers deploy are so idealized and unrealistic that they are useless for navigating the real world.

We think these sorts of objections rest on a mistaken view of the function and value of thought experiments; we set that aside except to note that a key motivation for Trolley Optimism is that accident scenarios seem to closely resemble trolley cases. If trolley cases are useless for thinking about accident scenarios, it isn’t because the cases are too unrealistic to be of any use. At the very least, a plausible basis for pessimism must articulate the differences between trolley cases and AV accident scenarios that prevent us from drawing reasonable conclusions about what to do in the latter from judgments about the former.

Disanalogy

Another basis for Trolley Pessimism tries to show that there is indeed some point of difference between trolley cases and the behavior of AVs in accident scenarios that makes verdicts about the former inapplicable to decisions about the latter. What are the differences between trolley cases and AV accident scenarios that justify this form of pessimism?

Nyholm and Smids (2016) identify several points of disanalogy. For example, trolley cases set aside questions of the moral and legal liability of those who are deciding how to act. The person who will decide whether to divert the trolley, it is assumed, will not be held responsible or liable for whichever choice they make. But these considerations should inform deliberations about how AVs should behave in accident scenarios.

Another point of disanalogy is that, in trolley cases, the outcomes of various decisions are stipulated to be known with certainty, whereas in the case of AV accident scenarios, despite what one may want or intend a vehicle to do, there is some uncertainty about whether the vehicle’s behavior will generate the desired outcome.


Again, we think this is not a plausible basis for Trolley Pessimism, as explained below.

Ways to Address Trolley Pessimism

While it is true that traditional trolley cases do stipulate away issues of legal and moral liability and stipulate outcomes with certainty, there is in principle no reason why thought experiments can’t take these variables into account.

It is possible to develop a case that asks what should be done assuming some particular legal liability regime, enumerating the costs to the agent making the decision. Similarly, a case could be constructed in which pulling a switch has an 80 percent chance of altering the course of a trolley, incorporating deliberations about whether this alters one’s moral obligations. The creators of the Moral Machine might even be invited to build these variables into their cases, collect data about what people think should be done in those circumstances, and then aggregate the data to dictate the behavior of AVs in accident scenarios.
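
For instance, in the probabilistic variant just described, and on a simple expected-harm reading of it (one possible formalization we supply here, assuming a failed diversion leaves the original five at risk), the comparison would be:

    E[\text{deaths} \mid \text{pull the switch}] = 0.8 \times 1 + 0.2 \times 5 = 1.8
    E[\text{deaths} \mid \text{do nothing}] = 5

Whether minimizing expected harm is the right response to such uncertainty is itself a substantive ethical question; the point is only that the uncertainty can be written into the case rather than stipulated away.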

THE TECHNOLOGICAL BASIS FOR TROLLEY PESSIMISM: LESSONS FROM MACHINE LEARNING

There is a better basis for pessimism, one rooted in the very nature of the technology that enables AVs: machine learning (ML) algorithms.

What Is Machine Learning?

For those unfamiliar with ML algorithms, we contrast them with what we call traditional algorithms. An algorithm is a set of instructions for executing a task or series of tasks to generate some output given some input. In a traditional algorithm the instructions are laid out by hand, each step specified by a programmer or designer. In contrast, ML algorithms generate other algorithms whose steps for carrying out a task are not specified by a programmer.

A good analogy for some forms of machine learning, namely supervised and reinforcement learning, is dog training. It is not possible to just program a dog to respond to the words “sit,” “stay,” “come,” and “heel” by wiring its brain by hand. Instead, when training a dog, it is common to arrange for situations where the dog will engage in some desired behavior and then reward the dog. For example, a trainer might hold a treat in front of a dog’s nose and then lift the treat into the air, causing the dog naturally to raise its head and drop its back legs. The dog is then rewarded. After many repetitions, the word “sit” is said right before the treat is lifted. Eventually the dog sits on command, having learned an output for the input “sit.”

For machine learning, a programmer provides an ML algorithm with a training set, a dataset that includes information about which outputs are desirable and which are not. The learner then generates an algorithm that is meant not only to yield appropriate input–output pairs when it is fed inputs that match those in the training set, but also to extrapolate beyond the training set, yielding, the programmer hopes, desirable outputs for new input data.
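
To make the contrast concrete, the following sketch (in Python, with toy data, invented feature names, and a generic learner from scikit-learn used purely for illustration) places a hand-specified rule next to a learned one:

    from sklearn.tree import DecisionTreeClassifier

    # Traditional algorithm: every step is laid out by the programmer.
    def brake_rule(distance_m, closing_speed_mps):
        """Hand-coded rule: brake if the gap will close within 2 seconds."""
        return closing_speed_mps > 0 and distance_m / closing_speed_mps < 2.0

    # Machine learning: the programmer supplies labeled examples (a training
    # set); the learner generates the decision procedure itself.
    X_train = [[40.0, 5.0], [10.0, 8.0], [60.0, 1.0], [5.0, 6.0]]  # [distance_m, closing_speed_mps]
    y_train = [0, 1, 0, 1]                                         # 1 = "brake" marked as the desirable output
    model = DecisionTreeClassifier().fit(X_train, y_train)

    # The fitted model is then applied to inputs that were never in the
    # training set, in the hope that it extrapolates sensibly.
    print(model.predict([[12.0, 7.0]]))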

Machine Learning and Autonomous Vehicles

Machine learning is a powerful tool. It allows programmers to develop algorithms to solve problems that would otherwise be extremely tedious or impossible.

The AVs likely to be on the road in the foreseeable future will rely on ML technologies; at the very least, machine learning is at the heart of the detection systems used in AVs. Those systems take in data from various sensors (radar, lidar, cameras) and translate those data into outputs that other AV systems use to drive the car: to maintain its position within driving lanes, or to slow when there is a car ahead but not when there is merely a piece of litter.
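
Schematically, and with entirely hypothetical function names rather than any production AV stack, the role of the learned detection system looks something like this:

    def detect_objects(sensor_frame):
        """Stand-in for a learned detector: maps raw sensor data (camera,
        lidar, radar) to labeled objects with estimated distances. In a
        real AV this is where the ML model sits; here it returns fixed,
        made-up detections."""
        return [{"label": "vehicle", "distance_m": 25.0},
                {"label": "litter", "distance_m": 8.0}]

    def plan_speed(detections, current_speed_mps):
        """Toy downstream logic: slow for a vehicle ahead, ignore litter."""
        for obj in detections:
            if obj["label"] == "vehicle" and obj["distance_m"] < 30.0:
                return current_speed_mps * 0.5
        return current_speed_mps

    print(plan_speed(detect_objects(sensor_frame=None), current_speed_mps=20.0))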

The fact that AVs depend so heavily on ML algorithms grounds a case for Trolley Pessimism. To see why, first note that how an AV behaves in any given accident scenario is mediated by how the algorithm that governs its behavior is trained. To produce particular behavior in a particular accident scenario, the ML training set must be organized accordingly. For example, to produce an AV that behaves in a specific way when it suddenly confronts a scenario in which it must either swerve and risk harm to its passenger or maintain course and hit a number of pedestrians, such scenarios must be included in the training set and a particular input–output pair marked as desirable.

This is not the only way to achieve the desired behavior; the point is that behavior in particular scenarios is influenced by choices that programmers and designers make about how to train the ML algorithms. And these choices are, in part, ethical choices.
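
As a deliberately simplified illustration (hypothetical labels and data, and again only one of several ways such behavior could be instilled), marking a response to an accident scenario as desirable is a labeling decision of exactly the same kind as the mundane ones:

    # Hypothetical training examples. Each pairs a described situation (the
    # input) with the behavior the designers have marked as desirable (the
    # output). The final entry encodes an ethical judgment in the same
    # format as the everyday entries above it.
    training_examples = [
        ({"situation": "clear_road"},                        "maintain_speed"),
        ({"situation": "vehicle_ahead", "distance_m": 15.0}, "slow_down"),
        ({"situation": "unavoidable_harm",
          "options": ("swerve_risk_passenger", "stay_hit_pedestrians")},
         "swerve_risk_passenger"),
    ]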

Ethical Choices in Machine Learning

AV programmers will have to make choices about, for example, what proportion of the training data is dedicated to accident scenarios at all. One programmer might focus on nonaccident, typical driving scenarios and include no data about how a car should behave in accident scenarios. Another might dedicate half the training data to everyday driving scenarios and half to accident scenarios. Let’s imagine these two programmers are on the same team, arguing about what proportion of the training set should be dedicated to scenarios in which the car detects that it is in an accident where harms can’t be avoided. The first programmer argues that the car will very rarely be in those kinds of situations and instead should be trained for the most likely scenarios. The second argues that even if the accident scenarios are rare, it’s extremely important to make sure the car does the right thing! The first programmer counters that dedicating enough of the training set to getting certain behaviors in accident scenarios could make the car less safe in typical driving scenarios or even put the car into accident scenarios more often! Clearly this argument over how to train the algorithm that will help govern AV behavior is an ethical one: it invokes various value judgments and judgments about how those values are implicated in potential outcomes.
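
The quantity the two programmers are arguing over can be pictured, in a very rough sketch with hypothetical names, as a single number chosen when the training set is assembled; the ethical disagreement is over what that number should be and why:

    import random

    def build_training_set(everyday_pool, accident_pool, accident_fraction, size=10_000):
        """Assemble a training set with a chosen share of accident scenarios.

        accident_fraction is the disputed value: 0.0 reflects the first
        programmer's position, 0.5 the second's."""
        n_accident = int(size * accident_fraction)
        return (random.choices(everyday_pool, k=size - n_accident)
                + random.choices(accident_pool, k=n_accident))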

Two things have now been established: decisions about how to organize the training regime for AV behavior are ethical decisions, and those decisions mediate questions about how AVs should behave in particular driving situations. It follows that trolley cases do not provide direct guidance about how AVs should behave in accident scenarios, despite any superficial similarities. There are several ways to see why.

Let’s suppose that in the imagined argument between the programmers above, the first programmer is correct: the algorithms that generate AV behavior should not be based on any data about accident scenarios. That is, the team concludes, after careful deliberation about the relevant values, that no accident scenarios (and no one’s verdicts about how an AV should behave in them) should inform the training of the ML algorithms used in the AV. The resulting algorithm will still generate behaviors in such scenarios, but the training set won’t have been designed to generate any particular behaviors in those scenarios. In this case, the answer to the question “should programmers try to model the behaviors of AVs on the verdicts of trolley cases?” is clearly “no!” because the programmers have accepted that they shouldn’t be trying to train for accident scenarios at all.

A Thought Experiment

Another way to illustrate the point is to recognize the way trolley cases—thought experiments, imagined scenarios used to help test more general principles—typically function in ethical theorizing.

Let’s imagine we are wondering whether we should accept a principle that we should act in such a way as to maximize the total number of lives saved (holding fixed things like whether the people whose lives are saved are good people, how large their families are, etc.). Someone asks us to consider the standard trolley case. We imagine a trolley hurtling down the tracks and must decide whether diverting it onto a track that results in fewer deaths is the right thing to do.

Let’s assume we come to see this trolley case as supporting the principle that we should maximize total lives saved. If we think that principle is true, programmers and designers should abide by it when deciding how to train AVs. The Trolley Optimist might think that the above case justifies efforts to ensure that an AV in an accident scenario will not drive into a larger crowd to spare a smaller one. However, it could very well turn out that abiding by the principle we’ve settled on has the implication that we are not justified in doing so.

To see why, imagine that in the programmers’ debate above both are committed to maximizing lives saved. The first programmer argues that this can be done by avoiding accident scenarios as much as possible, and that to do so they should not train the algorithm for accident scenarios at all but rather for how to stay out of them.


This might have the result that when an AV is in an accident scenario it does veer into a larger crowd to save a smaller one. But given that the programmers’ decision is to design for the whole range of situations the car will encounter, they haven’t failed to take into account the lesson of the trolley case; they’ve taken it into account in just the right way. The other programmer might come to agree, seeing that if they were to emphasize a training regime that included more accident scenarios that looked like trolley cases, the AV would end up in those scenarios more often or perform poorly in other driving scenarios, causing additional fatalities. This programmer might see it as regrettable that the best way to maximize lives saved overall, given the decision the design team faces, will produce an AV that veers to kill the five instead of the one in a very narrow range of cases, while still acknowledging that this is the approach that conforms with the principle.
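
One way to make the programmers’ shared commitment precise (a schematic formalization we are supplying for illustration, not one any design team has endorsed) is to write the principle as ranging over the whole distribution of situations the AV will face rather than over any single case:

    T^{*} = \arg\max_{T} \sum_{s} P(s)\, \mathbb{E}[\text{lives saved in } s \mid \text{behavior induced by training regime } T]

Because the choice variable is the training regime T rather than the action taken in a particular scenario s, a regime that yields the “wrong” trolley-case behavior in some rare s can still be the one the principle recommends overall.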

CONCLUSION

To be clear, we are not endorsing any particular view of how AVs should be trained or a particular principle as governing that decision. Our point is that the Trolley Optimist makes a mistake in thinking that the lesson from trolley cases is a lesson for how an AV should behave in a superficially similar case.

The ethical question that designers face is not about the right thing to do in a specific scenario but about how to design for the wide range of scenarios that AVs will find themselves in. Choices about how to design for one scenario are not isolated from design choices for others.

The upshot of this is not pessimism about the need for ethics in AV design, nor that trolley cases are useless for the task. The upshot is that designers and ethicists must be much more careful in identifying the decision that is actually to be evaluated and in considering how the technologies at issue relate to the ethical principles and reasoning to be deployed.

We hope this paper motivates a closer working relationship between ethicists and designers of AVs to ensure that the right problems are solved in the right way.

REFERENCES

Achenbach J. 2015. Driverless cars are colliding with the creepy trolley problem. Washington Post, Dec 29.

Doctorow C. 2015. The problem with self-driving cars: Who controls the code? The Guardian, Dec 23.

Hao K. 2018. Should a self-driving car kill the baby or the grandma? Depends where you’re from. MIT Technology Review, Oct 24.

Hübner D, White L. 2018. Crash algorithms for autonomous cars: How the trolley problem can move us beyond harm minimisation. Ethical Theory and Moral Practice 21(3):685–98.

Lin P. 2013. The ethics of autonomous cars. The Atlantic, Oct 8.

Marshall A. 2018. What can the trolley problem teach self-driving car engineers? Wired, Oct 24.

Noothigattu R, Gaikwad SS, Awad E, Dsouza S, Rahwan I, Ravikumar P, Procaccia AD. 2017. A voting-based system for ethical decision making. Available at https://arxiv.org/abs/1709.06692.


Nyholm S, Smids J. 2016. The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Ethical Theory and Moral Practice 19(5):1275–89.

Wallach W, Allen C. 2009. Moral Machines: Teaching Robots Right from Wrong. Oxford: Oxford University Press.

Worstall T. 2014. When should your driverless car from Google be allowed to kill you? Forbes, Jun 18.
