Why Everyone Has It Wrong About the Ethics of Autonomous Vehicles - John Basl and Jeff Behrends
Pages 75-82

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 75...
... We summarize the debate between the optimists and pessimists, articulate why both sides have failed to recognize the appropriate relationship between trolley cases and AV design, and explain how to better draw on the resources of philosophy to resolve issues in the ethics of AV design and development.

AV ACCIDENT SCENARIOS AND TROLLEY CASES

Autonomous vehicles will inevitably be in accident scenarios in which an accident that causes harm (to pedestrians, passengers, etc.)
From page 76...
... Again, a case involving an AV might have a similar structure: Perhaps an empty AV has gone out of control and will hit five pedestrians unless another AV with a single passenger drives itself into the first AV.

TROLLEY OPTIMISM

Trolley Optimism is the view that trolley cases can and should inform how AVs are programmed to behave in these sorts of accident scenarios.
From page 77...
... Sometimes, this is grounded in the idea that thought experiments that philosophers deploy are so idealized and unrealistic that they are useless for navigating the real world. We think these sorts of objections rest on a mistaken view of the function and value of thought experiments; we set that aside except to note that a key motivation for Trolley Optimism is that accident scenarios seem to closely resemble trolley cases.
From page 78...
... In a traditional algorithm, the instructions are laid out by hand, each step specified by a programmer or designer. In contrast, ML algorithms themselves generate algorithms whose steps for carrying out a task are not specified by any programmer.
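To make the contrast concrete (this illustration is ours, not the chapter's), the sketch below pairs a hand-coded policy, whose every step a programmer writes out, with a toy nearest-neighbor policy whose behavior is induced from labeled examples. The scenario features, data, and function names are invented purely for illustration.

    # Hypothetical sketch (Python): a hand-coded policy versus a policy
    # induced from labeled examples. Features, data, and names are invented.

    def handcoded_policy(distance_m, speed_mps):
        """Traditional algorithm: every step is written out by a programmer."""
        time_to_impact = distance_m / max(speed_mps, 0.1)
        if time_to_impact < 2.0:
            return "brake"
        return "maintain"

    def learned_policy(training_set, distance_m, speed_mps):
        """ML-style stand-in: a 1-nearest-neighbor rule whose behavior is
        induced from labeled examples rather than spelled out step by step."""
        def squared_distance(example):
            d, s, _label = example
            return (d - distance_m) ** 2 + (s - speed_mps) ** 2
        return min(training_set, key=squared_distance)[2]

    # Labeled input-output pairs chosen by designers; what the learned policy
    # does depends on which scenarios appear here and how they are labeled.
    training_set = [
        (5.0, 10.0, "brake"),
        (8.0, 15.0, "brake"),
        (50.0, 10.0, "maintain"),
        (60.0, 20.0, "maintain"),
    ]

    print(handcoded_policy(6.0, 12.0))              # brake
    print(learned_policy(training_set, 6.0, 12.0))  # brake

Both policies happen to agree here; the difference the chapter draws on is where the decision rule comes from, not what it outputs in any one case.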
From page 79...
... For example, to produce an AV that behaves in a particular way when it suddenly confronts a scenario where it must swerve and risk harm to its passenger or maintain course and hit a number of pedestrians, such scenarios must be included in the training set and a particular input–output pair marked as desirable. This is not the only way to achieve the desired behavior; the point is that behavior in particular scenarios is influenced by choices that programmers and designers make about how to train the ML algorithms.
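A minimal sketch of that point, again ours rather than the authors': the same simple learning procedure produces different behavior in an accident scenario depending on whether designers include that scenario, with a desired action marked, in the training data. The feature encoding (pedestrians ahead, passenger risk) and the labels are assumptions made only for illustration.

    # Hypothetical sketch (Python): the same learning procedure yields
    # different behavior in an accident scenario depending on whether
    # designers include that scenario, with a labeled desired action,
    # in the training data. Features and labels are invented.

    def train_and_act(training_set, scenario):
        """1-nearest-neighbor stand-in for a learned driving policy."""
        def squared_distance(example):
            (pedestrians, passenger_risk), _action = example
            return ((pedestrians - scenario[0]) ** 2
                    + (passenger_risk - scenario[1]) ** 2)
        return min(training_set, key=squared_distance)[1]

    ordinary_driving = [
        ((0, 0.0), "maintain"),
        ((0, 0.1), "maintain"),
        ((1, 0.0), "brake"),
    ]

    # Designers add one accident scenario and mark "swerve" as the desired output.
    with_accident_case = ordinary_driving + [((5, 0.8), "swerve")]

    accident_scenario = (5, 0.9)  # five pedestrians ahead, high risk to passenger
    print(train_and_act(ordinary_driving, accident_scenario))    # brake
    print(train_and_act(with_accident_case, accident_scenario))  # swerve

Note that the first call still produces some behavior in the accident scenario even though no such scenario informed its training, a point taken up in the page 80 excerpt below.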
From page 80...
... That is, after engaging in careful deliberation about the relevant values, one should not let accident scenarios, or anyone's verdicts about how an AV should behave in them, inform the training of the ML algorithms used in the AV. The resulting algorithm will still generate behaviors in such scenarios, but the training set won't have been designed to generate any particular behaviors in those scenarios.
From page 81...
... This programmer might see it as regrettable that the best way to maximize lives saved overall, given the decision the design team faces, will produce an AV that veers to kill the five instead of the one in a very narrow range of cases, while still acknowledging that this is the approach that conforms with the principle.

CONCLUSION

To be clear, we are not endorsing any particular view of how AVs should be trained or a particular principle as governing that decision.

