With the introduction and growth of driverless cars (or autonomous vehicles, “AVs”), a pressing ethical question has arisen: how should these cars be programmed in situations where a collision is clearly unavoidable? Faced with the choice of saving the “driver” (or the occupants of the vehicle) by plowing into a group of pedestrians, or sacrificing the occupants for the greater good, should the AV be programmed to save as many lives as possible?
A study led by Azim Shariff of the Culture and Morality Lab at the University of Oregon, with additional researchers from France and MIT, tested the public’s attitudes toward these kinds of decisions. Respondents were asked what they thought the AV should be programmed to do, both in scenarios where they were the occupant of the vehicle and from the perspective of the pedestrians (or some other innocent party, such as a school bus or another vehicle carrying children).
Responses to the survey were mixed, with many respondents saying they would choose the option that favoured the greater good. But the real answer becomes clear if we look at the issue from a broader perspective: set all AVs to protect the owner/driver. In an unavoidable collision between two AVs, each will then respond to protect its own occupants.
As we march toward more AVs on our roads, one thing is becoming clear: “…driverless cars will be far better at avoiding collisions than humans.” So says a report from the Conference Board of Canada, which predicts an 80% reduction in traffic fatalities once driverless cars become the majority of vehicles on the road.
With human drivers, we can assume they would naturally default to protecting themselves, and their motor reactions in a split-second decision would reflect that: swerve to avoid the accident; protect the driver. Once the majority of vehicles on the road are AVs, the basic programming default in all of them should likewise be to protect the driver. That way, even the vehicle about to be hit will respond, if it can, to protect its own occupants. Superseding that baseline programming with lines of code that instruct an AV to make proactive choices about who lives and dies introduces far too many thorny and complicated decisions. Moreover, what makes us so certain the computer will have enough data to make the right proactive decision to “kill or injure the fewest (or oldest) people” in any scenario? What if that school bus potentially carrying 20 children is empty?
The only reasonable solution is to program every AV to protect its occupants so that (eventually) all the cars on the road have that at their core.
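To make that default concrete, here is a minimal, hypothetical sketch in Python. Nothing here comes from any real AV software stack; the Maneuver type, the risk numbers, and the choose_maneuver function are all illustrative assumptions, meant only to show what a "protect the occupants" baseline looks like when no utilitarian override is layered on top:

from dataclasses import dataclass

@dataclass
class Maneuver:
    description: str
    occupant_risk: float  # estimated probability of occupant injury, 0.0-1.0

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Baseline policy: in an unavoidable collision, pick the maneuver
    that minimizes risk to this vehicle's own occupants.

    Deliberately contains no 'greater good' logic. The article's argument
    is that every AV running this same default yields a predictable
    outcome, since each vehicle (including the one about to be struck)
    acts to protect its own occupants.
    """
    return min(options, key=lambda m: m.occupant_risk)

# Illustrative scenario with made-up risk estimates:
options = [
    Maneuver("brake hard in lane", occupant_risk=0.30),
    Maneuver("swerve into oncoming lane", occupant_risk=0.65),
]
print(choose_maneuver(options).description)  # -> "brake hard in lane"

The point of the sketch is what is absent: no count of pedestrians, no guess at how many children a school bus might hold; exactly the data the argument above says the computer cannot reliably have.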
Comments
No one cares who lives. Tell them who PAYS when someone dies.
Can we assume that the AVs can talk to each other to calculate the optimal outcome??