According to the first law of robotics, a robot “may not injure a human being or, through inaction, allow a human being to come to harm.”
But what if the robot’s choice (or, in this case, the self-driving car’s choice) were between running over pedestrians at a crosswalk and swerving into a concrete barrier, killing the car’s occupants?
It was science fiction for Isaac Asimov, but with autonomous, self-driving cars now becoming a reality, debate rages not just over how ready the public is for autonomous vehicles (a recent poll found that 26 per cent of Canadians “can’t wait” for the new technology to arrive) but also over whether the machines should be equipped with moral decision-making capabilities to respond to morally fraught scenarios.
On the one hand, we have those who say that it’s too early in the development of autonomous vehicle tech to get bogged down in moral dilemmas. Karl Iagnemma, CEO of NuTonomy, a company whose vehicles are currently being tested on public roads in Boston, says that his company’s goal is to create a safe vehicle, not a “sophisticated ethical creature.” “When a driverless car looks out on the world, it’s not able to distinguish the age of a pedestrian or the number of occupants in a car,” Iagnemma said. “Even if we wanted to imbue an autonomous vehicle with an ethical engine, we don’t have the technical capability today to do so.”
On the other hand, some argue that the future will inevitably involve autonomous vehicles making split-second decisions, often with human lives on the line. With this in mind, a group of researchers at MIT has been putting the general public through an interactive project called the Moral Machine, which attempts to glean something of the common opinion on how self-driving cars should behave when faced with morally complex situations.
The program offers a series of simplistic ‘this or that’ choices – say, between having the autonomous car hit a concrete barrier, killing all three of its passengers, and having it run over four dogs and an elderly man – and tallies the users’ judgments along metrics gauging, for instance, how much weight they give to the age, gender or even fitness of people when it comes to ethical decision-making.
The research group sees its work as valuable not in slowing or halting the progress towards self-driving cars and similar technologies but in learning more about people’s beliefs so as to help ease the transition to driverless tech. “There is a real risk that if we don’t understand those psychological barriers and address them through regulation and public outreach, we may undermine the entire enterprise,” said Iyad Rahwan, an associate professor at the MIT Media Lab. “People will say they’re not comfortable with this. It would stifle what I think will be a very good thing for humanity.”
In a study on the topic published in the journal Science, researchers found that users were broadly utilitarian in their moral thinking, preferring the option that causes the fewest deaths, and consistently chose to sacrifice the lives of passengers for the sake of pedestrians. Yet, perhaps unsurprisingly, the same people said that any autonomous vehicle they themselves rode in should be programmed to protect its passengers at all costs.