Sunday, January 17, 2016
Self-driving cars need to make the right ethical judgements
As an autonomous car drives down a street, a frail old man suddenly steps into its path from the right. Simultaneously, a child steps into its path from the left. It is too late to brake. If the car swerves to the right, the old man dies, the child lives. If it swerves to the left, the old man lives, the child dies. If it continues straight ahead both will die. What is the ethically correct decision for the car?
Variations of this kind of ethical dilemma, often referred to as the ‘trolley problem’, currently receive much attention. At first glance this seems to be a really difficult question. In the following we show that the problem is largely irrelevant for self-driving cars. We progress from the weakest to the strongest argument:
a) No good solution to these dilemmas exists or can exist. Humans are not able to make a ‘right’ choice when faced with such situations either.
This dilemma is a good starter for night-long discussions. None of the alternatives one comes up with is ‘ethically right’. A human driver in the same situation will necessarily make a choice, but any action he chooses is a bad one. How can we require a machine to make an ethical choice that no human is capable of making?
b) These dilemmas assume certainty and knowledge that does not exist in such situations.
For these dilemmas to work, the harmful outcomes of all the actions must be known and certain. But in practice, nothing is certain. There is no certainty about the extent of damage caused by each of the actions. There is no certainty about the behavior of either of the victims as the car approaches them. These cars cannot have exact knowledge of the age, gender, health, etc. of the persons in front of them, and cannot correctly predict the resulting harm.
c) These dilemmas are always incredibly contrived. The probability that a car faces such a situation is extremely low.
Why don’t we discuss such dilemmas today, when billions of trips are taken daily in cars and several thousand people die each day in traffic accidents? Cases where drivers face such situations are extremely rare today and may be even less probable for self-driving cars. From a practical perspective, therefore, these dilemmas may be completely irrelevant.
d) The question is wrong.
When looking at ethical questions there can be a huge difference between considering what is right and considering what is wrong. The ethical dilemma is usually presented in such a way that the self-driving car needs to take the ethically ‘right’ decision. As we know, the trolley problem has no ethically right solution (because in principle we cannot weigh one life against another), which makes it practically impossible for self-driving cars to solve the dilemma.
But, like humans who face this problem, self-driving cars do NOT need to take ethically right decisions. Our legal system and our ethics have evolved sufficiently to recognize that many problems exist where it is hard to decide whether an action is legally or ethically correct. The standard by which we measure actual behavior against the law and against our moral compass is therefore not so much whether an action is ethically right but rather whether an action is ethically wrong: actions must not violate laws or ethical standards! This difference in the problem statement matters! Instead of requiring self-driving cars to positively take ethically correct decisions, what our society really requires of them is that they avoid taking ethically wrong decisions!
If we reformulate the dilemma in this way, the fundamental problems vanish. Neither is it right to kill the child, nor is it right to kill the old man. But as it is impossible to avoid one of these outcomes, neither action can be characterized as legally or ethically wrong. While both outcomes are bad and deplorable, no court would find the algorithms at fault because they led to one or the other harmful outcome. Precisely because there is no ‘right’ decision (neither that the victim must be the child nor that the victim must be the old man), no court will argue that the actual action taken by the self-driving car in this specific scenario was wrong.
In summary, much of the current discussion about the ethical dilemmas of life-and-death decisions related to self-driving cars is misplaced because it is concerned with finding right decisions where no right decisions are possible, instead of realizing that self-driving cars can get by as long as they are able to avoid making decisions that are wrong.