Insoluble

An article in Wired (“Self-driving Cars Will Kill People. Who Decides Who Dies?” by Jay Donde, 9/21/17) nicely captures the way a new technology can create unprecedented problems. Ethicists love problems like this because they offer textbook cases for the application of moral theories. Thus, the problem in question is how to program self-driving vehicles so that they will be as safe as possible: not only for the passengers but also for bystanders (pedestrians, passengers in other cars, etc.).


           One particularly exquisite problem, or problem-type, was already well-known to ethicists before the advent of self-driving technology. It is known as the trolley problem after its original incarnation, but it has since taken many tantalizing forms. The basic scenario is that you are the conductor of a trolley whose brakes suddenly become inoperative. The trolley is hurtling down a hill toward five workers on the tracks. The only way to avoid killing them all is to turn the trolley onto a sidetrack. Unfortunately there is a solitary worker on that track. So the moral problem is: Should you divert the trolley, thereby deliberately killing an innocent person?


            Most people answer “Yes.” This seems to confirm the moral theory of utilitarianism, according to which the right thing to do is maximize the good; and, clearly, sparing the lives of five innocent people is better than sparing the life of only one, all other things equal.


However, a variant on the above asks you to imagine standing on a bridge over tracks along which a runaway trolley is hurtling. It so happens that there is also a fat man on this bridge, and only by pushing him off the bridge and into the path of the trolley could the trolley be prevented from crashing into five workers on the line. So here is the identical calculus: spare five lives by sacrificing one. But in this case many people’s intuitions would balk. Certainly mine do. I would never push that fat man off the bridge. It would never even have occurred to me if not for this thought experiment; but even so, it is “unthinkable” in the sense of being beyond the pale of ethical action.

But what is the difference? That is, what makes an ethical difference between the two cases? According to moral theorists, the second case exemplifies deontology, the theory that it is always wrong to treat somebody merely as a means. And what could be a better instance of that than “merely using” the innocent fat man to save the lives of five innocent strangers?

But our differing intuitions in the two cases create a real problem for moral theory, since presumably only one of the two theories can be true, given that they lead to contradictory moral injunctions. Even in the fat man case a utilitarian would say, “You should push the fat man off the bridge” (I know a noted utilitarian who maintains just that), whereas a deontologist would absolutely forbid it.

When I was a moralist, I was a staunch deontologist, and I was horrified by utilitarianism. However, even then I had to admit that the deontologist also faced some tricky problems. For example, what if the fat man were a useless old bum but the five people on the tracks were gifted children with bright prospects?

But now, as an amoralist, I brush aside all of these conundrums. They strike me as a game played by arbitrary rules that have nothing to do with reality. For I no longer believe there is such a thing as morality; hence moral theory is about literally nothing. You might as well debate how many angels can dance on the head of a pin.

All of the facts in these cases are clear. We know, ex hypothesi, the outcomes of the various actions or options available to us. We also know what our intuitions are. And that is all we need to know. We don’t need to make a further inference that relies on a moral theory in order to “figure out” what to do. We will simply act, based on our (beliefs about the) facts and our intuitions (or desires). For our intuitions are not a direct channel to some moral truth but only feelings based on our particular genes or upbringing or experiences or other purely causal factors.

           Thus, if I were the trolley conductor, I would probably (other things equal) turn the trolley onto the sidetrack. If I were standing on the bridge, I might shout out to the five people standing on the track even if this were futile, but I would not (other things equal) push the fat man into the path of the runaway trolley. There is no question of what would be the right or wrong thing to do, although for, say, a jury, there could be the question of whether they wanted to encourage or discourage a certain kind of action in their society.


So what about self-driving vehicles? How should the designer program them to deal with a trolley-type situation, where it’s, say, five lives versus one? As we have seen, there is really a host of different scenarios, so the designer will have much to ponder. But I think it is absurd to suppose he or she would ever come up with a single solution that applied to all possible types of cases, which could be countless … or even to just the two main types we have considered. Meanwhile, constraints of time and money will dictate a practicable solution, which is guaranteed to steer the vehicle in a way that violates most people’s intuitions in some situation or other. (Similarly, I have heard engineers claim they could build an airplane that would never crash, but the cost would be prohibitive. Risk is inherent to living.)

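To make the point concrete, here is a minimal, purely hypothetical sketch in Python (not drawn from the Wired article or from any real vehicle software; all names and numbers are invented) of what a single “minimize expected casualties” rule would look like. Applied to a sidetrack case and to an occupant-sacrifice case, the one rule hands down the same kind of verdict in both, which is precisely the uniformity that clashes with many people’s intuitions in at least one of them.

```python
# Illustrative toy only: a naive "minimize expected casualties" policy applied
# to two stylized trolley-type scenarios. All names and numbers are invented.
from dataclasses import dataclass
from typing import List

@dataclass
class Option:
    name: str
    expected_deaths: int  # stipulated, as in the thought experiments

def utilitarian_choice(options: List[Option]) -> Option:
    """Pick whichever option minimizes expected deaths."""
    return min(options, key=lambda o: o.expected_deaths)

# Scenario 1: stay the course (five workers die) or swerve onto the sidetrack (one dies).
sidetrack = [Option("stay on course", 5), Option("swerve to sidetrack", 1)]

# Scenario 2: protect the occupant (five pedestrians die) or sacrifice the occupant (one dies).
occupant = [Option("protect occupant", 5), Option("sacrifice occupant", 1)]

for label, scenario in [("Sidetrack case", sidetrack), ("Occupant case", occupant)]:
    choice = utilitarian_choice(scenario)
    print(f"{label}: the rule chooses '{choice.name}' ({choice.expected_deaths} expected death(s))")
```

Run as written, the sketch picks the one-death option in both cases; the disagreement is not over how to code such a rule but over whether its outputs are acceptable at all.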

The “moral” I draw from all of this is that morality is a futile pursuit. It makes no sense – neither practically nor metaphysically – to seek “the right answer” to a moral question. And ethicists who gleefully point to self-driving vehicles as an issue that forces us to do moral theory are, in my opinion, indulging in wishful thinking to make their field seem useful and important. You cannot apply an insoluble issue to a practical problem; self-driving cars cannot force a solution to an insoluble problem.

My espousal of amoralism has been influenced by cases such as this. I would now almost define philosophical problems as insoluble ones, but I still see them as having a purpose, namely to prompt us to reflect on and discuss them so that we as a society can then, not agree, but vote on what to do … having previously agreed (so to speak) to do what we vote to do. The basis of our decision will not be a solution arrived at by reason, since reason could lead thoughtful and informed people to different and even opposing conclusions; rather, it will be a decision arrived at by voting (or whatever mode of group decision-making prevails in a given group) after we have reasoned together about the issue.

By the way, I think the most exquisite trolley-type problem facing the designers of self-driving cars is the one where the choice is between having the car plow into pedestrians and having it veer away from them and kill the occupants. But I also think this will be the easiest to resolve, since – morals, schmorals – what will obviously happen is that consumers will simply refuse to buy a car they know is designed to spare pedestrians at their own or their passengers’ expense. Case closed.
