The Trolley Problem Fallacy.


We all know the trolley problem. It goes:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track.

You have two options:

(1) Do nothing and allow the trolley to kill the five people on the main track;

(2) Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the more ethical option? Or, more simply: What is the right thing to do?

Most people choose to pull the lever, reasoning that killing one is better than killing five. However, if the “right thing to do” is supposed to be objective, the question can be posed again with the solo person replaced by a loved one. Does the person answering still consider pulling the lever the right thing to do? What if one of the five will invent the cure for cancer? What if the solo person solves world hunger?

This sort of amendment can be applied iteratively to probe the objectivity of the answer. People who believe their original answer is objective will often change their minds when probed, offering yet another supposedly objective justification for the change. Push them far enough with these amendments and you typically, eventually, get a response that amounts to not doing anything at all: the stance that by removing oneself from the situation, it is no longer one’s moral responsibility. They effectively wipe their hands clean of the situation.

The ability to remove oneself from the situation (the effective inverse of the oft-spoken phrase “walk a mile in someone else’s shoes”) is what is wrong with these contrived ethical dilemmas. In fact, it’s probably the case that people use the same tactic to justify their own inaction in real-life dilemmas (the bystander effect, perhaps?).

So I propose a new moral dilemma, one that does not allow one to remove oneself from the situation:

The Lifeboat.

You find yourself stranded at sea on a lifeboat with 9 other people. There are enough rations to last all 10 of you 4 weeks, if you ration them heavily. One person on the boat has worked in search and rescue for the last twenty years. They tell you that across all of their successful search and rescue operations, the shortest time it has ever taken to find someone lost at sea was 8 weeks. This means that unless you can survive for at least 8 weeks, the probability of being found and rescued is practically non-existent. What do you do as one of the 10 on the lifeboat?
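The arithmetic behind the dilemma can be sanity-checked with a quick back-of-the-envelope sketch. This is a hypothetical illustration, assuming rations are consumed at a fixed rate per person per week:

```python
# Back-of-the-envelope check of the lifeboat arithmetic.
# Assumption: rations are consumed at a fixed rate per person per week.

people = 10
weeks_supported = 4   # how long the rations last with all 10 aboard
weeks_needed = 8      # shortest plausible time to rescue

# Total supply measured in person-weeks: 10 people * 4 weeks = 40.
person_weeks = people * weeks_supported

# How many people that supply can sustain for the full 8 weeks.
survivors = person_weeks // weeks_needed

print(survivors)           # 5 can be sustained for 8 weeks
print(people - survivors)  # 5 must go — the population is halved
```

The same fixed-rate assumption is what makes “reduce the population by half” the exact figure rather than a rough one.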

The Debate.

In order for the rations to last 8 weeks, the lifeboat population needs to be reduced by half. Will you try to reduce the population, or will you hold onto the slim (effectively non-existent) chance of being rescued before the 4 weeks are up? If you choose to reduce the population, how would you choose who lives and who dies? If, instead, you decide to hope for the best, then given the information we have you are consigning everyone to death; your choice, even though it amounts to doing nothing, effectively kills everyone on the boat. On the other hand, if you can come to terms with the fact that by doing nothing you yourself will certainly die, then does it not make sense to offer up your own life to give the remaining 9 a slightly better chance at survival?

The complexity of the dilemma makes a solution intractable. I like debating this situation because it’s a good, simplified foil for our societal problems. I don’t think there is an objectively correct answer to these kinds of questions, only subjectively tolerable ones, much like the ones we’ve come up with in our own lives. My criticism of other hypothetical dilemmas is that doing nothing is subjectively tolerable, and that’s not good enough for me. If we want to have fruitful ethical debates, doing nothing cannot be an option one can even entertain as valid.

The solution?

There is no correct solution, but my proposal is as follows:

I personally would want to maximize the chances of survival. The chances of being found are effectively zero; believing there’s still a chance is tantamount to betting on winning the lottery. Because of that, I will effectively die anyway, so I should resign myself to death. And if I have resigned myself to death, it is logical for me to die early to give the remaining 9 a greater chance at survival. However, I cannot expect the others to arrive at the same conclusion on their own. Therefore, I would do everything in my power to gain command of the lifeboat, then reduce the population by 4. I’m not going to go into specifics about how those 4 are selected; I’ll leave it to readers to debate what a fair selection process might be.

Note that I said four and not five, because the fifth person will be me; recall that I have already resigned myself to death. But there’s another reason the fifth person must be me. Forced population reduction will obviously be reviled, so I would not be a trusted member of the remaining population, and people would question whether my intent behind eliminating half of the population was virtuous or simply the greed to survive. Thus, I believe that a person who makes those difficult decisions must also be willing to forfeit their own life as proof of intent. And by giving up my own life, I would turn myself into my own effigy: any shame of survivor’s guilt can be excused and blamed on me.
