When I was first formally introduced to ethics last summer, my professor quickly pointed out a distinction between consequentialist and categorical reasoning. Consequentialist thinking evaluates how moral or rational an action is by looking exclusively at the future state of affairs it causes, while the categorical camp deals with the intrinsic nature of the act itself, setting aside its results. Mill’s utilitarian principle of the greatest good for the greatest number exemplifies consequentialism, and Kant’s categorical imperative serves as an example of the latter. Under Kant’s imperative, certain actions like lying, cheating, or stealing (the specific actions matter less than the principle) are unquestionably ruled unethical due to their intrinsic turpitude and breach of rights—there are no exceptions.
The trolley problem helped me crystallize the tension between these two types of thinking, which are practically mirror images of each other in what they focus on. In each hypothetical situation, you must choose one option or the other.
Case #1: You’re the conductor of a train with broken brakes that’s barreling towards five railroad workers who are hammering away at the tracks at this unseemly hour due to a scheduling miscommunication. You’ve found a lever that you can use to turn the train onto a branching track on which a helpless tourist has found himself. Do you turn the lever or not?
Case #2: In a similar scenario, a runaway train is headed straight for another five railroad workers, but this time, you’re watching the scene unfold atop an overhead platform. You notice the same helpless tourist beside you, peering over the platform’s edge to see what’s happening. You know that if you pushed the tourist off the edge, he’d land on the tracks in such a way as to stop the train altogether. To push or not to push?
Case #3: Shifting gears a bit, you’re now a surgeon at a hospital on clinical duty when our fellow tourist meanders in for a routine checkup. Unbeknownst to the tourist, five railroad workers, each missing a different vital organ, are sitting in your office, begging for transplants as they hover on the verge of death. Fantastically, the blood types and organ requirements work out such that you could take all five organs from the tourist and save the lives of all five railroad workers. Do you operate on the tourist?
A few qualifications: Despite any potentially magical solutions you might devise, there are no easy outs—you can be 100% sure that failing to take the tourist’s life will result in the deaths of the railroad workers. Also, to isolate the scenario, assume that no legal or social repercussions will ensue. Finally, we wear a veil of ignorance (see Rawls) as to who these people are—since you know nothing about them, assume they are the average tourist or the average railroad worker. You may, though, entertain the hypothesis that the average railroad worker is more or less valuable than our random, clueless tourist.
Stepping back and comparing the three cases, we notice that all of them make you choose between passively allowing five railroad workers to die and actively killing one helpless tourist. Noting no significant difference between the value of a railroad worker’s life and that of a tourist (especially not when we have five railroad workers), the rational choice in this isolated exercise seems to be to save five lives at the cost of one. Yet our natural inclination tends away from this option as we move from the first case to the second to the third. How do we explain this phenomenon?
What initially seems out of the question is relying strictly on utilitarian principles—through this lens, all three cases more or less straightforwardly tell us to kill the tourist. While some may take this stance, it still doesn’t explain why we’re so much more hesitant to operate on the innocent tourist in the final case than to flip the switch in the first scenario (I’m assuming we all would). That’s where categorical thinking in terms of rights and responsibilities can come into play. As the train conductor, you have an obligation to navigate the train in a safe, intelligent manner. As an onlooker on the platform, by contrast, you have no reason to get involved in the ethics of the train crisis unfolding beneath you.
Truth be told, the “it’s not your place” argument doesn’t make much sense to me. While being paid to conduct the train is certainly an excellent reason to turn the lever in the first scenario, the fact that nobody asked you to save lives in the second shouldn’t stop you from being ethical. Helping the world isn’t something we should expect to be asked to do—the call of duty is always upon us, though the onus is on us to acknowledge it. If you choose not to push the tourist off the platform, sure, nobody’s going to blame you, per se, for not helping out. Yet I’d argue you’re just as blameworthy as the train conductor who doesn’t turn the lever and takes five lives. The fact of the matter is that the moment you consciously weighed whether to push or not, you involved yourself in the impending disaster—you became complicit. Choosing not to do something is still an active choice.
To explain the final case, the most valid consequentialist approach I see appeals to social contract theory and the notion of trust. As a society, we trust doctors to look after our best interests. In a world where the surgeon rips out the tourist’s organs to distribute for the greater good, the fabric of mutual respect and dependency is torn to shreds. While a single isolated instance may do only marginal damage, widespread replication would bring society to its knees—we could trust each other in only the most monitored, rudimentary ways. All in all, I’d save five at the expense of one in cases one and two, but not three, and I don’t really see the appeal of categorical thinking, at least in this example.