The Setup
Philippa Foot devised the infamous Trolley Problem, which Judith Jarvis Thomson later reimagined as the transplant problem in medical ethics:
Five mortally ill patients are in care at a hospital, all of whom will soon die. At the same time, a sixth man is undergoing a routine checkup at the same hospital. A transplant surgeon in residence finds that the only medical means of saving the five ailing patients would be to slay the sixth and transplant into them his healthy organs. Legal ramifications and other peripheral matters disregarded, is it morally right to do so?
When presented with this ethical dilemma, most people answer with a resounding “no”: it would not be right to kill the sixth man. This is usually not based on any robust ethical framework but rather the feeling that it simply isn’t fair. We tend to have strong intuitions about our freedoms and empathize more with the healthy man being forced to die than with the sick patients whose destiny we see as predetermined.
For utilitarians, however, the answer is a straightforward “yes.” All else equal, overall utility is clearly maximized by saving the 5 at the cost of the 1. It’s not even a complicated question. And despite any initial misgivings, it’s easy after some reflection to be on the side of utilitarianism. The surgeon’s decision is simply between 2 world states: one in which 1 person lives, and another in which 5 people live. As long as utilitarian surgeons are confident that, in this hypothetical world without legal ramifications or other practical considerations, they would choose to save the 5, this isn’t gaslighting. And I accept that most utilitarian surgeons are probably happy to make that argument.
Side note- it has recently been argued that there are other thought experiments that reveal far greater discrepancies between utilitarian conclusions and our moral intuitions. See here for more.
The Gaslighting
However, my criticism is simply that this ethical dilemma presents a man making a choice about other people’s lives. What if we slightly reframe this problem such that the utilitarian must apply his philosophy to make a choice about his own life?
A local hospital has 6 patients: five young adults dying of organ failure but otherwise healthy, and one healthy minor there with his parents for a routine checkup. It is possible to save all 5 of the ill patients by transplanting the minor’s organs, killing him in the process. The procedure is legal as long as consent is obtained, but as the healthy patient is a minor, it is his parents’ signatures that are required. The surgeon briefs them on the situation and explains that, as a utilitarian, he believes the ethical decision would be to sacrifice their child for the greater good.
Are there any utilitarians in this scenario, in the role of the parents, who would accept the surgeon’s argument and agree to trade the life of their child for the lives of 5 strangers?
My premise is: “obviously not.” There are no parents, utilitarians or otherwise, who would willingly agree to sacrifice their healthy child to save the lives of 5 strangers. But utilitarianism unambiguously proclaims this the ethical choice! If left up to our utilitarian surgeon, he would sacrifice the 1 to save the 5 with full confidence that his decision is ethically justified. And then if the next morning he was put into the same situation himself, he would let the 5 die and walk out the door with his child without a second thought. They’re gaslighting us!
Utilitarianism has a fundamental ‘eye of the beholder’ problem.
Utilitarianism, along with most other mainstream ethical philosophy, starts with the assumption that there are objective moral truths and then constructs a comprehensive framework for discovering and applying those truths to our lives.
This is a necessary assumption of utilitarianism. Maximizing the ‘greater good’ is only a sensible goal if ‘good’ is objective- otherwise the answer to any ethical problem is dependent on the person doing the answering.
But it is only when we are playing the role of dispassionate robot overseer, making decisions about other people, that this objectivity is on display. The moment that we ourselves are introduced into the problem, we find only inconsistency. Because while the noble utilitarian may indeed value the good of 5 strangers more than the good of 1 stranger, he also values his own good more than the good of any number of strangers. The ‘good’ of your own children is worth more to you than the ‘good’ of your neighbor’s children- and neither of you would fault the other for that.
I find this to be a glaring inconsistency that I cannot square. How can I ask others to do what I would not be willing to do in their place? How can I judge others for behavior that I would engage in myself? What hope of relevance has any ethical framework that does not recognize this?
Won’t You Please Think of the Shrimp?
A number of philosophers on Substack have taken up the mantle on behalf of shrimp lately. In particular, one argues that “you should save 10^100 shrimp instead of 1 human”, and another argues that “sparing infinite shrimp from extreme torture is vastly better than saving one human from death”, and further that “one of the best charities you can give to is the shrimp welfare project.” Their arguments are clear, logical, and entertaining- and I highly recommend giving them a read. They start with the generally accepted premise that ‘suffering is bad’, use thought experiments to demonstrate that the ratio of (1 human life / 1 shrimp life) is not infinite, and consequently conclude that there must be some finite number N of shrimp whose suffering is worth the same as 1 human’s. From there it follows fairly easily that N is probably smaller than incomprehensibly large numbers like 10^100.
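Spelled out with symbols of my own choosing (these labels are mine, not theirs- call the moral weight of one human life v_h and that of one shrimp v_s), the skeleton of the argument is:

$$\frac{v_h}{v_s} \neq \infty \;\Longrightarrow\; \exists\, N < \infty \;\text{such that}\; N \cdot v_s \geq v_h$$

The remaining work is just arguing that this finite N sits comfortably below numbers like 10^100.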
I accept their arguments at face value. I just have a simple question to ask: If it were your own life that had to be sacrificed to save 10^100 shrimp, would you volunteer? If it were the life of your partner, or your mother, or your child that was required, would you sign the waiver?
My premise is, again: “obviously not.”
Follow up question: Does this make you an unethical person? Or if not unethical, at least morally inconsistent?
I’m genuinely looking forward to their response. I’m sure this isn’t the first time such an accusation has been leveled, and that there must be compelling reasons to deem it irrelevant. And if there’s anything philosophy students are good at, it’s clever refutations!
Personally, I want a system of morality that doesn’t gaslight me.
In other words, one that does not instruct us to make choices against our own self-interest on the basis of some assumed objective moral truths. Rather than pretend away or condemn our inherent selfish desires, I want a philosophy that accepts them as a necessary starting place.
Of course, a world in which we all behave only according to our immediate selfish desires- like animals- does not sound appealing. But accepting the dominant role that self-interest plays in each of our perspectives does not mean we must give up on collaboration. What separates us from the animals is that we are capable of negotiating with one another to solve collective action problems. If there were no means of communication, there would be no means of building trust in one another, and then all of ethics would be doomed from the start. But we are able to communicate, and by doing so we can do 2 important things:
Utilize comparative advantage. If you are skilled at sewing, and I am skilled at farming, we can trade our labor such that we are both better off than if we didn’t cooperate.
Avoid tragedy of the commons. It may be in your short-term interest to steal from me, but I will retaliate- and the ramifications of living in a society with rampant theft are less attractive than if you didn’t steal to begin with. By trusting each other with an implicit (or explicit) social contract, we are all better off.
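The logic of this second point can be sketched as a repeated prisoner’s dilemma; the payoffs below are invented purely for illustration:

$$\begin{array}{c|cc} & \text{you cooperate} & \text{you steal} \\ \hline \text{I cooperate} & (3,3) & (1,4) \\ \text{I steal} & (4,1) & (2,2) \end{array}$$

In a single interaction, stealing is tempting (4 beats 3). But once retaliation turns every interaction into a repeated one, mutual cooperation at (3,3) beats mutual theft at (2,2)- which is exactly the sense in which the social contract leaves us all better off.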
With time, the benefits of collaboration become clear, and we start to value each other. A stranger becomes a neighbor becomes a friend, and suddenly the lines between ‘their good’ and ‘our good’ become somewhat blurred. The benefit of the doubt is extended. Civilization forms.
What I want is an ethical philosophy that 1) accepts that we will generally behave according to our own self-interest, but 2) encourages collaborative behavior anyways.
This does not require an objective, absolute morality.
All that is required is the concept of reciprocal empathy. The golden rule of "Do unto others as you would have them do unto you" can be rearranged as “Ask others to do only what you would do in their place” or any number of other related maxims.
This ethos naturally leads to compromise and collaboration. If I am cold and you are hungry, it takes only a tiny drop of empathy to realize that you might be willing to trade a blanket for a sandwich. If Bob grows up in a culture that emphasizes gift giving, but that makes Alice uncomfortable, Bob doesn’t apply the golden rule literally to mean “I should give her a gift because that is what I want” but rather realizes that what he would want is to be respected and understood. She applies the same logic. He holds off on getting her a gift, and she surprises him with one. A friendship is formed <3.
Rather than being evil or even undesirable, our self-interest instead becomes the foundation for empathizing with and trusting one another. I can’t trust a wild animal to do what is in its best interest and not attack me. But I can trust my neighbor to accept a deal that benefits both of us, even if those benefits become increasingly intangible in a complex society. And over time, as we form a friendship, the benefits of our collaboration become more abstract. After all, what else is friendship other than people caring about each other’s happiness?
The truth is that I’m not friends with the entire world. I’m friends with my friends! Their ‘good’ is what I care about most. If that were not the case- if everyone’s ‘good’ meant the same to me, as utilitarianism alleges- then to be my friend would mean nothing at all.
Comments

I like more philosophers on Substack! Let's go! Keep 'em coming.
I'll just say that I think human intuitions are wrong. While I certainly wouldn't sacrifice my child for 5 random strangers, that's not morality. If you told me I had to choose between sacrificing my child and saving 1 billion strangers, I think the "moral" choice is to save the 1 billion, and the "selfish" choice is to save my child.
You say morality should "not instruct us to make choices against our own self-interest" and this flies in the face of what I believe morality to be; I think morality is about selflessness, not selfishness. I think improving other people's lives is good even if it doesn't make me feel warm and fuzzy inside in a self-interested way. I think that people have moral weight even if I don't care about them.
Flo Bacus is not a utilitarian, she's a Kantian. Her argument does not require assuming utilitarianism, just that lots and lots of suffering is really really bad.