36 Comments
Kyle Star:

I like more philosophers on Substack! Let's go! Keep em coming.

I'll just say that I think human intuitions are wrong. While I certainly wouldn't sacrifice my child for 5 random strangers, that's not morality. If you told me I had to sacrifice my child or save 1 billion strangers, and I got to choose, I think the "moral" choice is to save the 1 billion, and the "selfish" choice is to save my child.

You say morality should "not instruct us to make choices against our own self-interest" and this flies in the face of what I believe morality to be; I think morality is about selflessness, not selfishness. I think improving other people's lives is good even if it doesn't make me feel warm and fuzzy inside in a self-interested way. I think that people have moral weight even if I don't care about them.

Dylan:

So, your claim is that although you might make the 'selfish' choice yourself, this does not refute the idea that the 'moral' choice would be otherwise.

I won't dispute that in principle. But is there any value in an ethical framework that consistently says the 'moral' choice is something that nobody is willing to do? I think it's more likely that this apparent inconsistency demonstrates that the philosophy is missing something profound about human nature. And that to separate things into 'selfish = bad' and 'selfless = good' is doomed to failure because selfishness is an inherent principle of life. Better to have a philosophy that accepts this and defines 'moral' behavior within this context, than to have one that requires something that can never be.

Steffee:

"But is there any value in an ethical framework that consistently says the 'moral' choice is something that nobody is willing to do?"

I don't think utilitarianism "consistently" asks for things that nobody is willing to do.

Maybe there's nobody on the planet willing to sacrifice their child for 5 strangers, but I bet there are parents who would sacrifice their child to save the rest of humanity. Somewhere between 5 strangers and 8 billion, these parents will have a line. Utilitarianism pushed that line in the correct direction.

Hugh Hawkins:

This just means that utilitarians should grade on a curve, given human selfishness, and try to set rules that everyone can follow. Maybe people only have to donate 10% of their income to effective charities, since that’s a better rule for a movement to propagate to gain wide acceptance and therefore more total donations.

Dylan:

I suppose you could try to come up with a new form of utilitarianism that accepts some boundaries based on selfishness, but don't you think that would quickly look like something pretty different from utilitarianism?

Hugh Hawkins:

What I'm suggesting is just rule utilitarianism. Or, it's basically utilitarianism if you start thinking from the point of view of a social movement (or from the POV of any large-ish group or organization).

Some rules are too strict (people won't join) and some rules are too loose (people won't change their behavior enough). So you need to balance both factors. This is still utilitarianism! The whole point of rule utilitarianism is that, in order to maximize utility, it might not be best to just tell everyone to follow act utilitarianism strictly; other tactics might be useful.
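The strict/loose tradeoff can be sketched as a toy optimization. Everything below (the exponential adoption curve, the constant k) is an illustrative assumption, not anything from the comment; it just shows how "too strict" and "too loose" can both lose total utility:

```python
import math

def adoption(rate: float, k: float = 10.0) -> float:
    """Assumed fraction of people willing to join a movement
    that asks them to donate `rate` of their income.
    Stricter rules (higher rate) mean fewer people join."""
    return math.exp(-k * rate)

def total_donated(rate: float) -> float:
    """Total donations per unit of population income:
    (fraction who join) * (what each joiner gives)."""
    return adoption(rate) * rate

# Scan candidate rules; under this model the best rule is an
# interior optimum, not the strictest one.
best = max((r / 100 for r in range(1, 101)), key=total_donated)
print(best)  # 0.1
```

With k = 10 the optimum happens to land at a 10% donation rule, but that is an artifact of the chosen constant; the point is only that the maximum sits strictly between "ask nothing" and "ask everything."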

Dylan:

That does sound fairly reasonable. It feels like most formulations of the rules would still likely result in scenarios like this, but I can see how that shifts to a more pragmatic concern than a philosophical one.

Jack Thompson:

Flo Bacus is not a utilitarian, she's a Kantian. Her argument does not require assuming utilitarianism, just that lots and lots of suffering is really really bad.

Dylan:

True. To her, we could pose a slightly different question: "Would the means justify the ends if one could forcibly sacrifice 1 human to save the 10^100 shrimp?"

Jack Thompson:

But it seems like your argument gets it exactly backwards for Flo. As a Kantian, she is much more likely to sacrifice *herself* to save 10^100 shrimp than to sacrifice somebody else, because there may be prohibitions on killing but not on voluntary self-sacrifice. Flo Bacus endorsing shrimp welfare seems like evidence *against* your utilitarian-gaslighting hypothesis.

Dylan:

If Flo (or anyone, actually) was willing to die for shrimp welfare, then I agree that would be evidence against utilitarian-gaslighting. I just don’t think her argument implies anything at all about her willingness to die for the shrimp!

(copied from my reply on your note)

Jacco Rubens:

This is just a rehashing of the demandingness objection.

There is always going to be a gap between moral theory and what the holder, even if they are the most dedicated and morally motivated individual, is going to be willing to do. We are selfish creatures. Sacrificing a child is probably the clearest example of where our evolutionary instincts are going to be overwhelmingly strong.

I don't think it's difficult or problematic for a utilitarian to say "Yes, it would be the moral thing for me to sacrifice my child to save the five, but I am not morally strong enough to agree to it".

Dylan:

I agree that a utilitarian could say “well, nobody would do it, but it’s still the moral answer.” But my point is that if the ethical framework constantly leads to conclusions nobody would act on, it’s fighting a losing battle, and I’d rather have a philosophy that accepts our inherent selfishness as necessary rather than unethical.

Jacco Rubens:

Mm. I agree that would be ideal. But to me, trying to reconcile human selfishness and evolutionary instincts with morality is much more of a losing battle, so I'm willing to accept some conflict!

Travis Gritter:

This was a great read! I had to think about it for a bit. I have a couple of responses:

First, there are times where acting against your own self-interest for the benefit of others is celebrated precisely because it is hard to do. For example, consider the soldier who jumps on the grenade in the trench to save their comrades. The reason we celebrate actions like this is because we don’t expect people to make such a decision, yet some do anyway. Just because most people won’t make a selfless decision for the greater good, doesn’t mean it’s not the moral decision.

Second, a morality around rational self-interest does have its problems too. For example, what is our moral obligation to those who can’t reciprocate? Such as future generations, the disabled, animals (e.g., shrimp), and so on. I think there is moral weight to helping these groups that’s difficult to get to with only rational self-interest.

Third, "...my premise is, again: 'obviously not.'" - where do these moral intuitions come from? Brain scans show that asking whether someone would push a fat man in front of a trolley to save 5 triggers areas of the brain associated with social emotions, whereas simply pulling the trolley switch activates regions associated more with rational calculation. This highlights the problem with moral theory: we're trying to systematize what are essentially emotional/instinctual responses. Utilitarian reasoning works well in many contexts, so I don't think they are gaslighting us. But I do agree that we shouldn’t just rely on analytic calculus for moral decisions and should take our innate, evolved moral intuitions into consideration, although perhaps accepting a bit more moral uncertainty.

Dylan:

Interesting thoughts, thanks for commenting!

To your 1st point, I agree that you can still praise an action as moral despite recognizing that very few people (including yourself) would do it. But I think that many utilitarians don't recognize that fact and make much bolder claims (see many of the responses to this post claiming that they would die for some specific number of shrimp). And I also think even if everyone recognized that fact, any framework that broadly considers selfless behavior 'good' and selfish behavior 'bad' remains impractical for that very reason.

Your 2nd point is interesting. I think that the capability of another to reciprocate isn't actually necessary to use reciprocal morality as a benchmark. For instance, I can recognize that I would want to be taken care of if I were disabled, and so can establish that the moral behavior would be to take care of the disabled (regardless of their ability to return the care).

Regarding point 3, I suppose I just take it as self-evident that the highest priority of all living creatures is themselves and their offspring. If I take this to be the natural state of affairs, then it's impossible for me to judge someone for taking actions to save their family when I know that I and most other people would do the same. And if I can't judge them for that, then how can I use utilitarianism to judge the ethical behavior of individuals?

Travis Gritter:

Thanks for the response! I do have a couple additional thoughts.

Firstly, isn't this essentially saying "I want a moral system that validates whatever I'd do anyway"? If I don't want to give to charity, then not giving becomes the right choice. This seems less like ethics and more like sophisticated rationalization. I do think there's real value in having moral standards that challenge our default self-interest, even knowing most people won't meet them.

Secondly, the reciprocal empathy framework requires the same kind of impartial reasoning that utilitarianism does. When you say 'I can recognize that I would want to be taken care of if I was disabled,' you're asking people to step outside their current self-interest and consider what they would want from behind a 'veil of ignorance' about their circumstances. This is essentially asking for the same objective, universal perspective that utilitarians advocate.

Third, determining an objective morality based on what seems like a 'natural state of affairs' is difficult. How do we account for different societies' perspectives on what is natural? Slavery was once widely practiced and felt natural to most people. Denying women rights seemed obvious to most societies. Or even something like child sacrifice (not for organs), which seems abominable to our modern ears, might have been a moral decision in an ancient society. If naturalness varies this dramatically, I don't think it can provide the objective foundation for a moral theory that you're looking for.

But look, every moral framework has vulnerabilities - that's what makes moral philosophy so challenging. The question is whether we want systems that challenge our default behavior or ones that provide sophisticated justification for it.

Dylan:

Hmm, interesting follow up points.

I think it makes the most sense to respond in reverse order.

On your second and third points, I think that where we are diverging is that you see me as suggesting reciprocal morality as a replacement for utilitarianism that still seeks to answer ethical problems with conclusive, objective answers. I am not suggesting so. I see the fundamental goal of ethics as improving cooperation, which I believe is better achieved by accepting our selfishness and persevering anyway (see the section on comparative advantage and the tragedy of the commons). And I condemn utilitarianism as improving cooperation only when you can stay behind the 'veil of ignorance' you mentioned, only to break down immediately once that veil is removed.

Back to your first point, I don't think it's useful to 'judge' behavior as ethical or unethical, as it's all completely context dependent. But abstaining from ethical judgement doesn't mean I have no opinion on the actions of others. I much prefer to be around selfish people who have learned to use reciprocal morality to cooperate with each other over utilitarians who judge others for 'selfish' behavior and recommend selfless actions that they would never do if put into the same situation. You think there's "real value in having moral standards that challenge our default self-interest, even knowing most people won't meet them", and to that I would repeat this line from the post "Rather than pretend away or condemn our inherent selfish desires, I want a philosophy that accepts them as a necessary starting place."

Lastly, I completely agree every framework has vulnerabilities. Utilitarianism seems like a very useful macro tool to make decisions about other people when your self-interest is completely obscured. Otherwise? Worthless. Reciprocal morality seems to be very useful in our own personal lives but perhaps doesn't have the sophistication to deal with large-scale problems. But you and I and 99% of the people on the planet deal with our own problems, not the problems of others!

Roman's Attic:

Your proposed moral system sounds similar to Ayn Rand’s “Objectivism” philosophy.

—————————

While I agree that no parents should or would want to give up their child for organ sacrifice, I feel like your hypothetical scenario assumes a “default” state that I don’t necessarily agree with. To illustrate my point, I’m going to make a few modifications to the original scenario:

1. All 5 of the people with failing organs are children.

2. In addition to the parents of the to-be-sacrificed child being in the room, the parents of the children with failing organs are in the room as well, watching their children die.

3. In the society where this event is taking place, child sacrifice and organ harvesting is the legal and cultural default that must be opted out of, rather than something the parents opt their child into.

For the sake of a more vivid scenario, let’s talk some more about social pressure in this society. The culture and propaganda surrounding it is similar to how, in our world, we pressure people to go to war when called to it. Most people see it as their duty as citizens, it is believed to be for the greater good of the country, and people that back out are viewed as somewhat cowardly. So, when put into these unusual situations, most people choose to sacrifice their children when it is required to save more lives. A few years ago, this happened to one of your friends and her kid. She was deeply hurt by the loss, but she believed it to be the right decision to make. Additionally, four of your other friends have had the lives of either their children or themselves saved, and they continue to live their happy lives. For the sake of vividness, take a moment and imagine the names of all 9 of these people: 6 parents with saved children, 1 person saved, and 2 parents who lost a child. Don’t continue on until you’ve thought of all 9 of them. All of them are sorry for those who had to be lost so they could continue living, but they believe it to be the right decision.

Even though it is tragic and unfair to have your child randomly sacrificed for organ harvesting, I would argue that it is just as tragic and unfair to have your child die of organ failure. If you were the parent of a child whose organs were about to fail, in a society where it was generally expected that single kids should be given up to save many more, wouldn’t it feel unfair for someone else to selfishly let your child (and several others) die? You could understand why they would make that decision, but as a self-interested agent, you would want the other parents to uphold their social and cultural duties. I mean, think of all your friends and their kids who would have been lost if people backed out! Wouldn’t you rather live in the world where you had those friends, rather than the world where you didn’t?

The truth is, I think most people would be better off if everyone acted in accordance with “for the good of all” principles, assuming they were thought through correctly. The average person is made much better off by them. So, even though as an individual you might be best off by acting selfishly, by creating a community of people who do things they dislike for the good of the community, everyone is made better off. Any self-interested individual should want to live in this community.

Dylan:

Thanks for your comment! Replied on notes

Dylan:

link below to what became our discussion

https://substack.com/@onlyvariance/note/c-142858358

Sol Hando:

As the number of shrimp approaches infinity, their moral value does not scale infinitely. I think it’s more something like:

lim (x→∞) f(x) = 0.8h

where x = the number of shrimp, f(x) = the moral value of x shrimp, and h = the moral worth of a human.

So there is no arbitrarily large number of shrimp that I would trade for a human life. Once we get into the billions, or maybe just millions, the additional moral consideration for the 500 millionth shrimp is almost nothing. I value 500 million shrimp at about 0.79h, and 500 trillion shrimp at 0.799h or something along those lines.
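This kind of bounded valuation can be sketched numerically. The saturating form below, f(x) = 0.8·x/(x + K), and the constant K are illustrative assumptions, not anything from the comment, so the specific values won't match the 0.79h/0.799h figures exactly; it just shows a value that grows with x yet never reaches 0.8h:

```python
# Moral value of x shrimp, in units of h (the moral worth of one human).
# K is an arbitrary half-saturation constant: at x = K the value is 0.4h.
K = 1_000_000

def shrimp_value(x: float) -> float:
    """Bounded valuation: increasing in x, capped strictly below 0.8h."""
    return 0.8 * x / (x + K)

print(shrimp_value(5))            # tiny
print(shrimp_value(500_000_000))  # close to the 0.8h cap
print(shrimp_value(5e14))         # closer still, but never reaching it
```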

Dylan:

Interesting idea, but that’s definitely not utilitarianism! I think utilitarians would say that we struggle to internalize the difference between two extremely large numbers, but that doesn’t mean they lose objective value.

Jacco Rubens:

As a utilitarian: yes this is what I would say. The limit argument fails for me because what if the choice is:

A) 500bn shrimp suffer and die and I get a paper cut

B) 500tn shrimp suffer and die but I don't get a paper cut

Your system seems to go for B, which can't be intuitively right.

(Or you might need to make the numbers of shrimp even larger depending on the weight of a paper cut... But eventually you could make it work.)
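The comparison can be made concrete under one assumed bounded valuation, f(x) = 0.8·x/(x + K). Both K and the paper-cut cost below are illustrative assumptions, not Sol Hando's actual numbers; the point is that once the shrimp counts are large enough, the gap between them is smaller than any fixed paper-cut cost, so the bounded valuation picks B:

```python
K = 1_000_000      # assumed half-saturation constant, in shrimp
PAPER_CUT = 1e-4   # assumed cost of a paper cut, in units of h

def f(x: float) -> float:
    """Bounded moral value of x shrimp, capped strictly below 0.8h."""
    return 0.8 * x / (x + K)

loss_A = f(500e9) + PAPER_CUT   # 500bn shrimp suffer and die, plus a paper cut
loss_B = f(500e12)              # 500tn shrimp suffer and die, no paper cut

# With these constants the gap f(500tn) - f(500bn) is about 1.6e-6 h,
# far smaller than the paper-cut cost, so option A loses more value
# and this valuation chooses B.
print(loss_A > loss_B)  # True
```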

Dylan:

I do think if Mr. Sol Hando is a utilitarian that this presents him a challenge. As a non-utilitarian, though, I am provided the ability to say that neither 500bn nor 500tn shrimp demands action from me.

Haha is your paper cut example a coincidence or are you meta-ing me?

https://www.kylestar.net/p/morality-is-real/comment/143162381

Jacco Rubens:

Haha, it's a coincidence :)

Michael Boccio:

only comment is that gaslighting is a fighting word and my perspective is we need to stop talking past each other to hear the individual's thoughts

not sure what verbiage you would replace it with, but I like your process of engaging: taking people at their word and attempting to meet them where they're at for a reasonably constructed rebuke

good discoursing!

Dylan:

appreciated! I will admit the choice of words is somewhat belligerent :)

Nathan Barnard:

I think there are very plausible normative reasons not to choose to sacrifice one's child in particular for many others (although I do not think that this number is unbounded), but yes, I would choose to die to save two other people (assuming all else is equal), at least when I'm thinking about it now, calmly. (I have tried to donate my kidney, although I was turned down; I do donate blood, though, which I think is a smaller version of the same principle.)

Maybe I wouldn't be able to do it if the moment came, but I think that would be personal weakness. I'd guess I'd find it easy with enough adrenaline, with the danger near enough, particularly if it was children I was saving, and harder in circumstances where the fear of death allowed itself to become more salient. I don't think that fear is a good reason not to do something, though.

Dylan:

Completely agree that fear is not a good reason to not do something. But I suppose that we disagree on selfishness being a good reason?

To quote from a separate discussion, "I’m not sure I accept that you are truly willing to bite this bullet. For example, a healthy adult with all organs available for donation can theoretically save about 8 lives, according to ChatGPT. May I ask why you aren’t giving your life via euthanasia right now to save them?"

Nathan Barnard:

Yeah I don't think that selfishness is a good reason, and I think we should try hard to do what is right by the lights of the common good and the particular obligations we've taken on (for instance having children.)

Concretely, on the donation case, this is obviously not an option that is actually available to me: kidney and liver lobe donation are the only options available for undirected donations. I think the real response, though, is that I have a lot of scepticism towards arguments which imply taking irreversible, costly actions, particularly when the opportunity cost is extremely high (it is very cheap to save lives via donations). It's of course plausible that (by my lights) unjustified selfishness is playing into my answer here, although in this specific case I don't think it is.

I think there is a case for considering the partial good, and a weaker case for that including one's own interests among those partial goods. I think the case for considering one's own interests has to ultimately be quite weak, though. The moral epistemology I use is roughly Rawlsian reflective equilibrium. Any account of special consideration of one's own interests, for me, has to take into account the fixed point that I think I would have been obligated to fight in the Second World War at any personal cost. I think it could be possible to use the formal machinery of incompleteness to try to make an action-guiding moral theory here which accounts both for the intuitions that in specific historical cases one is required to be personally very self-sacrificing and for some common-sense intuitions about the reasonableness of some level of self-interest, but I'm not sure, and would actually bet that it isn't.

Maybe more methodologically, I think I take it as weaker evidence than you do that people often don't take the actions that they endorse upon reflection. I think essentially everyone finds it very hard to discount the future correctly (e.g. with the minimal requirement of avoiding time inconsistency), but I don't think that this is strong evidence against the normative requirement for time consistency.

I think this discussion gets especially hard around death, where I think people's attitudes are extremely contextual in ways that I claim shouldn't be strong evidence for our normative judgements. Reading accounts of the First World War, the diversity in attitudes is very striking to me: Charles de Gaulle, for instance, was not only unafraid of dying but actually quite brazen about it. I think he was wrong in the other direction here; he was being excessively incautious with his life. I think this variety of psychological responses is some evidence against using psychological responses as evidence about normativity.

Maybe some final useful context here is that I take the Kripkenstein response to theories of what one should do that try to completely abandon non-indexical normative principles to be pretty lethal. In short, trying to construct decision rules from psychological data is underdetermined, and one needs to adopt some normative principles to make inferences, in the same way that an infinite number of theories (i.e. functions) can fit any countable number of data points, which requires theories in the natural sciences to invoke normative principles in deciding which theories to adopt (typically by using a simplicity prior, which is what the information criteria are based on).

Jon Rogers:

“Utilitarianism, along with most other mainstream ethical philosophy, starts with the assumption that there are objective moral truths and then constructs a comprehensive framework for discovering and applying those truths to our lives.”

Whether morality is objective is a metaethical question. Utilitarianism is a normative theory. While there isn’t necessarily a consensus, a good number of philosophers hold that any metaethical position is compatible with any normative position. I don’t know how to say this nicely, but it just comes across that you haven’t read much. Unless I’m missing something.

Dylan:

I mean, sure, utilitarianism can in theory be paired with your metaethic of choice, but it’s overwhelmingly presented along with a presumption of moral realism.

Irredeemably Incorrect:

no, I’m gaslighting the utilitarians

Tom Hitchner:

“For utilitarians, however, the answer is a straightforward “yes.” All else equal, overall utility is clearly maximized by saving the 5 at the cost of the 1.”

I think this is a straw man. Many, maybe most, utilitarians believe that the harms from the loss of trust that would come from random doctor visits resulting in murder would be far greater than the lives saved. At the very least you should quote the people you’re accusing of holding this position!

Dylan:

Absolutely true that most utilitarians would want to take into account secondary considerations like loss of trust, etc., but that's why the original formulation of the problem includes "Legal ramifications and other peripheral matters disregarded, is it morally right to do so?"

Wouldn't you agree this is not a straw man?
