With respect to what an ideal society would look like, I tend to lean quite utilitarian—to the point where people say “woah, you believe that?” pretty often. Because of my heavy consequentialist leanings, I thought it would be a good idea to try and steelman the case against Utilitarianism. With that, let’s get into it!
This is definitely not a complete list—it’s more like a combination of the arguments that I find most compelling + ones that are popular. If you want to find more counterarguments, see here and here.
On Method:
The general method I will use to evaluate whether a normative theory is good is reflective equilibrium. While not precisely defined, it is the process of systematizing intuitions by reflecting on them and then either accepting or rejecting them based on their inconsistencies and relative importance. On this view, a good moral theory is one that does the best job of capturing as many of our intuitions as possible (with room to assign different weights to intuitions of different importance).
Some nuance here comes from the epistemology of intuitions; in other words, some intuitions seem ‘more true’ than others. For example, the intuition that incest is bad seems ‘less true’ than the intuition that hedonistic pleasure is good. While incest seems intuitively immoral/disgusting, once we recognize that this reaction comes only from a morally neutral evolutionary mechanism, it seems like the intuition shouldn’t be given weight in defining the ideal normative system. Hedonistic pleasure, on the other hand, still seems like it should be an intuitive plus for a theory regardless of any explanation of its origin. Perhaps that’s because of pleasure’s phenomenal relation to goodness, but I will leave that question for another time. In addition, there are known biases in intuitive decision-making, like scope neglect, the base rate fallacy, and many more; does a simple method of aggregating intuitions tell us to embrace them?
While I think there are other interesting methods and nuances, I will stick with a general reflective equilibrium for the rest of this post. Now that I have gone over the method of evaluation for moral theories, let’s apply this to Utilitarianism specifically.
Utilitarianism’s Assumptions:
Utilitarianism is the normative moral theory that claims the best outcome is the one that maximizes well-being (usually, but not always, by maximizing pleasure and mitigating suffering) for the greatest number of people. This is a broad theory that can take many forms (negative, preference, rule, and average utilitarianism, to name a few), but I will try to give arguments that apply to most (if not all) of them.
While there are differences in the specifics, (almost) all theories of Utilitarianism rely on these four assumptions:
Consequentialism: The moral state of affairs is determined by the overall consequences.
Welfarism (not to be confused with hedonism, which is a type of welfarism): The moral state of affairs depends exclusively on the well-being/welfare of individuals.
Impartiality: No matter who experiences it, all utility should be treated the same in calculating the moral state of affairs.
Aggregationism: The moral state of affairs should be determined by aggregating the values of individuals.
As we will see, while these assumptions might sound intuitively obvious as abstract principles, once they are applied to specific edge cases, people often find that they lose some of their intuitiveness.
Impartiality and Higher Responsibilities:
While egalitarianism is only a form of impartiality, I use them synonymously here because they have the same consequences in these cases. Also, egalitarianism sounds cooler than impartiality, tbh.
While fairly common among WEIRD people, the idea that we should be impartial is pretty controversial both historically and outside the West. While many would argue that people around the world and in the past were wrong not to be egalitarian, it’s actually quite hard to make an argument for expanding your moral circle other than appealing to intuitions that others may not share. If some theory is true, however, it seems like you should be able to convince others of it regardless of their background.
For instance, imagine a Westerner goes to a tribe in Kenya and teaches the tribe about egalitarianism. The Westerner states that all people should have moral equality because of human rights considerations and the intrinsic value of all people. He continues and states that this applies to all tribes, including the out-group that is constantly at war with them. I have a feeling that the Westerner won’t be taken very seriously, and it won’t be because the tribe members lack any information; it will be because of their different backgrounds.
To bring a similar critique that might hit closer to home, imagine the following case:
Your mom is sick with cancer, but you and she cannot afford the treatment. She has been aching in bed every day with torturous pain for the past five years, and she has just been told that, unless she can pay $6,000, she will die within the next two months.
You start taking longer shifts at your minimum-wage job to pay for it, even though you know you won’t be able to afford to save her in time. A few days before the doctors say your mother will be at the point of no return, however, you see something on the floor. “It’s a lottery ticket,” you say. With a sense of hope when all seems lost, you go to cash it in and find out that you’ve won $5,000, which, combined with the money you made from your job, is exactly how much you need to save your mom. You go straight to the hospital and tell your mom that you now have the money and everything will be fine. She feels relieved and, for the first time in a while, truly happy.
Coincidentally, however, that night you read Famine, Affluence, and Morality by Peter Singer and realize that the most utilitarian thing you can do with this money is actually to donate it to the Against Malaria Foundation (AMF) instead of saving your mom. This is because donating to the AMF to save a life costs $5,500 while saving your mom takes $6,000. Because you are so convinced of Utilitarianism, instead of saving your mom, you donate the money to the AMF against her wishes and watch as the doctors take her off life support. It would be inegalitarian not to!
Even though proponents of egalitarianism, and even Peter Singer himself, would likely say that you are not obligated to give the money to the AMF, it seems extremely unintuitive that the morally best thing to do here is to let your own mother die. Contrary to what Utilitarianism says, it just seems like we have much stronger obligations to the people closest to us: friends and family.
Population Ethics and the Repugnant Conclusion:
Population Ethics is a relatively new sub-field of moral philosophy concerned with ethical questions around actions affecting the creation of new people. One critique of Utilitarianism (at least, of forms that are concerned with people who don’t currently exist) comes from the impossibility theorems, which show that some pretty intuitive premises about population value can’t all be held together.
The impossibility theorems state that no theory can satisfy all of the relevant desiderata at once; in particular, no theory can avoid both of the very implausible conclusions below while also accepting the third principle:
The Repugnant Conclusion: For any population of x happy people, you can find a “better” population made up of some (much larger) number y of people with lives barely worth living. (See the sketch right after this list for why total Utilitarianism implies this.)
The Sadistic Conclusion: A state of affairs resulting from adding people with negative well-being is sometimes better than one resulting from adding people with positive well-being, starting from the same population.
Non-anti-egalitarianism: A perfectly equal distribution is better than an unequal distribution of the same size with lower total (and thus lower average) welfare. Stated differently, since this one is a bit complicated: let A and B be states of affairs such that the following three conditions all hold: A and B contain the same number of individuals; all individuals in B have equal well-being; and B has higher total (and hence higher average) well-being than A. Then B is better than A.
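To see why something has to give here, a minimal sketch under a simple total-utilitarian axiology (where a population’s value is just the sum of its members’ welfare levels; the specific symbols are only illustrative):

```latex
% Population A: N people, each at a high welfare level w > 0.
% Population Z: M people, each at a tiny positive welfare level (epsilon),
% i.e. lives "barely worth living".
\[
  V(A) = N \cdot w, \qquad V(Z) = M \cdot \varepsilon .
\]
% For any fixed N, w, and epsilon, choosing M large enough that
\[
  M > \frac{N \cdot w}{\varepsilon}
\]
% gives V(Z) > V(A): totalism ranks the enormous barely-worth-living
% population Z above the thriving population A.
```

Blocking this ranking means giving up the simple totalist axiology, and the impossibility theorems say the alternatives run into one of the other conditions above instead.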
Since Utilitarianism leads us to such egregious and extremely unintuitive conclusions, one might argue that the intuitive cost is not worth the benefit, making it a bad theory.
To learn more about Population Ethics, I highly recommend Hilary Greaves’ awesome paper entitled Population Axiology, which is a great introduction to the field.
The Utility Monster:
*Cue the noise that JAWS plays before a shark comes into the scene*: Bum Bum… Bum Bum… “It’s… It’s the utility monster!” “NOOOOOOOO”
Robert Nozick, in his famous book Anarchy, State, and Utopia, shows that Utilitarianism isn’t as egalitarian as it originally seems to be. He does this with his utility monster thought experiment, which is widely considered a strong critique of Utilitarianism.
To invoke these intuitions, imagine a world that is exactly like ours except that there is a single individual that, due to his neuro-chemical makeup, can achieve an extremely high amount of (finite) utility from consuming resources. The amount of utility he receives from all resources far outweighs the utility any other human being might receive. To maximize overall utility in this society, it would be morally best to give the vast majority of resources (perhaps even all of the resources) to this single individual, leaving everyone else in the world with terrible lives.
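As a toy illustration with entirely made-up numbers (and assuming simple additive utility), the resource allocation works out like this:

```latex
% Toy numbers, purely illustrative: R units of resources to distribute, and
% one "utility monster" whose neuro-chemical makeup converts resources into
% utility 1000 times more efficiently than anyone else.
% If the monster receives r units and everyone else shares the remaining
% R - r units (at one unit of utility per unit of resource), total utility is
\[
  U(r) = 1000 \cdot r + (R - r) = 999 \cdot r + R .
\]
% U is strictly increasing in r, so the utilitarian optimum is r = R:
% give the monster everything and everyone else nothing.
```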
This outcome seems obviously very unintuitive and should be a reason to reject Utilitarianism, at least in its naive form. It just seems like we care about egalitarianism too much to abandon it for Utilitarianism in these edge cases.
Incommensurability Problems:
There is another critique of Utilitarianism (largely regarding the linearity and commensurability of utility) that is sometimes known as the problem of incommensurability. While the problem is easy to see for objective-list versions of utility (how do you compare the value of rights against love against friendship, for example?), it is a little trickier for hedonistic Utilitarianism (which is purely about maximizing pleasure and mitigating pain).
One form of the issue comes from a theory of pleasure from Mill (no, not that one; this one). Mill believes that there are higher and lower forms of pleasure that differ in quality, which many find pretty plausible. One may consider intellectually stimulating work, a sense of fulfillment, or creating/viewing art as qualitatively different from other hedonistic forms of pleasure like enjoying a meal or doing heroin.
As in the case of objective-list theories, it seems really hard (potentially even impossible) to compare these qualitatively different types of pleasure. One can ask how anyone could possibly arrive at an objective comparison that could be used to measure whether a society is doing well or not.
Another form of this problem comes about if one thinks that the only difference between pleasures is in duration and degree, with no qualitative difference. Even so, it doesn’t seem to be the case that some great number of paper cuts, spread across many people over a period of time, could eventually be worse than the most horrible genocide. Furthermore, it seems highly implausible that enough chocolate could actually add up to the greatest pleasures in the world, like your child being born or falling in love for the first time. These types of pleasure just seem totally incommensurable.
Lastly (and this one is more controversial), another issue comes from the idea that imperceptible pain or pleasure should be taken seriously. In his book Reasons and Persons, Derek Parfit argues that one must count imperceptible pain toward net utility. He believes this largely because he thinks one can’t define a demarcation point between perceptible and imperceptible pain (leading to heap-esque vagueness problems) and because an accumulation of imperceptible pains results in perceptible pain.
While I have my own issues with this line of reasoning (namely, perceptibility seems to have a clear demarcation point by definition, and the badness of imperceptible pain could be modeled as a step function rather than a linear function, with badness on the y-axis only increasing once enough imperceptible pain on the x-axis has accumulated; a sketch of this follows below), many take it seriously. Taking it seriously would make both (1) incommensurability problems and (2) the Repugnant Conclusion worse, in very similar ways: (1) some amount of imperceptible pain spread across people becomes worse than genocide, and (2) for every great society, you can have a “better” society with lives entirely neutral aside from imperceptible pain. Once again, because of the implausibility of these conclusions, one may take this to be an argument against Utilitarianism.
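To make the step-function alternative I just mentioned a bit more concrete, here is one hypothetical way to write down the two models (the threshold and the particular functions are placeholders of mine, not anything Parfit commits to):

```latex
% x = total accumulated imperceptible pain, T = the threshold at which the
% accumulation becomes perceptible, B(x) = how bad that pain is.
% Parfit-style linear model: every bit of imperceptible pain counts.
\[
  B_{\mathrm{linear}}(x) = c \cdot x, \qquad c > 0 .
\]
% Step-style model: badness stays at zero below the perceptibility threshold
% and only starts growing once the threshold is crossed.
\[
  B_{\mathrm{step}}(x) =
  \begin{cases}
    0        & \text{if } x < T, \\
    f(x - T) & \text{if } x \ge T,
  \end{cases}
  \qquad f \text{ increasing},\ f(0) = 0 .
\]
```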
Other Values:
Many complain that the utilitarian calculus does not and cannot take into account many values that are necessary for a moral system. In this section, I’ll just rapid-fire through some of these other values. In each of these cases, Utilitarianism tells us that the intuitively immoral thing to do is the most moral thing to do:
Rights matter: You should not forcibly put everyone in the world into the experience machine against their will.
Intentions matter: A doctor who accidentally saves a life while attempting to commit malpractice is still clearly acting immorally.
Bodily autonomy matters: If a parent finds out that hitting their child will make the child more successful, but the child would still rather not be hit, it is immoral to hit them.
Honesty/keeping promises matters (adapted from Huemer): Your best friend tells you on his deathbed that he wants to give all of his money to the Make-A-Wish Foundation — he emphasizes that this is the only charity he wants it going to. You promise that you will do it. It would be immoral to lie and give it to a more effective charity.
The right to life matters: You morally should not take an old or disabled person off of life support to save resources that would eventually lead to more net-QALYs overall.
Desert matters (also adapted from Huemer): You have a tasty cookie that will produce harmless pleasure with no other effects. You can give it to either serial killer Ted Bundy or Gandhi. Bundy enjoys cookies slightly more than Gandhi. Morally, you should not give it to Bundy.
Respecting privacy matters: Even if you know your neighbor won't see you, it is still immoral to watch them get undressed through their window, despite any pleasure you might gain.
Consciousness matters: You may not end someone’s life if they have no family or friends, are completely neutral about life, and you know killing them would have net-zero impact.
Altruism matters: It is morally good to altruistically give your friend a piece of candy even if you would get more net value from it than they would.
Sadistic pleasure doesn’t matter: You should not go around hitting people even if you are a sadist and the pleasure you receive from hitting someone is greater than the pain they suffer from being hit.
Extinction matters (see this Introduction for my inspiration): Going from losing 98% of some population to losing 99% is not as bad as going from losing 99% to losing 100%, even though many forms of Utilitarianism treat the two losses as equally bad (see the sketch right after this list). This seems true even conditional on the surviving share of the population staying approximately the same over time.
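To spell out the bookkeeping behind that last item, here is a minimal sketch under a simple totalist accounting that counts only the welfare of currently existing people (my own illustration of the view being criticized):

```latex
% N = current population size, w = average welfare per person.
% Badness of losing a fraction d of the population, counting only the
% welfare of currently existing people:
\[
  B(d) = d \cdot N \cdot w ,
\]
% so the two one-percent increments come out exactly equal:
\[
  B(0.99) - B(0.98) = 0.01 \cdot N \cdot w = B(1.00) - B(0.99) .
\]
% The anti-utilitarian intuition is that the second increment is far worse,
% because going from 99% to 100% is extinction and forecloses everything
% that would have come after.
```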
Given all these different cases with different intuitions, one might wonder why we would even expect only a single value (maximizing net-pleasure) to be morally significant.
Cluelessness:
It really intuitively seems like we know which actions are immoral (say, unnecessary torture), and if a theory cannot account for this intuition, that would be a very good reason to reject it. However, some argue that, under forms of Utilitarianism that take Longtermism (the idea that future lives heavily outweigh present lives in moral significance) seriously, we have very little idea of which actions will result in net positive or net negative outcomes. In other words, we are morally clueless:
For example, one’s choice of whether or not to drive on a given day will “advance or delay the journeys of countless others, if only by a few seconds”, and they in turn will slightly affect others. Eventually the causal chain will (however slightly) affect the timing of a couple conceiving a child. A different sperm will fertilize the egg than would otherwise have been the case, leading to an entirely different child being born. This different person will make different life choices, impacting the timing of other couples’ conceptions, and the identity of the children they produce, snowballing into an ever-more-different future. As a result, we should expect our everyday actions to have momentous—yet unpredictable—long-term consequences. Some of these effects will surely be very bad, and others very good. (We may cause some genocidal dictators to come into existence thousands of years from now, and prevent others.) And we’ve no idea how they will balance out.
Long-term consequences swamp short-term ones in total value. And because we generally can’t predict the long-term consequences of our actions, it follows that we generally can’t predict the overall consequences of our actions.
This also means that seemingly morally neutral actions are likely just as morally impactful as the decisions we think are most relevant to morality, which is yet another very unintuitive consequence of Utilitarianism. Minus one more point for the Utilitarians!
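To make the structure of the cluelessness argument explicit, here is a rough sketch in symbols (my own framing of the quoted argument, not the source’s notation):

```latex
% Rough decomposition of the total value of an action a into its near-term
% and far-future consequences.
\[
  V(a) = V_{\mathrm{near}}(a) + V_{\mathrm{far}}(a) .
\]
% The cluelessness worry: the far-future term dominates in magnitude,
\[
  \lvert V_{\mathrm{far}}(a) \rvert \gg \lvert V_{\mathrm{near}}(a) \rvert ,
\]
% but its sign is unpredictable to us, so whether V(a) is positive or
% negative overall, i.e. whether the action is net good or bad, is also
% unpredictable.
```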
Critiques (and My Responses):
Some intuitions go wrong because our brain creates proxy rules/heuristics when it generalizes from many instances of correlation. For example, lying isn’t truly bad in itself; it just seems that way because lying is associated with bad outcomes most of the time, so our brain creates a rule that all lying is bad (it would be really hard to measure consequences in every case). These rules also explain why such intuitions go away in cases with a high probability of terrible consequences, for example, when Nazis ask if you are hiding Jews in your attic.
What is the test for whether this rules-based story is what’s happening, so that we can apply it across the board? You could also just say that lying has some weight in itself and simply gets outweighed in these cases; that, in fact, seems like a much simpler explanation. Lastly, even if these intuitions do have some other origin, why is that a reason to take them less seriously?
There don’t seem to be any better theories out there, especially for how societies should be run.
I just think this is a pretty solid objection. It seems like many other normative theories don’t do such a good job of modeling how a society should be run, especially a society that constantly relies on trade-offs between values. One good exception might be Prioritarianism, which claims that a society should prioritize the least well-off people. If you do a bit of curve fitting by throwing some other values in there and defining welfare by an objective-list or preference theory instead of a purely hedonic one (accounting for some other values in someone’s personal life), it seems like you can avoid a lot of these critiques. I will note, however, that this largely depends on one’s preference for a more or less “elegant” theory (i.e., a theory with more parameters is less elegant).
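For concreteness, here is one standard way the prioritarian idea gets formalized; the particular concave function is just an illustrative choice of mine:

```latex
% Prioritarian social value: aggregate individual welfare levels w_i through
% a strictly increasing, strictly concave function g, so that an extra unit
% of welfare counts for more when it goes to someone who is worse off.
\[
  W_{\mathrm{prior}} = \sum_{i} g(w_i), \qquad g' > 0,\ g'' < 0
  \quad \text{(e.g. } g(w) = \sqrt{w} \text{ for } w \ge 0\text{)}.
\]
% Plain total Utilitarianism is the special case where g is the identity:
\[
  W_{\mathrm{util}} = \sum_{i} w_i .
\]
```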
Impartiality and widening the moral circle are actually better from a communitarian perspective because they increase economic efficiency, which results in everyone being better off.
This largely depends on how successful your group would be in that larger society relative to the rest of the population. The claim would be true if the average member of your group contributed less to the common good than the average person in the society, but it is not obvious that this would be the case. And even if it were, the people in your group might end up being held more accountable for free-riding.
Rule Utilitarianism saves a lot of the examples in the “Other Values” section.
These rules are usually created for the sake of better consequences, and they admit exceptions if you condition on the fact that, in a particular circumstance, you know breaking the rule will result in better consequences. In many of these cases, the reason the rule exists seems to be that one person breaking it would help establish a general standard under which the rule can be broken. However, if you change the case so that it is conditioned on no one else ever knowing what you did, the rule seems to no longer apply, and yet the action still seems immoral.
There are robust ways to approximate what will cause future harm or good, for example, caring about existential risks.
Maybe, but two points should be made. One is that this largely depends on how tractable reducing existential risks actually is, given all the uncertainty; there isn’t much historical precedent for doing something like this, since it is just really hard to predict the future with high accuracy. The other is that it doesn’t solve the issue that almost all of the actions we associate with morality turn out not to be what really matters morally, because their effects get overpowered by other long-term consequences.
Experience machine doesn’t work because of this awesome article I totally found coincidentally and totally didn’t write.
Oh, shoot. You’re totally right. Everyone should check out that article that is totally not yours because it’s awesome.
Utilitarianism.net, a website dedicated to the philosophy of Utilitarianism, helped me a lot in making this post. It was created by Richard Y. Chappell (Substack below) and others, to whom I’m very grateful!
As always, tell me why I’m wrong!
I am pleased to read your work, tho I just skimmed so far. One of my agendas is to help build non-utilitarian theories. Most non-utilitarians are not very interested in building theories. It seems perhaps you are.
Your theory says that we ought to weight the welfare of our associates in proportion to their closeness to us. That's a reasonable proposal but it's incomplete. You need to tell us how we should make decisions about how to choose (and dynamically update) the optimal degree of association to have with each other person (or animal).