Many Effective Altruism (EA) ideas lead to quite unintuitive moral conclusions. While moral philosophers typically treat these odd cases as a sign that a moral theory has gone wrong (as per reflective equilibrium), EA has embraced them.
Before I go into a few cases where I think EA has embraced unintuitive conclusions, let us first look at some classic examples from moral philosophy in which a counterintuitive case is taken as a reason to revise one's theory.
Cases That Go Against Intuition:
A classic critique of Naive Utilitarianism is the following case. Imagine you are a doctor about to perform surgery on a patient under anesthesia. Before you begin, you realize that five patients in other rooms are about to die because each is missing a vital organ (person A is missing a liver, person B a lung, and so on). Do you kill the anesthetized patient to harvest their organs and save the five? While it seems like the answer should be no, a utilitarian would say that killing the one person to save the five is the most ethical thing to do. Presumably this case should be counted against Utilitarianism and taken as a reason to revise or reject the theory.
A classic criticism of a simple Deontological theory of ethics is the following case. Imagine you are in Nazi Germany hiding Jews in your attic to protect them. A few days into hiding them, Nazis knock on your door and ask whether you know where any Jews are hiding. According to many Deontologists, you must tell the truth because truth-telling is your duty. Because this response is repugnant, this case should be counted against Naive Deontology, and one's moral theory should be updated accordingly.
Examples In EA:
First, it should be noted that much of EA does not rely on unintuitive conclusions. Using data to determine which charities are effective, for example, seems quite obvious!
In Peter Singer's famous drowning child thought experiment, you are supposed to imagine that you are wearing $500 sneakers and see a child drowning in a shallow pond. Despite the loss of money, it feels like you have a moral obligation to save the child. Singer compares this to an analogous and much more familiar case: you can donate your money to an effective charity like the Malaria Consortium and have a similar impact, with the only relevant difference being the distance between you and the child. Given that distance between people should not change the degree of moral obligation, Singer concludes that we have as much of a moral obligation to donate to effective charities as we do to save the drowning child.
Taken to its logical extreme, the argument claims that we have a moral obligation to give all of our money to charity except what is absolutely necessary to keep. This seems so intuitively implausible that Peter Singer, along with other Effective Altruists, has instead suggested donating 10% of your income to charity. But given the initial argument, even people who donate 10% should still be considered unethical. Given our intuition that some people who are not donating nearly all of their money are nonetheless acting ethically, we should reject this standard of morality.
Strong Longtermism is the idea that the bulk of the weight in our ethical decisions should be placed on longterm considerations for sentient (or human) wellbeing. This reasoning often leads EAs to believe that existential risks (such as those from AI, nuclear war, bioterrorism, and more) are the most important ethical concerns.
If someone adopts a Strong Longtermist perspective, most of our everyday ethical decisions are significantly overshadowed by decisions that don't seem ethical at first glance and whose consequences are extremely hard to predict (in some contexts, this is what's known as the Cluelessness Objection). For example, helping an elderly woman cross the street might disrupt traffic, which could alter the timing and circumstances of births, resulting in cascading effects with major ethical consequences in the long term.
EAs use very analytical and mathematical techniques to quantify the best ethical decision (e.g. expected value calculations, QALYs). There is a common phrase thrown around in Effective Altruist (and Rationalist) circles: "shut up and multiply." The general idea is that, although many of the conclusions drawn from shutting up and multiplying are unintuitive, this is just due to biases like scope neglect.
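To make the "shut up and multiply" style of reasoning concrete, here is a minimal sketch of the kind of expected value comparison it involves. All of the dollar figures, QALY estimates, and probabilities below are made-up placeholders for illustration, not real cost-effectiveness data.

```python
# Illustrative expected value comparison in the "shut up and multiply" spirit.
# All numbers are hypothetical placeholders, not real charity estimates.

def expected_qalys(cost_per_case: float, qalys_per_case: float,
                   probability_of_success: float, budget: float) -> float:
    """Expected QALYs from spending `budget` on a given intervention."""
    cases_funded = budget / cost_per_case
    return cases_funded * qalys_per_case * probability_of_success

budget = 1_000.0  # dollars available to donate

# Hypothetical intervention A: cheap, reliable, small benefit per case.
a = expected_qalys(cost_per_case=50.0, qalys_per_case=0.1,
                   probability_of_success=0.9, budget=budget)

# Hypothetical intervention B: expensive, speculative, large benefit per case.
b = expected_qalys(cost_per_case=5_000.0, qalys_per_case=30.0,
                   probability_of_success=0.2, budget=budget)

print(f"Expected QALYs from A: {a:.2f}")  # 1.80
print(f"Expected QALYs from B: {b:.2f}")  # 1.20
```

The point of the exercise is just that, once the numbers are multiplied through, the option that feels less emotionally salient can come out ahead, which is exactly where the tension with case-specific intuitions arises.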
While people do have scope neglect and other biases that affect the impact of their altruistic behavior, the idea that we should just "shut up and multiply" seems to miss a very basic tenet of moral philosophy: that you update your theory in light of its unintuitive consequences. In addition, if we actually just shut up and multiplied with no limit, we would likely end up giving strangers our wallets.
A Potential Rebuttal?
Thank you to Richard Y Chappell for providing me with this argument here.
It seems like a solid response to many of these unintuitive cases would be that we should use very abstract, deep intuitions (in the EA case, some calculation approximating utilitarianism) to ground our ethical decision making. Perhaps a good reason to distrust case-specific intuitions is that they are more subject to bias. While I think this answer is fairly plausible, I would like to add that, as Michael Huemer argues (see 3a), many other major ethical theories (e.g. deontology) can also be grounded in abstract and general intuitions.
For the record, I am very sympathetic to Effective Altruism and would even consider myself a striving Effective Altruist. I just think that this critique hasn't gotten much attention and should be brought to light.
As always, tell me why I’m wrong!
[Admittedly only skimmed your article.]
Two points:
1) The main way non-utilitarians avoid their theories having unintuitive implications is by refusing to put forward a theory in the first place. They are all moral particularists, and for vast ranges of cases, they have no idea what they think.
It is possible to formulate some deontological principles more explicitly, but I challenge anyone to do so in a way that doesn't have even more counterintuitive implications than utilitarianism. For example, you can add penalties for doing harm vs. allowing harm, and extra weight on the welfare of people who are "close" to you. I think these will lead to their own very counterintuitive implications. Edit: I leave it to another day to try to back up that assertion.
2) Reflective equilibrium is a pretty vague term. Ideally, it can be made precise using ideas from formal epistemology. But how exactly that would work is a very hard and deep question. Some smart people like Richard Pettigrew work on formal epistemology, but it's still niche. Hopefully machine learning can help eventually.
Ultimately, our posterior beliefs may end up placing *partial* credence on multiple propositions that are mutually inconsistent. However, that doesn't mean we shouldn't go through the exercise of formulating theories and seeing what they imply, so that we can see which sets of mutually consistent propositions have the highest joint plausibility in some sense. I think moral anti-theorists like Amia Srinivasan have missed this point.
https://global.oup.com/academic/product/intuition-theory-and-anti-theory-in-ethics-9780198713227
This article by Michael Huemer may interest you.
> I argue that, given evidence of the factors that tend to distort our intuitions, ethical intuitionists should disown a wide range of common moral intuitions, and that they should typically give preference to abstract, formal intuitions over more substantive ethical intuitions. In place of the common sense morality with which intuitionism has traditionally allied, the suggested approach may lead to a highly revisionary normative ethics.
https://philarchive.org/rec/HUERI