Note: While I think this applies to all sorts of people, I made this post about EAs because I think it inherently goes against the stated goals of the movement. If you’re not EA or EA-sympathetic at all, I still think this post can be valuable! :)
Many Effective Altruists (EAs) want to do the most good possible — I think this is an excellent idea. There is one type of situation, though, where I think EAs (myself included sometimes) can do a little better: politics.
I can talk all day about why I don’t think politics is rational, in general: political conversations usually don’t result in changed opinions; there are smart people on both sides, which should make one very uncertain about their own takes; conversations are usually not from first principles, which leads many to take tribal positions based on sociological demographics (religion, class, etc.); they’re usually highly personal, which creates a lot of bias; and more.
Marginal Thinking and Neglectedness:
For good reason, Effective Altruists like to think on the margin: what direction do we need to move in with our next step forward? This makes a lot of sense if you want to be highly impactful. One way to be very efficient in marginal thinking is to focus on areas of impact that have been widely neglected relative to their potential value. General political conversations, I think, get far too much attention relative to one’s personal, marginal impact: not only will you likely not change someone’s mind, but it probably doesn’t even matter much if you do!
It seems like many get caught in the wildfire of political discussion and don’t examine the meta-level assumptions they are making: how much time one should dedicate to political conversation, what topics one should focus on, and so on. While I think EAs do a good job of applying this to their work, if they are going to think about social-type issues anyway, they should spend that time on the issues that matter at the margin. While this might not be true if one has a really radical take on a certain political matter, it is true for most.
Critique #1 — But It’s Fun:
“Fine,” one might say, “while political conversations may not be impactful, I am not having political conversations to update my beliefs or for impact purposes — it’s just fun.”
Response: Reputational Concerns Are Real:
That’s fair. However, one underrated point about non-impact-based political conversations is that your reputation as an EA matters a lot for the community. Whether or not you believe that you should judge people’s beliefs in one area based on their beliefs in a completely different area, it certainly happens. If you take a stand on a controversial topic or are on the other side of the aisle, people will simply not take you as seriously. This seems bad, but EAs should be pragmatic, recognize that this is not an issue worth fighting over, and stop making controversial or heavily political claims.
This is especially true for people who do public intellectual work, on Substack or elsewhere. For instance, Bentham’s Bulldog, a great Substacker who you guys should all subscribe to right now, writes really well on a bunch of topics in philosophy related to Effective Altruism, which I think has a very good chance of making people take EA ideas more seriously. However, he also occasionally writes posts on controversial topics with titles like “It Matters If Abortion Is Murder.” Whether or not he is correct about these subjects, I think writing about and taking sides on them can harm his reputation, and thereby EA’s. What’s the point? Is it worth the risk? I agree that there is some calculation to be made between your personal interests (perhaps you enjoy talking about controversial topics) and EA’s reputation; we make trade-offs like this all the time. However, that calculation should actually be done, instead of letting your subconscious or the Overton Window do the filtering for you, which will likely not account for the reputational externality.
This sort of thing seems to be why mainstream articles keep describing EA/Rationalist-adjacent people as believing some crazy thing or other.
The same logic applies interpersonally. If you like talking about politics with friends or strangers, it is probably at least a little bad for the movement for you to go from talking about why you think eugenics is okay to arguing that factory farming is bad.
Critique #2 — But Some EA Topics Are Inherently Controversial:
“But some EA topics are already controversial: that normal people might be engaging in very unethical activities, that the long-term future really matters, and that AI might lead us to extinction.”
Response: Fine, Maybe It’s Okay Sometimes:
Perhaps it’s worth discussing controversial topics when they have a lot of potential value. Still, when introducing someone to Effective Altruism, I recommend against starting with “and the most impactful charity is the Shrimp Welfare Project because if you think that there’s a 5% chance that shrimp are sentient…” Once again, these things might be true, but they’re not a good place to start people off — they’re just very unintuitive.
A better approach might be to start very hedged: “Do you give to charity? Shouldn’t you care about the effectiveness of your charity? How do you define impact? Let’s try to quantify that a little bit!” Pragmatically, I think an approach like this, which is much more appealing (and less confrontational) to most interlocutors, will lead to much better consequences!
As always, tell me why I’m wrong!
I recommend against starting with “and the most impactful charity is the Shrimp Welfare Project because if you think that there’s a 5% chance that shrimp are sentient…”
I sort of agree, except to say that if you actually think this, you should probably at least imply it:
"There are some great global health charities though I donate to some more out there stuff"
1) Many political topics are underdiscussed and could have a high impact (e.g. tax reform to shift away from regressive taxes towards something like a land value tax).
2) There is no real line between applied ethics and politics; you may think that, e.g. evaluating UN programs that combat malaria is just applied ethics, but others would disagree.
3) Implementing certain policies could have a high impact (see, e.g., my post on how we added animal welfare to the Belgian Constitution), and to do that, you need to understand politics, which means you need to have conversations about politics.
4) One of the reasons I drifted away from EA is its (unempirical) obsession with marginalism. Because EAs basically only look at economics (see, e.g., my post on it), they miss the insights of other disciplines, which point against marginalism. For example, sociology has shown again and again that the victories of social movements are not continuous; they’re discontinuous, not linear, more like an S-curve. Suppose, for example, you try to find support to get animal welfare added to the Belgian constitution. You have 13% of the votes, and you ask EA funds for money to get it to 18%. But 18% still doesn’t do anything, so you don’t get any funding. You manage to get 55% of the vote, but that’s still not a supermajority, so you ask EA funders for money to get it to 57%; but 57% is still not a supermajority, so you don’t get any funds either. Eventually, you get just over a supermajority and suddenly there’s a seismic shift in the legislative landscape. It’s not continuous; it’s discontinuous. Every social scientist worth their salt knows that many interventions don’t work well with marginalism, but for some reason, EAs tend not to get this.
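For what it’s worth, the threshold dynamic described in point 4 can be made concrete with a toy model. The sketch below is my own illustration, not from the post or the commenter; the two-thirds threshold and the specific vote-share numbers are assumptions. If impact is modeled as a step function of vote share, then small marginal gains in support are worth nothing until one of them happens to cross the supermajority line.

```python
# Minimal sketch (illustrative only): impact as a step function of vote share.
# Under this model, nudging support from 55% to 57% has zero marginal value,
# even though the bump that finally crosses the threshold is worth everything.

SUPERMAJORITY = 2 / 3  # assumed threshold for a constitutional amendment


def impact(vote_share: float) -> float:
    """Value realized at a given vote share: nothing until the threshold is crossed."""
    return 1.0 if vote_share >= SUPERMAJORITY else 0.0


def marginal_value(current: float, bump: float) -> float:
    """Value added by funding a small increase in support."""
    return impact(current + bump) - impact(current)


for current, bump in [(0.13, 0.05), (0.55, 0.02), (0.65, 0.02)]:
    print(f"{current:.0%} -> {current + bump:.0%}: "
          f"marginal value = {marginal_value(current, bump)}")
```

Only the last bump in this example crosses the threshold and shows any marginal value; evaluated increment by increment, the earlier (equally necessary) gains look worthless, which is the commenter’s objection to purely marginal evaluation.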