
I recommend against starting with “and the most impactful charity is the Shrimp Welfare Project because if you think that there’s a 5% chance that shrimp are sentient…”

I sort of agree, except that if you actually think this, you should probably at least imply it:

"There are some great global health charities, though I donate to some more out-there stuff."


1) Many political topics are underdiscussed and could have a high impact (e.g., tax reform to shift away from regressive taxes toward something like a land value tax).

2) There is no real line between applied ethics and politics; you may think that, e.g., evaluating UN programs that combat malaria is just applied ethics, but others would disagree.

3) Implementing certain policies could have a high impact (see, e.g., my post on how we added animal welfare to the Belgian Constitution), and to do that, you need to understand politics, which means you need to have conversations about politics.

4) One of the reasons I drifted away from EA is its (unempirical) obsession with marginalism. Because EAs basically only look at economics (see, e.g., my post on it), they miss the insights of other disciplines, which point against marginalism. For example, sociology has shown again and again that the victories of social movements are not continuous; they're not linear but more like an s-curve. Say you try to find support to get animal welfare added to the Belgian Constitution. You have 13% of the votes, and you ask EA funders for money to get it to 18%. But 18% still doesn't do anything, so you don't get any money. You manage to get 55% of the vote, but that's still not a supermajority, so you ask EA funders for money to get it to 57%; but 57% is still not a supermajority, so you don't get any funds. Eventually, you get just over a supermajority, and suddenly there's a seismic shift in the legislative landscape. It's not continuous; it's discontinuous. Every social scientist worth their salt knows that many interventions don't work well with marginalism, but for some reason, EAs tend not to get this.
