Nice article! I broadly agree. But I suspect utilitarianism will prove a formidable player within the epistemic game you describe. Arguing for that is a tall order, and much work toward it remains to be done.
Also, this all seems to be within the "ethical truths are discovered rather than constructed" paradigm. I'm not sure what the constructivists say about this stuff. But maybe that's getting more into metaethics than "meta-normative ethics".
Thanks. Tbh, I’m inclined to agree with your first point, but I would make some adjustments toward a more pluralistic approach, since I care less about having a simple theory than most utilitarians seem to.
I agree, and I didn’t go into that much, but I was trying to get at something like a constructivist approach with the scaling-up-theories-of-well-being idea (in that we have individual values; what do we do now?).
Thanks for the comment!
Makes sense! I think this stuff is underexplored so I hope to learn more about it.
Me too!
Thank you for the advice, and I’m glad you said it despite me not asking lol. I have been considering this and am actually taking more math courses next year to see if I can handle the load. One point on the other side: from what I know, UChicago econ is actually much heavier on math than other economics departments (especially if you do the honors economic analysis track, which I’m trying next semester).
“Many of our intuitions around wanting simple moral theories seem largely influenced by religion and a need for rigorously defined governmental laws.”
My intuition about intuitions is that we want heuristics that let us play coordination games successfully. Ethics is not just about me wondering what I should do; it is also about what I want other people to expect from me, and what I want to expect from them. Gert made a point along these lines, and it is reminiscent of Haidt, even if Gert didn’t actually spell out an evolutionary story like that. Haidt thinks it is mostly about rationalizing what we have done, but we are less likely to do something if it will be hard to rationalize.
I think our behaviors converge for social reasons, and our theories diverge because the data (?) don’t discriminate between them much. We can frame it as disagreement over theory, or as an epistemically constrained environment. So I might end up agreeing with your point that theories don’t matter much, even if I reject some of your premises. Assuming extreme uncertainty about what we really ultimately want and the ultimate effects of our various behaviors and policies, people might end up with a narrow range of practical behaviors.
Traffic rules provide an analogy. When driving, we want simple traffic rules, so that relatively unreliable drivers will cause fewer disasters. It would be easy to over-stretch that as an analogy for ethics, but there is something there. We could probably imagine plenty of cases where the rules don’t really help, but that isn’t much of a criticism if such cases are sufficiently rare in practice.
I agree with this (especially the point about solving coordination/collective-action games) and think, tentatively, that it is supported by data. Evolution typically produces rules to generalize from rather than a separate procedure for every case.
On the other hand, I would argue that 1) anti-realists still need to do ethics for political-economy reasons, so how do they deal with evolutionarily debunked moral principles, and 2) a realist would probably argue that if we haven’t explained all of the moral rules in terms of cultural/sexual/biological evolution (which I suspect we won’t, at least anytime soon), we haven’t yet ruled out moral realism.
I think the main reason evolution doesn’t produce a separate rule for each case is that doing so is computationally expensive and has sharply diminishing returns. We see this in behavior and in heuristics, and it is a natural result of evolutionarily stable strategies.
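As a toy illustration (not part of the original exchange, and with payoffs chosen purely for the example), replicator dynamics in a Stag Hunt coordination game show how a population can settle on one simple convention or another without any case-by-case procedure; both pure strategies here are evolutionarily stable:

```python
# Toy replicator dynamics for a Stag Hunt coordination game.
# Assumed payoffs: stag vs stag = 4, stag vs hare = 0, hare vs anything = 3.
def replicate(x, steps=200):
    """x = share of the population playing 'stag'; returns the share after `steps` updates."""
    for _ in range(steps):
        f_stag = 4 * x                         # expected payoff of playing stag
        f_hare = 3                             # hare pays 3 regardless of opponent
        f_avg = x * f_stag + (1 - x) * f_hare  # population-average payoff
        x = x * f_stag / f_avg                 # replicator update: grow if above average
    return x

# The population converges to whichever simple convention its starting
# point favors -- no rule for every individual case is ever computed.
print(round(replicate(0.8)))  # -> 1 (everyone ends up hunting stag)
print(round(replicate(0.5)))  # -> 0 (everyone ends up hunting hare)
```

The point of the sketch is only that selection pressure favors a single cheap convention over bespoke per-case reasoning, which is the diminishing-returns argument above in miniature.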
I don’t think there’s much at stake if we make a realistic assumption about the epistemology. Moral realists, even if they think they know what’s right, have to interpret and apply it to concrete cases, and figure things out when they screw up. Moral anti-realists still want social norms that keep society working, just not grounded in anything stance-independent. Unless moral realists think that everything can be specified down to the last detail with perfect accuracy, they end up going through a very similar exercise of deciding what has or hasn’t worked, through trial and error to some degree.
I take the moral anti-realist position because I can’t figure out where moral realism gets these supposed stance-independent principles, or even what stance independence would really mean. Isn’t rationality part of a stance? Aren’t genetically determined behaviors or attitudes part of a stance? But I don’t think it makes a difference for how we proceed: we try to figure out what sorts of society are possible and choose which of the possibilities we prefer. I hope there are some meta-principles or meta-meta-principles we can get enough agreement on to serve as a basis for cooperation. I just don’t see them as stance-independent logical/rational/moral necessities; I think they depend on our stances being similar enough to make things work. I don’t want to deny the moral realists’ favorite principles, I want to claim that they are not stance-independent.
It would be incredibly intriguing if we found that consequentialism and deontology don’t make much of a difference to real moral actors, but does (Haidt 2001), which you link, really support that?
It seems to me the paper mainly talks about ordinary people, most of whom hold neither consequentialism nor deontology. Maybe they would make judgements differently if they studied philosophy and seriously endorsed some moral theory.
I'm not saying that philosophers are immune to the motivated reasoning or automatic moral judgement mentioned in the paper, but it's reasonable to assume they do better than others, or even that it's certain theories that help them do better (e.g. by making them discard that automatic moral judgement from time to time).
Thanks for the comment!
I think the other paper (linked under the word “some”) supports it more.
It’s fair to suggest that philosophers may be more responsive to these arguments than ordinary people (though that would require some empirical evidence). I would say, however, that this makes the project of establishing moral principles much less significant, since philosophers and people who take philosophy seriously are an extremely small proportion of society.
Yeah, it does make the project much less significant in a sense. Do you think that's because normative ethics' primary role is to explain moral phenomena in society? Or is it something else?
By the way, what do you think of literary theories? Should they shift their main focus to what ordinary people like to write (or read), rather than those classics?
Good questions.
I don’t think normative ethics’ goal is to make descriptive claims about society; I think the goal is for philosophers to discover what is valuable IN ORDER TO maximize for those things (probably less pragmatically inclined philosophers would disagree). If we saw that ordinary people weren’t acting any differently, this would increase my skepticism (and probably that of others, largely outside the realm of philosophy) about the importance of their project.
On the other hand, one can claim that societies at large are causally affected by moral philosophy (even if this is not true on an individual basis) -- for example, it seems likely that utilitarian philosophy has led more people to focus on global development and wellbeing.
I think a lot of people (and disproportionately successful people, which should be accounted for) read classics. I’m not sure this actually means we should put a lot of resources there (from a centrally planned societal perspective) -- people can read the classics without an interpretation of an interpretation of a footnote to Tolstoy (and that impact is likely to be small because of diminishing returns). An additional argument for keeping people in classics and the like is that other, non-classics-inclined members of society would like to have some people working on that stuff -- perhaps it makes them feel sophisticated, or justified in the stories they tell themselves, or something like that.
I do think philosophy and literature are very different things - I see literature as inspiring people and (some of) philosophy as trying to answer important questions.