Moral Realism is False!
All the world needs is one more post on metaethics by a twenty year old!
Realizing only now that I took the title from here.
Moral realism is the view that there exist stance-independent (independent of desires, self-interest, beliefs, and attitudes) facts about how one should act. While I won’t go into detail here, there are two general kinds of moral realist positions: moral naturalism and non-naturalism. Naturalists assert that moral facts are reducible to natural facts (neurotransmitters that result in pleasure or pain, for example), and non-naturalists assert the opposite (what a surprise!). There are a lot of arguments for moral realism, but here, I will go over some of the more popular ones in the literature.
Some objections are repeated because they apply in multiple places. This list is obviously non-exhaustive, but I think it covers a bunch.
Argument from intuition:
This argument does most of the work and generally serves as the root from which other arguments for moral realism grow. Most of the other arguments for and against moral realism are largely responding to this argument or something similar.
Premise 1: Torturing babies for fun (or any other action that goes against deeply held moral intuitions) really seems objectively wrong.
Premise 2: If something seems objectively wrong to that degree, we have reason to think that it is actually objectively wrong.
Premise 3: Torturing babies for fun could be objectively wrong if and only if moral facts existed.
Conclusion: Therefore, we have reason to think that moral facts exist.
Objections:
Our moral intuitions have been developed due to various evolutionary and cultural factors. This either makes moral facts unnecessary to postulate (as you can explain our feelings without invoking the existence of more ontological entities) or means that we would have no access to the moral facts, because there is no selection pressure for tracking them. While some argue that there might be some pre-established harmony between the evolutionary development of our moral intuitions and the moral facts, the harmony itself requires further evidence.
Beyond evolutionary debunking arguments, there is a larger question about how we attain access to these moral facts if they are real; this is known as the epistemological problem for moral realism. There are generally two ways of asking this question: 1) in what ways are moral facts causally affecting our beliefs about them, and 2) would we have these intuitions had there been no moral facts? If one can’t explain how moral facts cause our beliefs, and our beliefs would be no different if the moral facts weren’t real, we can contend that our moral faculties are not reliable.
One might come to the conclusion that we should accept intuitions as reliable a priori knowledge because we get knowledge about logic and math this way. From this, many generalize the reliability of intuitions to moral intuitions. This is misguided, however, as it seems like we only see mathematical truths as truths because they are supported both by intuitions and empirical evidence. Had logical truths not had this empirical success, we likely would not treat these intuitions as reliable, but rather see them more like a cognitive bias (as we do for many other flawed intuitions). Since moral truths cannot be tested empirically, we should not rely on moral intuitions the way we do for mathematical ones.
One may be inclined towards a deflationary ontology. This would make the queerness of moral facts, being “utterly different from anything else in the world”, seem epistemically repugnant as it invokes the existence of a new type of ontological entity. This may make one much less inclined towards believing in the existence of non-natural moral facts.
Empirical research has shown that there is great moral disagreement among people (historically and geographically), and we have no clear (and largely universally compelling) method of distinguishing which intuitions are more reliable. What would one say to someone who doesn’t share any of their moral intuitions? This leads some towards a morally relativistic approach, in which morality is merely determined by the society that one is in.
Michael Huemer’s Ontological Argument:
For brevity and clarity, I will not be using the exact words that Huemer uses (Huemer says not torturing babies rather than moral facts in general, for example); to find his exact wording, the paper can be found above. If anyone has a problem with the way I phrased the argument for any reason (e.g. if it was uncharitable), write a comment, and I’ll do what I can to correct it.
Premise 1 (Probabilistic Reasons Principle — PRP): If you assign some probability to a proposition that tells you to act, that probability should, proportionally, affect your reasons for action.
Premise 2: If we knew there were some objective moral facts, we have more reason to follow them than if there were no moral facts.
Premise 3: We have some reason to think that there are moral facts (for example, there are smart people who think that moral facts exist, other arguments, etc).
Premise 4: Based on the Probabilistic Reasons Principle, we have some reason to act in accordance with the moral facts.
Premise 5: The moral facts that we should follow based on the previous premises are independent of our beliefs, desires, attitudes, etc., making it the case that there are stance-independent moral facts.
BTW, Michael Huemer’s Substack is great and can be found here. Here is his Substack post explaining his ontological argument.
Objections:
For many of the objections, I use a particular decision theoretic approach to how you should apply the probabilistic reasons principle: expected value, in which you act on the option with the highest expected value (i.e. multiplying the net value of some proposition by the probability that you receive that value given an action). I did this to make the objections more succinct, but I think they hold regardless of which plausible decision theoretic principle one uses.
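To make this expected-value framing concrete, here is a minimal sketch; every probability and value in it is a made-up assumption, chosen only to show the structure of the calculation:

```python
# Hypothetical expected-value calculation for acting under moral uncertainty.
# All probabilities and values below are invented purely for illustration.

def expected_value(outcomes):
    """Sum of probability-weighted values over the possible moral views."""
    return sum(p * v for p, v in outcomes)

# Suppose you assign a 0.2 credence to moral realism, on which donating has
# moral value 10, and a 0.8 credence to anti-realism, on which it has value 0.
# Donating also costs 1 unit of self-interest either way.
ev_donate = expected_value([(0.2, 10), (0.8, 0)]) - 1   # 2.0 - 1 = 1.0
ev_keep = 0.0

# PRP-style reasoning: even a small credence in realism tips the scale.
assert ev_donate > ev_keep
```

Note that several of the objections below amount to disputing one of these made-up inputs: the credence, the value assigned to realism, or whether multiplying them is the right rule at all.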
The amount that one should change their actions, given their subjective probability that moral realism is true, might be extremely slight (if one thinks moral realism has a very low probability of being correct). This objection alone is compatible with moral realism being true, but it would make realism barely affect how one should act.
One can reject the notion of categorical reasons—the idea that one can/should rationally act in accordance with values that are not in their “subjective motivational set.” If one thinks that categorical reasons are unintelligible, they cannot rationally motivate one to act. If you do an expected value calculation for action, it’s not clear what sort of value you would attribute to moral realism that would guide you towards action.
It may be that the cost of acting in accordance with the moral facts, as opposed to your own interests, is too high given a cost-benefit analysis, depending on the probability you assign to moral realism and the value you assign to it relative to your personal interests. If this is the case, moral realism having some probability wouldn’t actually change any behavior.
Thanks to my friend Nolan S. for sharing this argument with me: if one has a low enough probability in some proposition, this might be a Pascal’s Mugging scenario, in which the probability should no longer affect one’s decisions (cases of low probability, high uncertainty, and high value). This makes the Probabilistic Reasons Principle fail in circumstances where the probability is low enough. Therefore, Premise 5 doesn’t follow, because the argument depends on your probability in moral realism being high enough to pass the Pascal’s Mugging threshold, making the conclusion dependent on your epistemic attitude towards moral realism.
With respect to action, one may want to take a Lockean Thesis approach, in which some probability in a proposition can only affect one’s reasons to act if the probability passes some threshold. If the probability of a moral truth in some case (or of moral realism generally) does not reach this action threshold, then under the Lockean view, one may have no reason to act in accordance with the moral facts even granting PRP.
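The contrast between PRP and the Lockean view can be sketched as two rival decision rules; the 0.5 threshold and all numbers here are hypothetical assumptions, not anything from the literature:

```python
# Two hypothetical decision rules for how probability feeds into reasons.
# The 0.5 threshold and the numeric inputs are assumptions for illustration.

def prp_reason_strength(p, value):
    """PRP: reasons scale continuously with the probability."""
    return p * value

def lockean_reason_strength(p, value, threshold=0.5):
    """Lockean view: probability below the threshold gives no reason at all."""
    return value if p >= threshold else 0.0

p_realism = 0.2  # a credence below the assumed Lockean threshold
value = 10

print(prp_reason_strength(p_realism, value))      # 2.0, some reason to act
print(lockean_reason_strength(p_realism, value))  # 0.0, no reason at all
```

The design difference is the whole objection: on the continuous rule every nonzero credence matters, while on the threshold rule a sub-threshold credence in moral realism generates no reason whatsoever.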
There is a Moral Arbitrariness Challenge here: even if there are moral facts, they may not serve as a motivation for acting. While one can assign some (perhaps smaller) probability to moral facts being action-guiding, this might greatly decrease the overall probability that feeds into the calculation.
Assuming the Regularity Principle of Bayesianism (which states that any logically contingent proposition must receive a credence above zero), one can make a similar ontological argument for gastronomic realism (the view that there are objectively good-tasting foods), even if the probability is lower. This is obviously absurd and should be seen as a reductio against the ontological argument.
Like in Pascal’s Wager, for every moral proposition one can make, one can make the opposite proposition and assign a higher value to it (akin to the Many-Gods Objection). Say, for instance, that someone tells you that you have some reason to believe you should give $5 to charity, and that you should donate at least some amount because there is some probability that moral realism is true. However, there is also some probability that shmoral shmealism (which coincidentally sounds like the name of a Kosher deli) is true, on which you have an obligation not to give $5 to charity. This would cancel out the moral realism consideration and result in no difference in action.
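The cancellation point can be put in the same expected-value terms; the symmetric probabilities and values below are invented assumptions chosen to make the arithmetic transparent:

```python
# Hypothetical illustration of the cancellation: if the opposite obligation
# gets symmetric probability and value, the expected values cancel exactly
# and PRP yields no net reason to act. All numbers are invented.

p_realism, value_if_realism = 0.1, 5        # says: donate the $5
p_shmealism, value_if_shmealism = 0.1, -5   # says: do NOT donate the $5

net_reason_to_donate = (p_realism * value_if_realism
                        + p_shmealism * value_if_shmealism)

print(net_reason_to_donate)  # 0.0
```

Of course, a defender of the argument will deny that the probabilities and values are symmetric; the objection bites only to the extent that the "shmealist" proposition really is as credible as its rival.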
Companions in guilt:
The companions in guilt argument is a general type of argument that responds to those who object to some proposition A because it has some problematic feature. Very abstractly, it says that, if one is willing to accept this problematic feature in proposition B, it would be arbitrary for one not to accept it in proposition A.
More specific to morality, the claim would be that one probably accepts realism of a different kind (scientific realism, epistemic realism, mathematical realism, etc.), which has problematic features similar to moral realism’s. One example is evolutionary debunking, in which you state either 1) that the existence of moral facts isn’t necessary (great article linked that is required reading), given that we can explain our intuitions away in terms of evolution and culture, or 2) that we would have no access to the moral facts, because our intuitions come from evolutionary and cultural pressures rather than standing in causal relation to the moral facts.
The same responses, however, apply to any other kind of realism, forcing one to reject those as well; it would be logically inconsistent to apply them to one and not the other. Additionally, making the claim against moral realism in the first place (assuming one’s interlocutor will accept it given a convincing enough argument) seems to assume that you’ve already accepted an epistemic realism of some sort, making the argument self-defeating.
Objections:
One can reject all forms of normative realism. This seems less like a bitten bullet if one takes beliefs as merely pragmatic, a form of practical rationality (as opposed to tracking some objective feature of the universe). One can say that, when we are forming true beliefs about the world, we are merely holding beliefs that are subjectively useful (i.e. some sort of pragmatism) and that epistemic norms are just instrumental. This makes sense given the different thresholds for belief we use in different practical circumstances, and it would explain how arguing against epistemic realism isn’t self-defeating.
Unlike in morality, despite disagreement, we have good methods for figuring out which epistemic facts are better and worse, as we can test them. There also seems to be much less drastic disagreement about epistemic facts, leading one to believe that convergence arises largely through revision.
There is no evolutionary reason for us to have access to the normative facts about ethics, but there is for other types of realism (mathematical, epistemic, etc.). It would be evolutionarily important for us to know about epistemic or mathematical facts (for instance, knowing how many tigers went into some cave), so evolution equipped us with senses and faculties for these.
There can be facts about what beliefs are rational (epistemic realism) without them actually being motivating — perhaps one would practically care about them because they are instrumental towards one’s final ends (which doesn’t apply to morals). While there are some who think that moral facts are not motivating (moral externalists), this raises the question of why someone would care about these facts. One may argue that the problematic feature of moral realism is that it is both normative and motivating, as opposed to epistemic realism, which is only normative, making the companions in guilt argument fail insofar as it implies that the moral facts are motivating.
David Enoch’s Argument From Deliberative Indispensability:
This argument is similar to, but not the same as, the companions in guilt argument.
Premise 1: There are objective normative reasons that are indispensable with respect to practical deliberation.
Premise 2: Practical deliberation is necessary for everyday life.
Premise 3: Committing to normative reasons implies recognizing the existence of objective normative truths.
Conclusion: Therefore, practical deliberation necessarily implies the belief in objective normative truths (AKA normative realism).
Objections:
One can reject premise 1 and state that objective normative reasons are not indispensable to practical deliberation. While one can question why one chooses to act in accordance with whatever values they do (likely one’s own preferences), one may say that practical reason involves achieving ends that are motivating in themselves (which seems to only include one’s own ends).
One can argue that, descriptively, people do act in their own interest despite there being no objective normative reason to do so. Given that this view accepts that there are no normative truths about what one should do, there would be nothing wrong with following it.
One can accept premise 1 but reject premise 3. They might say that just because one goes about everyday life given a certain belief doesn’t mean that they actually have (or should have) that belief. This actually seems pretty common — while some people think we have no reason to believe we are not in a skeptical scenario, they act as if they are not in one anyway because it is convenient; this doesn’t seem like it will change anytime soon.
One can take a Humean Constructivist approach in which things only have value in relation to someone’s values. If this view is correct, practical deliberation requires only that one has value-dependent reasons to act, which would not necessarily imply the existence of mind-independent normative truths.
Derek Parfit’s Argument Against the Self Interest Theory:
Brief Preface: Parfit prefaces this discussion by saying that, unlike what Hume thought, reason is not purely a slave to the passions. He argues that there are some rational constraints that desires must follow and brings two cases to show this: 1) imagine a person with intransitive preferences (you can do this with hedonism as well if you replace the variables with preferred hedonic states) who prefers A to B, B to C, and C to A. If an agent has intransitive preferences, one can make a series of bets with them, trading A, B, and C, that will inevitably result in them losing all of their utility (also known as a Dutch Book). These preferences seem irrational.
2) Imagine an agent with Future Tuesday Indifference: they are exactly like you and me, except they don’t care about what happens to them on future Tuesdays. If a doctor asks them which date to schedule a surgery—either Tuesday without anesthetic and immense pain, or Wednesday with anesthetic and a tiny amount of pain—they would pick the surgery on Tuesday. This preference seems irrational. These cases seem to imply that one can have irrational preferences and that reason is not merely a slave to the passions — we can ask whether someone’s preferences are rational.
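The Dutch Book point can be simulated directly. Below is a sketch of a money pump against an agent with the cyclic preferences A over B, B over C, and C over A; the trade fee, starting wealth, and offer sequence are all hypothetical choices of mine, just to show the mechanism:

```python
# Money-pump sketch against an agent with cyclic preferences A > B > C > A.
# The fee, starting wealth, and offer sequence are hypothetical.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y): x is preferred to y

def will_trade(current, offered):
    """The agent trades (and pays a fee) whenever it prefers the offer."""
    return (offered, current) in prefers

fee, wealth, holding = 1, 10, "A"

# Offer the item the agent prefers to its current holding, twice around the cycle.
for offered in ["C", "B", "A", "C", "B", "A"]:
    if will_trade(holding, offered):
        holding = offered
        wealth -= fee

print(holding, wealth)  # A 4: back where it started, 6 units poorer
```

Because the preference relation has a cycle, the agent accepts every offer in the loop and can be drained of arbitrarily much wealth while ending up holding exactly what it started with.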
In his book Reasons and Persons, Derek Parfit (he actually didn’t get a doctorate, which is a cool stat to flex on your friends with) argues that the Self Interest Theory is less rational than both caring about others and what he calls the Present Aim Theory. The Present Aim Theory suggests that one should only care about their present wellbeing—this, Parfit claims, is pretty obviously ridiculous but a position one can hypothetically take.
Like Sidgwick, Parfit claims that the theory one ought to follow must be “pure”—”an agent may not give a special status either to himself or to the present.” The Present Aim Theory and the theory that tells agents to care for others are pure in that they either reject or accept both personal and temporal neutrality. The Self Interest Theory, on the other hand, is not pure—it “allows the agent to single himself out, but insists that he may not single out the time of acting. He must not give special weight to what he now wants or values. He must give equal weight to all parts of his life, or to what he wants or values at all times.” On the basis of this, Parfit argues that the Self Interest Theory is incompletely relative.
If the Self Interest Theorist tries to defend himself by claiming temporal neutrality, the Present Aim Theorist can reject this—the Self Interest Theorist doesn’t (and presumably shouldn’t) care about past pain, making him temporally partial to his present and future. Parfit gives a thought experiment to further invoke this intuition: imagine you wake up in a hospital bed, and a doctor says, “You got into a car crash, resulting in the need for a surgery. The problem is that there were two patients in this room (one of whom we already operated on and one whom we will operate on in 20 minutes), and I don’t remember which one you are.” Parfit asks: would you rather be the one who has already had the surgery or the one who still must undergo it? The Self Interest Theorist would obviously rather be the person who has already finished the surgery, but this isn’t temporally neutral.
Objections:
One may argue that desires don’t have rationality constraints. In cases of “irrational preferences,” one may argue that our intuitions are going wrong because these cases are very different from the real world, and we assume that the agent is wrong about their own preferences. This intuition usually goes away if we consider a case where the agent is an AI, and these preferences represent its built-in objective function. This makes sense according to the theory that our intuitions go wrong because the case is too far from human experience — we are not going to project our preferences onto these agents as we do for humans. For a similar argument, see Sharon Street’s argument In Defense of Future Tuesday Indifference (which is an excellent and entertaining read!).
If one rejects rationality constraints on desires, any talk (including Parfit’s discussion) that uses reason to critique our desires is unfounded and unmotivating. For example, if I prefer some set of hedonic states to others, no one can say anything that can make me rationally change my true preferences.
One can argue that the Present Aim Theory is incoherent. If it is about my pleasure in this instantaneous moment, it wouldn’t mean anything for action (by the time I do something, it would be the next moment), and if it means my current preferences about the future, it would be the same as the Self Interest Theory.
Even if preferences have some rationality constraints (perhaps they must follow the rules of propositional logic and can’t contradict), it takes further assumptions to argue that they must be consistent across people and follow other rationality constraints. Just because there are some constraints doesn’t mean that they must follow all rules.
Given that we merely find ourselves with the preferences we have (as opposed to them being chosen), what would it even mean to have irrational preferences? What should the person do if they have Dutch-Bookable preferences? It seems like one can’t be subject to irrationality if they couldn’t, in principle, have it any other way.
One can argue that past pain and pleasure don’t matter because they no longer have the feature that gives hedonic states value in the first place — their being phenomenologically good or bad. Adding a past pain experience doesn’t affect this, as you will never experience its phenomenological badness.
Frege-Geach Problem:
Moral statements can be embedded in complex sentences (including conditionals and syllogisms) in ways that make sense. For example, someone might say, “if torturing babies for fun is wrong, then getting your brother to torture babies for fun is wrong.” Under a non-cognitivist view (in which moral statements simply reflect an expression of emotion), this conditional doesn’t make any sense — it would be like saying “boo murder, so you shouldn’t murder.” However, people understand (and are even motivated towards action by) these sentences as if they do make sense. This seems to support the idea that moral statements are meaningful and don’t represent mere expressions of emotion.
Objections:
One might argue that the way people talk about morality is irrelevant to whether it reflects objective reality. While anti-realist positions (i.e. error theory, non-cognitivism broadly, etc.) make claims about how people speak, this seems like an entirely different empirical question that an anti-realist shouldn’t be responsible for answering — it should probably be explained by anthropologists, psychologists, and linguists. This would be like asking atheists to explain how religious people use syllogisms in theology, talk about religious experience, or explain whatever else goes on here.
Since altruism developed evolutionarily as a means of promoting collectivist behavior, we wouldn’t expect moral talk to function as a mere expression of emotion — perhaps it’s more like signaling to others that you are altruistic. On this picture, we should expect moral claims to be as logically motivating as any other conditional, which is exactly what we find.
One may develop a quasi-realist approach in which moral expressions need to be consistent attitudes that preserve coherence in moral talk. One can adopt an expressivist logic on which these complex sentences represent conditional attitudes.
One can argue that, while it might seem like people are making intelligible arguments, they are actually not. When we make moral claims, they would argue, we are doing something else. This is not completely unheard of — we often make sarcastic or joking claims in which we claim one thing but mean the exact opposite.
As always, tell me why I’m wrong!


The Frege-Geach argument is against non-cognitivism, not anti-realism.
This is a good summary of the common responses to arguments for realism. Shockingly, I'm not convinced. I'll just cover the responses to intuitionism.
First, you try to debunk it. Now, there's a lot to say about this, but I'll just make four points. First, even if you can explain away our moral intuitions, that's still a cost of the theory. You can *always* make an alternative hypothesis to explain our aberrant intuitions. I can postulate the theory that my toe caused all things, caused you to hallucinate the things you're hallucinating, and makes you falsely believe that this theory is complicated. Every piece of evidence that you can adduce, I can debunk, but nonetheless, the view is crazy because it conflicts with obvious beliefs. Likewise with anti-realism. Second, I think anti-realism poorly explains our moral beliefs, like transitivity, that lack an obvious evolutionary explanation but are simple and elegant and therefore especially likely to be true. Third, if we're rational creatures, we might believe things because they're true. Fourth, this seems to also apply to modal and mathematical facts, which we shouldn't give up on.
I agree the epistemic challenge is a puzzle (at least, for godless heathens like you :P) but it's general to all sorts of a priori knowledge. But without a priori knowledge that inductive worlds are likelier than counterinductive ones, you can't justify evolution https://philpapers.org/archive/HUETIN.pdf
As for three, we don't have empirical evidence that, for example, we're not in the Matrix, but it's a bad theory. Don't go overboard on empiricism!
As for four, you shouldn't be inclined towards that ontology :). Numbers, sets, and so on are real. I think the modal facts are the clearest case--surely it's just impossible that there are contradictions.
Finally, five is true of lots of domains. People have wrong intuitions about math all the time, and about modality and logic. The way you find truth is how you do on every other philosophical topic--you carefully reflect. And there's not much disagreement about many things--we only talk about the things we disagree on. Everyone agrees that, for example, gratuitous torture is wrong.
Have you heard of Justin Clarke-Doane's book comparing Morality and Mathematics? I think he makes a very convincing, rigorous and exhaustive case for why, ultimately, one can't really justify Moral Realism but actually *can* justify Mathematical Realism.