Make me a utopian metaverse
And why you should enter the experience machine
Imagine you’re in an extremely realistic simulated utopian metaverse: you have your 72 virgins (or whatever), they’re feeding you grapes, and you’re having the time of your life. You legitimately couldn’t comprehend a better world: all your needs and more are satisfied.
Are you better off?
Of course you are.
If given the option, should you leave the world you are in now to join this simulated one?
I think so!
The thesis
Many theories of value hold that wellbeing should be understood in terms of either (a) the quality of your hedonic states or (b) the degree to which your preferences are fulfilled. A simulated utopia simply optimizes for both, and it does so more effectively and reliably than could ever be achieved in the “real world.”
“But it’s not real!” you could exclaim. “The goods I have in the real world are just different from the ones in the metaverse because they are more real.” I think this kind of response probably misunderstands the importance of wellbeing and what makes it valuable.
I think the realness of goods doesn’t actually give them any more weight than they otherwise have (or, at least, not so much additional weight that no amount of pleasure could outweigh it). In other words, I think going into an experience machine—a machine where you are stimulated with all your favorite experiences → you get the resulting neurotransmitters → and their associated positively valenced states—would be as good for you as actually experiencing those things themselves. Want a permanent significant other? Consider a simulated one. Want to experience the awesomeness of nature or travel to foreign countries? Put on your hyperrealistic VR headset and see it there!
The intuition pump
To illustrate why I think this, consider the following thought experiment:
Imagine you can choose between being in one of two worlds: world A and world B. In world A, you see things with much greater accuracy. However, from the inside, you feel substantially worse off; your life feels shorter, and you feel extremely weak, frail, and miserable by the end of it. In world B, you have a much worse representation of reality—you don’t actually see the true substance of reality and your ability to explain or predict phenomena is relatively terrible—yet you are in a substantially better state of mind, and many more of your preferences are satisfied. Which should you choose?
While someone who treats seeing reality accurately as a weighty terminal value might choose world A, I think this choice actually exposes the absurdity of the position.
Before we get into that, though, let’s explain the natural intuition behind choosing world A:
World A is the world where you now live, of course, and world B is some hypothetical experience machine. I think this interpretation makes sense: your model of reality in this world is substantially better, yet your quality of life, your subjective experience of time, and the end of your life are all significantly worse.
The switch
However, here’s another interpretation of the worlds I asked you to choose from: World A is some hypothetical world where all you see are the true fundamental building blocks of reality (chemicals, fundamental particles, and all that good fundamental stuff). In this world, though, you don’t see chairs, you can’t see food, and you’d have no way of knowing whether you’re about to die instantly because you’re doing something obviously stupid (like digging straight down).
World B is our world: even though your model of reality is substantially worse (you almost never see the fundamental stuff of the world, i.e., particles and the like), your wellbeing is higher because you can make decisions that advance your goals using a level of understanding that’s well-suited to your interests.
In this interpretation of the situation I initially posed, there is a clear analogy between the “real world” and the world in which you get to see fundamental reality: while you get a substantially more realistic model of the world, you are made much worse off for it.
The relevance
Let’s go back to what the anti-experience-machine-er would say about the simulated metaverse: the goods in the metaverse world are fundamentally different from those of the “real world.”
However, couldn’t those who choose to go into the true-reality world (the world in which you see all the most basic building blocks of reality) say the same thing? They could indeed, and they’d be right to! If your concern is merely that the goods are less fundamental and therefore simply worse (such that you’d be willing to sacrifice the other values you hold for the ability to see reality more clearly), yet you still think we should live in the present world (which I think you obviously should), you’re applying the rule of valuing reality > utility inconsistently.
The anti-experience-machine-er, then, must make some further claim to differentiate these two cases: one that lets them decline to live in the world where all they can see is fundamental physics while also refusing to enter the experience machine. However, the alternatives the anti-experience-machine-er could suggest are much less plausible than their initial “I value the realness of it much more” theory. Let’s go through some of those alternatives to see why (of course, I can’t cover all of them, but let me know if you think there are other good counterexamples):
Counter-explanations and their failures
Explanation #1: “Perhaps human pleasure just never becomes good enough for the experience to outweigh reality. On this view, the level of wellbeing you could achieve in the machine is simply too low relative to the value of the reality of this world, so you rationally prefer this world.”
If this were true, then in principle it would distinguish the two cases, potentially letting you choose our world in both situations.
But I think this is almost certainly false.
First, this intuition seems to rely on a well-documented cognitive bias: scope neglect. Humans are extremely bad at reasoning about very large quantities—especially large quantities far outside normal human experience, such as extreme or sustained hedonic states. There is overwhelming empirical evidence that our intuitive judgments collapse when numbers get large or unfamiliar.
So when we’re talking about unusually high amounts of pleasure—precisely the kinds of states the experience machine could generate—our brute intuitions become unreliable.
Second, there are additional reasons to distrust this explanation:
You can easily imagine a much longer life whose total hedonic value outweighs the value of “reality,” even on the fundamental-reality view. But if that’s possible, then the experience machine can simply dilate experiential time for you, generating the same enormous hedonic surplus. So this argument doesn’t actually distinguish between the two cases.
It seems suspiciously convenient that our initial intuitions about the thought experiment just so happen to perfectly match the supposed “optimal trade-off” between reality and wellbeing in our world. That symmetry looks like rationalization, not an independently motivated principle.
Even if you’re a satisficer (though there are strong arguments against satisficing), the point at which you become “satisfied” is extremely unclear, especially when you have no idea how good the experience machine could actually be. Claiming with confidence that this world already meets your satisfaction threshold, so that whatever the machine offers beyond it counts for nothing, seems naive.
Explanation #2: “I only need to understand a certain amount of reality, and after that point I stop caring about reality and care instead about other goods.”
This theory, like the others, strikes me as extremely post hoc: it seems designed simply to justify your existing intuitions about the experience machine case. Nevertheless, I’ll bite.
The objects of your experience are, in some sense, real. So before claiming that you need some threshold amount of “realness,” you should at least offer a metric for what it means for one experience to be more or less real. Once again, if whatever metric you come up with ends up saying that the world you currently live in (even within a range) is perfectly optimal, I’m gonna be pretty suspicious of the post-hoc-ness at play.
Conclusion
This is great news! It means that humans, post-labor-AGI, will not lose the value of their lives (due to losing their jobs and meaning or something) because they can live in awesome virtual utopias… yay!
Ok… fine. Maybe this doesn’t actually address all the objections to the experience machine, but I think it addresses one that is quite pertinent.
As always, tell me why I’m wrong!


Man ... I remember this one dream I had. Would love to have it again.
Do desire-satisfaction theory people think it counts if the desires are had and/or satisfied in dreams?