Discussion about this post

Hugh Hawkins

You should have covered things like factory farming and wild animal suffering more.

So much of the current world is tiled with suffering, and an unaligned AGI would have no reason to keep factory farming or wild nature (if it needs meat for some reason, surely growing it without the brain is more efficient, and giant forests sucking up sunlight and space seem bad for most plausible goals).

You should also have discussed s-risks more: what would be more likely to increase s-risk, an unaligned AGI or a human-controlled AGI? Certainly there are a lot of sadistic humans. (See: https://www.lesswrong.com/posts/CtXaFo3hikGMWW4C9/the-case-against-ai-alignment)

My personal view is that both human-aligned AGI and unaligned AGI are risky gambles with uncertain payoffs. Are the AGIs conscious? If so, do they really have experiences analogous to pleasure and pain? What sort of people end up controlling an aligned AGI? What if an unaligned AGI keeps around a bunch of genetically modified cockroaches to scrub for contamination and accidentally causes immense suffering because the cockroaches are minimally conscious and mostly in pain? The possibility space is incredibly vast.

Ax Ganto

Very interesting take that preemptively stops critics in their tracks! I agree with the conclusion. However, I do disagree that there is no upper bound on happiness, even with extended time. The other "irrational" blog (are you related?) has a nice article about it: https://www.optimallyirrational.com/p/the-aim-of-maximising-happiness-is

The way I phrased it in my article is that happiness is a positive deviation from expectation. At some point, when total control of and knowledge about the universe are optimized, the possibility of positive qualia may be extinguished (nothing is unexpected anymore). This endpoint could be reached in several ways, and I still think utilitarianism gives us no reason to prefer one of these ways over another.

