A longtermist case for advancing technological progress
And why it’s controversial
There’s a very intuitive case for advancing society’s technological progress: progress has historically been largely good for society,1 which gives us evidence that this trend will continue into the future. Therefore, making more technological progress is good.
However, as Toby Ord argues in his article On the Value of Advancing Progress,2 once we start shedding our social discount rates,3 the case for speeding up becomes much murkier.4 Whether we should advance technological progress or not, he argues, crucially depends on weird considerations like when and how morally important beings eventually go extinct.
This is really unintuitive, but Ord’s argument is powerful.
Why progress depends on how moral beings go extinct
Ord makes this case more thoroughly and responds to many objections in his piece here.
Ord distinguishes between two ways humans5 could go extinct: for endogenous or exogenous reasons.6 We can sort possible worlds by which of these reasons occurs:
By an exogenous world,7 Ord means a world where the time humanity comes to an end is a fixed calendar date, one that would be unchanged whether or not we advanced progress (e.g. the heat death of the universe).
By an endogenous world, Ord means a world where the time humanity comes to an end depends on how fast we make technological progress (e.g. humanity eventually creates some technology that kills it).
Ord argues that in an exogenous world we should push technological progress as fast as possible, in order to realize as much good as we can — remaining largely agnostic about what The Good™ consists in — before humanity eventually ceases to exist.
He continues: if we are in an endogenous world, we’d want to slow technological progress, so we can extend the period of time where we could/do experience good stuff before humanity ceases to exist.8
Crucially, however, this model shows us something important: whether we should speed up, slow down, or maintain the current pace of technological advancement largely depends on whether we think the world has an exogenous or endogenous end.
About halfway through the article, Ord goes on to speculate about why this point — despite being important for basic economic theories — is often missed, which is worth checking out.
Despite all of this, I think that we should assume technological progress is likely worth accelerating.
Why we should still bet on technological progress
There are important ingredients missing from Ord’s framework, which he admits.9 While he largely just tallies up the possible worlds relevant to whether we should advance technological progress, we also need to assess their relative probabilities and how much value each would hold, conditional on our being in it. That being said, we can actually learn a great deal about a world’s value if we assume that it is endogenous:
If we went extinct in an endogenous world, this should significantly update us toward thinking that our world was less prepared, less robust, and therefore probably less valuable than default exogenous worlds. More specifically:
We should increase our credence that we did not spread very far and/or that power was more concentrated, both of which are highly correlated with worlds that have much less value.
It is far more difficult to destroy a civilization that is widely distributed across space and whose survival capacity is not concentrated in a small set of agents. Spreading very far, however, is (on many views) necessary for achieving lots of value (for instance, filling the universe with whatever is valuable, collecting resources to put toward whatever is valuable, etc.), which makes more fragile societies far less valuable in expectation.
Many of the catastrophic risks we face (nuclear war, climate change, bioterrorism, and other anthropogenic risks) largely rely on humanity being Earth-bound and centralized, suggesting that endogenous endings are correlated with thinner, more fragile (and probably less valuable) worlds.
We should think that these worlds had weaker epistemic and moral governance, which is negatively correlated with the ability to reflect and make good decisions (including moral ones) based on evidence.
As Carl Sagan noted, existential risk depends on two opposing forces: humanity’s growing capacity to reshape the world (with technological progress, for instance), which raises the danger, and wiser governance, which reduces it.
The fact that we went extinct under this model gives us lots of evidence that the world was less epistemically sophisticated. Under many views, though, this kind of sophistication is strongly correlated with (perhaps even necessary for) lots of moral value.
Being in an endogenous world tells us that the world likely ended earlier, which could be strongly correlated with less value.
Endogenous worlds end earlier than exogenous worlds because the endogenous ending arrives before the exogenous one would have (assuming there would otherwise have been an ending date).
Earlier endings leave less time for population growth, moral reflection, and the stabilization of high-stakes and fragile values.
Relatedly, earlier endings mean more astronomical waste, which can be hugely important (especially if we think the majority of the value comes at the very end because value grows a lot every, say, year).10
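To make that last point concrete, here is a minimal sketch of why, under compounding growth, most of the total value accrues near the end. The 2% growth rate and the 1,000- versus 900-year horizons are made-up numbers chosen purely for illustration, not anything from Ord’s model:

```python
# Toy illustration (made-up numbers): if annual value compounds, most of the
# total accrues near the end, so an ending that is only modestly earlier
# forfeits a large share of the value.

def total_value(years, growth=0.02, initial=1.0):
    """Sum of annual value that grows by `growth` each year."""
    return sum(initial * (1 + growth) ** t for t in range(years))

full = total_value(1_000)   # a hypothetical exogenous ending date
early = total_value(900)    # an endogenous ending only 10% earlier

print(f"Share of value lost by ending 10% early: {1 - early / full:.0%}")
# With 2% annual growth, cutting off the last 10% of the timeline
# removes roughly 86% of the total value.
```

The exact share obviously depends on the growth rate and horizon you assume; the point is only the direction of the effect.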
All of these pieces of evidence point towards exogenous worlds — where we go extinct from things that are out of our control — having significantly more value than endogenous ones. If that’s right, then even under deep uncertainty about whether we’re in an endogenous or exogenous world, the correct bet is still on technological progress. Why?
Well,
Suppose endogenous and exogenous worlds are equally likely. Even so, if most of the total value is concentrated in exogenous worlds (and, as we’ve just seen, there are good arguments for this), then, assuming our relative leverage is the same in each, we should optimize for performance in those worlds. As we’ve stated before, what most improves outcomes in exogenous worlds is accelerating technological progress.
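Here is a minimal expected-value sketch of that reasoning. Every number below (the 50/50 odds, the 100-to-10 value ratio, and the 20% leverage in each direction) is an assumption invented for illustration, not something from Ord’s article:

```python
# Toy expected-value comparison with made-up numbers: equal credence in each
# kind of world, but value concentrated in exogenous worlds.

p_exo, p_endo = 0.5, 0.5      # equal credence in each kind of world
v_exo, v_endo = 100.0, 10.0   # hypothetical value of each world going well

# Assume speeding up progress helps proportionally in exogenous worlds
# (more good realized before the fixed end date) and hurts proportionally
# in endogenous worlds (the self-inflicted ending arrives sooner).
gain_if_exo = 0.20            # +20% of exogenous value
loss_if_endo = 0.20           # -20% of endogenous value

ev_change = p_exo * gain_if_exo * v_exo - p_endo * loss_if_endo * v_endo
print(f"Expected change in value from speeding up: {ev_change:+.1f}")
# Comes out positive (+9.0): the same proportional leverage is worth far
# more in the much more valuable exogenous worlds.
```

If the leverage were asymmetric, or the value gap smaller, the sign could flip, which is exactly what the objections later in the piece explore.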
There are also additional considerations that complicate Ord’s clean endogenous/exogenous picture—and they further strengthen the case for technological progress:
As Ord mentions, technological progress is not monolithic; while some technological progress will likely increase our chances of existential risk, lots of progress is neutral toward, or can even reduce, the chances of existential risk. This creates a third type of world that Ord only briefly mentions: worlds where we advance technological progress and that progress helps us decrease the possibility of existential risks. If you can distinguish between these kinds of progress with some degree of certainty, then it could be worth speeding up progress in some areas.
Some examples of technology reducing x-risk include improving our ability to fight climate change and deadly pandemics, increasing the likelihood that we spread out across space, and governance/reasoning technology (e.g. better forecasting techniques).
This is true if (1) some technologies are more obviously potential end-ers (or non-end-ers) than others, so that you can tell them apart, and (2) you can move technology forward in some of these areas while holding off (or at least slowing down) in others.
Philip Trammell and Leopold Aschenbrenner make an interesting argument that technological progress may decrease the total chance of existential risk. This happens in two ways:
The first is that increasing the pace of technological progress decreases the time spent at each technology level, and it is during the time spent at a given (risky) technology level that an existential catastrophe is likely to occur.
Secondly, given that technological progress increases our prosperity (which, as stated, history seems to suggest), we may expect our willingness to pay for existential security to increase greatly. If willingness to pay increases faster than the cost of reducing the chance of some existential risk, then it makes sense to hasten progress.
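A minimal sketch of the first mechanism, with invented hazard rates and durations (none of these numbers come from Trammell and Aschenbrenner’s paper; they only illustrate the structure of the argument):

```python
# Toy model of the first mechanism: if each technology level carries some
# per-year chance of catastrophe, passing through levels faster means less
# total time exposed. All numbers here are invented for illustration.

def survival_probability(years_per_level, hazard_per_year=0.001, levels=10):
    """Chance of surviving every level, with a constant per-year hazard."""
    years_exposed = years_per_level * levels
    return (1 - hazard_per_year) ** years_exposed

slow = survival_probability(years_per_level=50)  # slower progress
fast = survival_probability(years_per_level=25)  # progress twice as fast

print(f"Survival odds with slow progress: {slow:.0%}")
print(f"Survival odds with fast progress: {fast:.0%}")
# ~61% vs ~78%: if risk accrues per year spent at a level rather than per
# level reached, halving the time at each level halves cumulative exposure.
```

This sketch assumes the hazard at a given level doesn’t itself grow when you rush through it, which connects to the objection below about speed and sloppiness.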
All of these points strengthen the case for technological progress, at least under some conditions. However, this case is not one without critiques.
Ways this could be wrong
A few times throughout this piece, I’ve qualified my claims by saying that all else is equal. Yet, as the world typically goes, all else is not actually equal, so let’s walk through some of the conditions under which my claim can go wrong.
The probability of dying in endogenous worlds might be much greater.
There are, definitionally, more points in time at which an endogenous ending can occur, because endogenous endings come earlier than exogenous ones, as we explained above. That might mean that there are drastically more endogenous worlds than exogenous ones.
If endogenous worlds are, say, 15x more probable, then even if exogenous worlds have, say, 10x more value, most of the value mass that we have leverage over exists in endogenous worlds, meaning that we should slow down progress (see the sketch after this list).
Most obviously, these are clearly not decisions made in a vacuum: speeding up technological progress makes you much more likely to die in an endogenous world.
Usually doing something fast is correlated with less reflection and more sloppiness.
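Here is that arithmetic spelled out, with the 15x and 10x figures from above treated purely as illustrative assumptions:

```python
# Toy version of the objection: endogenous worlds 15x as probable,
# exogenous worlds 10x as valuable. These ratios are illustrative only.

p_endo, p_exo = 15.0, 1.0   # relative probabilities (unnormalized)
v_endo, v_exo = 1.0, 10.0   # relative values

total = p_endo * v_endo + p_exo * v_exo
print(f"Share of expected value in endogenous worlds: {p_endo * v_endo / total:.0%}")
print(f"Share of expected value in exogenous worlds:  {p_exo * v_exo / total:.0%}")
# 60% vs 40%: under these assumptions, most of the expected value we can
# influence sits in endogenous worlds, which would favor slowing down.
```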
Contra the claim that we can meaningfully differentiate between technologies that are more or less likely to contribute to existential risk, one can argue that such distinctions are in practice extremely difficult to draw because of pervasive spillover effects.
Scientific and technological advances in one domain frequently enable progress in many others in ways that are unintuitive and difficult to predict. As a result, developments that initially appear benign or even risk-reducing may later facilitate the creation of high-risk capabilities.
For instance, early nuclear physicists could have known that atomic research might someday produce large explosives, but they only found out later that it would culminate in weapons capable of destroying entire cities and permanently transforming the risk profile of civilization.
Taken together, these considerations suggest that the growth–existential risk trade-off may depend sensitively on one’s prior and empirical assumptions about the relative probabilities of endogenous and exogenous worlds, the institutional effects of acceleration, and the pervasiveness of technological spillovers.
Conclusion
Overall, what we should actually do will likely depend on modelling assumptions about which of these factors matter most and by how much: how much more valuable are exogenous worlds, how much can you advance areas that aren’t correlated with existential risk without spillover effects, how much greater is the probability of an endogenous ending if you speed up progress, and more.
Still, my intuition (perhaps naively) leans towards something that looks like the normal picture: unless a particular line of technological progress has clear ways of increasing existential risk (in those cases, examine the risks and benefits further), we should probably just continue to accelerate technological progress.
A particular definition of society is doing lots of the work in this argument, though. If you give animals some degree of moral importance, technology might have been extremely net negative throughout history due to factory farming.
Here’s the audio version.
This is just to say that we don’t give less weight to those beings that exist in the future merely because they exist in the future. Here are some arguments for this position.
While this assumption could be rejected, it should at least be argued for. Proponents who believe in the value of advancing progress on these grounds should say something like: “I believe that future people count less, so even if progress is net-negative for future people (because it raises the chances of existential risk more than the benefits they get), it is worth it because they don’t matter/matter less.” While this point could be made, it is (1) not argued for and (2) seems less compelling than the initial obvious case for progress.
This is not to say Ord doesn’t think the case for progress is good—indeed, throughout the post he stays largely agnostic about this question. Instead, his point is that the standard case for progress rests on assumptions that usually go unargued.
This isn’t only true for humans, of course – it could be any morally important agent that will exist.
He does not claim that these are jointly exhaustive.
“Worlds” is not his precise phrasing, but I think it is helpful to talk about it like this.
Maybe what we want (ideally) is to find the spot right before technology is going to kill us and stay there for a while, but the broader picture looks the same.
Though this was purposeful: he explicitly stated that he wasn’t going to talk about the probabilities or values associated with either kind of world. He was merely (in my view, very astutely) pointing out that this commonly held claim is argued on the basis of premises that often go unargued for.
This really depends on if we are still expanding, at what rate, and how long we’ve been expanding for.

