by Michael Aird
This post was written for Convergence Analysis. It highlights and analyses existing ideas rather than proposing new ones.
In The Precipice, Toby Ord writes:
An existential catastrophe is the destruction of humanity’s longterm potential.
An existential risk is a risk that threatens the destruction of humanity’s longterm potential.
I’ve previously discussed some distinctions and nuances relevant to these concepts. This post will focus on:
- The idea that these concepts are really about the destruction of the potential of humanity or its “descendants”; they’re not necessarily solely about human wellbeing, nor just Homo sapiens’ potential.
- The implications of that, including for how “bad” an existential catastrophe might be
The potential of humanity and its “descendants”
When explaining his definitions, Ord writes:
my focus on humanity in the definitions is not supposed to exclude considerations of the value of the environment, other animals, successors to Homo sapiens, or creatures elsewhere in the cosmos. It is not that I think only humans count. Instead, it is that humans are the only beings we know of that are responsive to moral reasons and moral argument – the beings who can examine the world and decide to do what is best. If we fail, that upwards force, that capacity to push towards what is best or what is just, will vanish from the world.
Our potential is a matter of what humanity can achieve through the combined actions of each and every human. The value of our actions will stem in part from what we do to and for humans, but it will depend on the effects of our actions on non-humans too. If we somehow give rise to new kinds of moral agents in the future, the term ‘humanity’ in my definition should be taken to include them.
This makes two points clear:
- An existential catastrophe is not solely about the destruction of the potential for human welfare, flourishing, achievement, etc. Instead, it’s about humanity’s potential to bring about or protect whatever turns out to be of value. This may include, among many other things, the welfare of other beings.
- More specifically, it’s about the potential of humanity or its “descendants”, not just “Homo sapiens”, to bring about or protect whatever turns out to be of value. (Ord uses the phrase “humanity (or our descendants)” on page 382, and a similar phrase on page 395.)
In line with that second point, Bostrom’s (2012) definition is:
An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (emphasis added)
And Bostrom also writes:
Above, we defined “humanity” as Earth-originating intelligent life rather than as the particular biologically defined species Homo sapiens. The reason for focusing the notion of existential risk on this broader concept is that there is no reason to suppose that the biological species concept tracks what we have reason to value. If our species were to evolve, or use technology to self-modify, to such an extent that it no longer satisfied the biological criteria for species identity (such as interbreedability) with contemporary Homo sapiens, this need not be in any sense a catastrophe.
If we wished to more explicitly capture the above two points in our definitions, we could expand Ord’s definitions to:
An existential catastrophe is the destruction of the long-term potential humanity (or its “descendants”) has to cause morally valuable outcomes.
An existential risk is a risk that threatens the destruction of the long-term potential humanity (or its “descendants”) has to cause morally valuable outcomes. (Unimportantly, I’ve also added a hyphen in “long-term” in these definitions; see footnote 2 here.)
Personally, I’d also be inclined to say “the vast majority of the long-term potential”; see here.
However, these tweaks would also make the definitions longer, perhaps “weirder” sounding, and perhaps less emotionally resonant. So I’m not suggesting that they’re all-things-considered improvements to Ord’s definitions.
Here’s another option that might avoid those issues, while capturing the two points noted above:
An existential catastrophe is the destruction of the long-term potential for value in the universe.
An existential risk is a risk that threatens the destruction of the long-term potential for value in the universe.
But I don’t think those definitions would quite work, for reasons explained in the following section.
What about moral agents other than humanity or its “descendants”?
As noted, Ord focuses on humanity in his definitions because “humans are the only beings we know of that are responsive to moral reasons and moral argument” (emphasis added). And he adds “If we somehow give rise to new kinds of moral agents in the future, the term ‘humanity’ in my definition should be taken to include them.”
But it seems plausible that there are now, or will be in future, other “moral agents” which developed or will develop independently of us. I see three ways this could occur.
Firstly, as Bostrom (2009) notes:
It is possible that if humanity goes extinct, another intelligent species might evolve on Earth to fill the vacancy. The fate of such a possible future substitute species, however, would not strictly be part of the future of humanity.
Secondly, it seems possible that roughly the same thing could occur if humanity doesn’t go extinct but for some reason fully departs the Earth, despite it remaining habitable. (Incidentally, these possibilities of “Earth-originating intelligent life” arising independently of humanity are why I use the phrase “humanity or its ‘descendants’” when discussing existential risks, instead of Bostrom’s “Earth-originating intelligent life”.)
Thirdly, it seems possible that there currently is, or in future will be, extraterrestrial intelligent life that would qualify as “moral agents” (see Dissolving the Fermi Paradox for discussion).
Thus, it seems possible – though of course highly speculative – that value (or disvalue) in the universe could be created without humanity or its descendants. So if we defined an existential catastrophe as “the destruction of the vast majority of the long-term potential for value in the universe”, that would have the strange implications that:
- The destruction of the potential of humanity or its descendants wouldn’t necessarily count as an existential catastrophe.
- Whether that counts as an existential catastrophe would depend in part on the likelihood that there are, or will later be, “moral agents” unrelated to humanity, and on what such agents would do with “our part” of the universe.
So I favour sticking with Ord’s definitions, and just being aware that they mean something like “An existential catastrophe is the destruction of the long-term potential humanity (or its “descendants”) has to cause morally valuable outcomes.”
But this also has strange implications mirroring the above:
- At least in theory, an “existential catastrophe” might not be overwhelmingly bad, or perhaps not bad at all.
- At least in theory, how bad an “existential catastrophe” would be depends in part on the likelihood that there are, or will later be, “moral agents” unrelated to humanity, and on what such agents would do with “our part” of the universe.
- E.g., it might be less bad than we’d naively think, if such agents are likely to exist and to create value using some of the resources we would otherwise have used.
- Or it might be even worse than we’d naively think, if such agents are likely to exist and do net-negative things with those resources if we’re not around.
But those strange implications seem worth accepting, partly for consistency with established definitions, and partly because the alternative definitions would have the strange implications noted earlier. And in any case, in practice, it doesn’t seem like the possibility of “moral agents” unrelated to humanity should substantially affect how strongly we value reducing existential risks. This is for reasons related to:
- the probability of such other moral agents
- uncertainty about whether their actions would be of positive or negative moral value in any case
- the seemingly far greater decision-relevance of other considerations.
For interesting discussion of related points, see The expected value of extinction risk reduction is positive.
Summary
- An existential catastrophe can be thought of as the destruction of the long-term potential humanity (or its “descendants”) has to cause morally valuable outcomes.
- It’s not necessarily just about human wellbeing, or just about Homo sapiens’ potential.
- An existential catastrophe is also not necessarily the destruction of the long-term potential for value in the universe.
- This is due to the speculative possibility of current or future “moral agents” unrelated to humanity, which could create value (or disvalue) in our absence.
- But it doesn’t seem like this possibility should play a key role in our decision-making in practice.
Plausibly, reducing existential risks should be a top priority of our time. One way to improve our existential risk reduction efforts is to clarify and sharpen our thinking, discussion, and research, and one way to do that is to clarify and sharpen our key concepts. I hope this post has helped on that front.
In an upcoming post, I’ll discuss another question related to the concept of existential risks: What if humanity maintains its potential, but doesn’t use it well?