One of the great truths of the world is that the future is unpredictable. This isn’t a great truth because it’s true in every instance. It’s a great truth because it’s true about great things. We can’t predict the innovations that will end up blessing (or in any event changing) the lives of millions, but even more importantly we can’t predict the catastrophes that will end up destroying the lives of millions. We can’t predict wars or famines or plagues, as was clearly demonstrated by the recent pandemic. And yet, despite the impossibility of foretelling the future, on some level we must still make the attempt.
It would be one thing if unpredicted catastrophes were always survivable. If they were tragic and terrible, but in the end civilization, and more importantly humanity, were guaranteed to continue. Obviously avoiding all tragedy and all terror would be ideal, but that would be asking too much of the world. The fact is, even insisting on survivability is too much to ask, because the world doesn’t care.
Recognizing both the extreme dangers facing humanity and the world’s insouciance, some have decided to make a study of these dangers, a study of existential risks, or x-risks for short. But if these terminal catastrophes are unpredictable, what does this study entail? For many it involves the calculation of extreme probabilities: is the chance of extinction via nuclear war 1 in 1,000 over the next 100 years, or is it 1 in 500? Others choose to look for hints of danger: trends plunging or rising in a dangerous direction, or new technologies with clear benefits but perhaps also hidden risks.
In my own efforts to understand these risks, I tend to be one of those who looks for hints, and for me the biggest hint of all is Fermi’s Paradox, the subject of my last newsletter. One of the hints provided by the paradox is that technological progress may inevitably carry with it the risk of extinction by that same technology.
Why else is the galaxy not teeming with aliens?
This is not to declare with certainty that technology inevitably destroys any intelligent species unlucky enough to develop it. But neither can we be certain that it won’t. Indeed we must consider such a possibility to be one of the stronger explanations for the paradox. The recent debate over the lab leak hypothesis should strengthen our assessment of this possibility.
If we view any and all technology as a potential source of danger then we would appear to be trapped, unless we all agree to live like the Amish. Still, one would think there must be some way of identifying dangerous technology before it has a chance to cause widespread harm, and certainly before it can cause the extinction of all humanity!
As I mentioned already, there are people studying this problem, and some have attempted to quantify the danger. For example, here’s a partial list from The Precipice: Existential Risk and the Future of Humanity by Toby Ord. The odds represent the chance of each item causing humanity’s extinction in the next 100 years.
- Nuclear war: ~1 in 1,000
- Climate change: ~1 in 1,000
- Engineered pandemics: ~1 in 30
- Out-of-control AI: ~1 in 10
You may be surprised to see nuclear war so low and AI so high, which perhaps illustrates the relative uncertainty of such assessments. As I said, the future is unpredictable. But such a list does provide some hope: maybe if we just focus on a few items like these we’ll be okay? Perhaps, but I think most people (though not Ord) overlook a couple of things. First, people have a tendency to focus on these dangers in isolation, but in reality we’re dealing with them all at the same time, and probably with dozens of others besides. Second, it probably won’t be the obvious dangers that get us. How many people had heard of “gain-of-function research” before a couple of months ago?
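To see why facing all of these dangers at once matters, it helps to combine the odds. A minimal back-of-the-envelope sketch, using the four figures listed above and assuming (unrealistically, but it keeps the arithmetic honest) that the risks are independent:

```python
# Combine the per-risk odds from Ord's list above, under the simplifying
# -- and debatable -- assumption that the risks are independent.
from math import prod

# The four risks listed above, as probabilities over the next 100 years.
risks = {
    "nuclear war": 1 / 1000,
    "climate change": 1 / 1000,
    "engineered pandemics": 1 / 30,
    "out-of-control AI": 1 / 10,
}

# P(at least one catastrophe) = 1 - P(none of them occurs)
combined = 1 - prod(1 - p for p in risks.values())
print(f"combined risk: {combined:.1%}")  # roughly 13%, higher than any single risk
```

Even with just four items, the combined figure exceeds the largest individual one, and every additional risk added to the list (including ones nobody has thought to name yet) pushes it higher.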
What should we make of the hint given us by Fermi’s Paradox? How should we evaluate and prepare ourselves against the potential risks of technology? What technologies will end up being dangerous? And what technologies will have the power to save us? Obviously these are hard questions, but I believe there are steps we can take to lessen the fragility of humanity, steps we’ll start discussing next month…
If the future is unpredictable, how do I know that I’ll actually need your donation? I don’t, but money is one of those things that reduce fragility, which is to say it’s likely to be useful whatever the future holds. If you’d like to help me, or indeed all of humanity, prepare for the future, consider donating.