One of the great truths of the world is that the future is unpredictable. This isn’t a great truth because it’s true in every instance. It’s a great truth because it’s true about great things. We can’t predict the innovations that will end up blessing (or in any event changing) the lives of millions, but even more importantly we can’t predict the catastrophes that will end up destroying the lives of millions. We can’t predict wars or famines or plagues—as was clearly demonstrated by the recent pandemic. And yet, despite the impossibility of foretelling the future, on some level we must still make the attempt.
It would be one thing if unpredicted catastrophes were always survivable. If they were tragic and terrible, but in the end civilization, and more importantly humanity, was guaranteed to continue. Obviously avoiding all tragedy and all terror would be ideal, but that would be asking too much of the world. The fact is even insisting on survivability is too much to ask of the world, because the world doesn’t care.
Recognizing both the extreme dangers facing humanity and the world’s insouciance, some have decided to make a study of these dangers, a study of extinction risks, or x-risks for short. But if these terminal catastrophes are unpredictable, what does this study entail? For many it involves the calculation of extreme probabilities—is the chance of extinction via nuclear war 1 in 1,000 over the next 100 years, or is it 1 in 500? Others choose to look for hints of danger: trends that appear to be plunging or rising in a dangerous direction, or new technology which has clear benefits but perhaps also hidden risks.
In my own efforts to understand these risks, I tend to be one of those who looks for hints, and for me the biggest hint of all is Fermi’s Paradox, the subject of my last newsletter. One of the hints provided by the paradox is that technological progress may inevitably carry with it the risk of extinction by that same technology.
Why else is the galaxy not teeming with aliens?
This is not to declare with certainty that technology inevitably destroys any intelligent species unlucky enough to develop it. But neither can we be certain that it won’t. Indeed we must consider such a possibility to be one of the stronger explanations for the paradox. The recent debate over the lab leak hypothesis should strengthen our assessment of this possibility.
If we view any and all technology as a potential source of danger then we would appear to be trapped, unless we all agree to live like the Amish. Still, one would think there must be some way of identifying dangerous technology before it has a chance to cause widespread harm, and certainly before it can cause the extinction of all humanity!
As I mentioned already, there are people studying this problem, and some have attempted to quantify this danger. For example, here’s a partial list from The Precipice: Existential Risk and the Future of Humanity by Toby Ord. The odds represent the chance of each item causing humanity’s extinction in the next 100 years.
- Nuclear war: ~1 in 1,000
- Climate change: ~1 in 1,000
- Engineered pandemics: ~1 in 30
- Out-of-control AI: ~1 in 10
You may be surprised to see nuclear war so low and AI so high, which perhaps is an illustration of the relative uncertainty of such assessments. As I said, the future is unpredictable. But such a list does provide some hope: maybe if we can just focus on a few items like these we’ll be okay? Perhaps, but I think most people (though not Ord) overlook a couple of things. First, people have a tendency to focus on these dangers in isolation, but in reality we’re dealing with them all at the same time, and probably dozens of others besides. Second, it probably won’t be the obvious dangers that get us—how many people had heard of “gain of function research” before a couple of months ago?
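To make that first point concrete, here’s a rough back-of-envelope calculation using Ord’s numbers. Treat it as a sketch of the arithmetic only: it naively assumes the risks are independent (which they certainly aren’t) and ignores everything not on the list.

```python
# Back-of-envelope: chance that at least one of the listed risks
# materializes over the next 100 years, naively assuming independence.
odds = {
    "nuclear war": 1 / 1000,
    "climate change": 1 / 1000,
    "engineered pandemics": 1 / 30,
    "out-of-control AI": 1 / 10,
}

p_none = 1.0
for p in odds.values():
    p_none *= 1 - p          # probability this particular risk does NOT occur

p_any = 1 - p_none           # probability at least one of them occurs
print(f"Chance at least one occurs: {p_any:.1%} (roughly 1 in {1 / p_any:.0f})")
# -> about 13%, or roughly 1 in 8: noticeably worse than any single item,
#    and that's before the dozens of less obvious dangers.
```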
What should we make of the hint given us by Fermi’s Paradox? How should we evaluate and prepare ourselves against the potential risks of technology? What technologies will end up being dangerous? And what technologies will have the power to save us? Obviously these are hard questions, but I believe there are steps we can take to lessen the fragility of humanity. Steps which we’ll start discussing next month…
If the future is unpredictable, how do I know that I’ll actually need your donation? I don’t, but money is one of those things that reduce fragility, which is to say it’s likely to be useful whatever the future holds. If you’d like to help me, or indeed all of humanity, prepare for the future, consider donating.
Perhaps this can be addressed by taking the opposite position: evaluate a proposal for modified technological stasis. I’ll call it Neo-Amish. Let’s say we adopt the following:
– Technological advancement is mostly stopped at roughly its current level.
– Rather than innovation, the emphasis will be on diffusion: providing everyone with roughly a middle-class American quality of life, sustainably.
This is hardly implausible IMO. Between conservation and the potential of simply tapping geothermal energy, it isn’t hard at all.
– Limited zones of ‘innovation’ will continue to receive modest investment.
Say about $100B-$200B a year into mapping nearby space. Barring anything really crazy, like an asteroid aimed right at us and accelerated to 99% of c, this would probably provide centuries of lead time for any extinction-level impacts. Likewise nearby stars could be analyzed deeply for anything like potential gamma-ray bursts, supernovae, etc. If any such dangers are detected, resources could be directed to either prevention (we prep Bruce Willis for his Armageddon mission) or mitigation (we do a 50-year stint or so in underground cities till the radiation from the nova passes).
We have funding for medical innovation as well; however, it is limited to healthspan rather than lifespan research and concentrated on compounds and biologicals, with gene therapy limited to CRISPR-like modification of existing cells rather than germline modification. This is accompanied by a cultural acceptance of death as a normal part of life rather than a problem to be solved.
Would you feel this would more or less preclude human extinction? Granted, this doesn’t solve the sun eventually turning red and swallowing the earth, but that’s a few billion years away, and the $200B/yr space budget over that extended time could probably get us to the point where we could gently move the earth out to more distant orbits, gaining us a few billion more years.
In terms of Fermi, what is interesting about this is: if you don’t think civilization should hit the ‘Neo-Amish’ button today, what’s to stop it from deciding to hit it tomorrow? In other words, technological innovation may simply go on pause, or grow very slowly for very long periods of time, less because of any catastrophe and more because civilizations lose interest in the next iPhone.
I think you may be too optimistic about geothermal. I saw a pretty critical response to the Eli Dourado article, though after spending 30 minutes looking I can’t seem to find it. But as a general rule of thumb, the idea that technology ends up being harder to implement, with fewer benefits and more costs than expected, seems pretty solid. And recall this is infrastructure we’re talking about, which is a generalized American weakness.
As for the rest of your idea, it sounds nice, but it seems like the kind of thing every nation would have to agree to. Do China and Russia give up research on hypersonic missiles, or next-gen drone armies? Probably not. We could make an exemption for military tech, but given the interconnectedness of technology, a broad-based ban would probably slow down any specific area anyway.
Well, I’m talking about time scales here where 100,000 years would be considered a tactical horizon rather than a strategic one. There is no question a huge amount of heat can be tapped from the earth with determined application. There’s likewise not much question you can grab huge amounts of renewable energy from projects like covering 10% of the Sahara with solar panels. I know you like nuclear, but is uranium available on a billion-year timescale, or is it more like oil? Regardless, we can toss it into our ‘Neo-Amish’ option as it is existing tech.
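For what it’s worth, the Sahara figure holds up as an order-of-magnitude claim. The inputs below are all rough, round-number assumptions (desert area, average insolation, panel efficiency), so treat the output as a sanity check rather than an estimate.

```python
# Order-of-magnitude check on "cover 10% of the Sahara with solar panels".
# Every input is a rough, round-number assumption.
sahara_area_m2 = 9.2e12      # ~9.2 million km^2 of desert
covered_fraction = 0.10      # panels on 10% of it
avg_insolation_w_m2 = 250    # rough year-round surface average
panel_efficiency = 0.20      # typical commercial panels

output_w = sahara_area_m2 * covered_fraction * avg_insolation_w_m2 * panel_efficiency
world_primary_energy_w = 19e12   # current world consumption, very roughly ~19 TW

print(f"~{output_w / 1e12:.0f} TW, about {output_w / world_primary_energy_w:.1f}x current world demand")
# -> roughly 46 TW, a few times today's total primary energy use
#    (before transmission, storage, and dust losses, which are not small)
```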
I’m not sure military tech has had a huge innovation factor going on. ICBMs are hypersonic missiles, and we’ve had them since 1957. What I think they are talking about now is putting some flaps on a warhead so the target can be shifted a few dozen miles or so, making it hard to know exactly what the missile’s target is. Russia likes to talk a lot about atomic-powered drones that can fly tens of thousands of miles at hypersonic speeds… but Russia’s been making enough vaporware claims to make Elon blush.
This is just a hypothetical, but if there was near-universal agreement that the ‘good life’ would be sufficient energy and resources to give everyone a slightly upper-middle-class American quality of life (perhaps modified a bit away from self-harming overconsumption), and this was achieved using present tech plus some extrapolations from it, you might be taking away a lot of the motivation for large-scale war. The US and China may be willing to go to war to keep Taiwan free, but would there be an ideological struggle? I’m not sure.
You may say the world already tripped the ‘Neo-Amish’ option when it comes to the military. After WWII there was a lot of nuclear innovation. The US scaled up to hydrogen bombs. There was also the expectation that a host of nuclear weapons would be made available at the smaller end of the scale: smaller nuclear bombs that could be used by infantry troops. Yet all of that more or less seemed to end.
Consider a movie like The Hunt for Red October. It’s a bit over 30 years old today and when it was released in 1990 it was about a fictional incident that happened nearly a decade before. Here’s the thing…
If the movie was brand new today, nothing about nuclear-armed subs would strike an audience as outdated. Nothing in the movie struck 1990s viewers as being from the ’80s. If I could show the movie to people in the mid-’70s, maybe they might pick up on the computer screens being futuristic. How many other movies show no sign of age, or of being ahead of their time, over nearly a half-century timescale? Perhaps Neo-Amish has already arrived and no one noticed…?
Anyway, what I’m getting at here is:
1. Why would you recommend against Neo-Amish as a survival strategy? At least to cover a few hundred million years or so?
2. If it is possible to fall into Neo-Amish simply because needs and wants are satiated, this would be a less malign explanation for the Fermi paradox. Less of a great filter and more of a comfortable sofa.
1. I’m not recommending against it. I think it would be great if we could pull it off, but it seems unlikely to be the kind of thing people would choose to do. I can imagine being forced into it via decadence, but forced Neo-Amishness is very different from consensual Neo-Amishness.
2. Which is basically the answer to this question as well: I don’t think we’re going to fall into it. I don’t think it’s a stable equilibrium.