

I.

Elon Musk has asserted, accurately in my opinion, that unless humanity becomes a two-planet species, we are eventually doomed (absent some greater power out there which saves us, which could include either God or aliens). And he has built an entire company, SpaceX, around making sure that this happens (the two-planet part, not the doomed part). As I mentioned, I think this is an accurate view of how things will eventually work out, but it’s also incredibly costly and difficult. Is it possible that in the short term we can achieve most of the benefits of a Mars colony with significantly less money and effort? Might this be yet another 80/20 situation, where 80% of the benefits can be achieved for only 20% of the resources?

In order to answer that question, it would help to get deeper into Musk’s thinking and reasoning behind his push for a self-sustaining outpost on Mars. To quote from the man himself:

I think there are really two fundamental paths. History is going to bifurcate along two directions. One path is we stay on Earth forever, and then there will be some eventual extinction event — I don’t have an immediate doomsday prophecy … just that there will be some doomsday event. The alternative is to become a space-faring civilization and a multiplanet species.

While I agree with Musk that having a colony on Mars would prevent some doomsday scenarios, I’m not sure I agree with his implied assertion that it would prevent all of them, that choosing the alternative of becoming a space-faring civilization forever closes off the other alternative of doomsday events. To see why that might be, we need to get into a discussion of what potential doomsdays await us, or to use the more common term, what existential risks, or x-risks, we are likely to face.

If you read my round-up of the books I finished in May, one of my reviews covered Toby Ord’s book, The Precipice: Existential Risk and the Future of Humanity, which was entirely dedicated to a discussion of this very subject. For those who don’t remember, Ord produced a chart showing what he thought the relative odds were for various potential x-risks, which I’ll once again include.

Existential catastrophe via          Chance within the next 100 years
Asteroid/comet impact                ~1 in 1,000,000
Supervolcanic eruption               ~1 in 10,000
Stellar explosion                    ~1 in 1,000,000
Total natural risk                   ~1 in 10,000
Nuclear war                          ~1 in 1,000
Climate change                       ~1 in 1,000
Other environmental damage           ~1 in 1,000
Naturally arising pandemics          ~1 in 10,000
Engineered pandemics                 ~1 in 30
Unaligned artificial intelligence    ~1 in 10
Unforeseen anthropogenic risks       ~1 in 30
Other anthropogenic risks            ~1 in 50
Total anthropogenic risk             ~1 in 6
Total existential risk               ~1 in 6

Reviewing this list, which x-risks are entirely avoided by having a self-sustaining colony on Mars? The one it most clearly prevents is the asteroid/comet impact, and indeed that’s the one everyone thinks of. I assume it would also be perfect for protecting humanity from a supervolcanic eruption and a naturally arising pandemic. I’m less clear on how well it would do at protecting humanity from a stellar explosion, but I’m happy to toss that in as well. But you can instantly see the problem with this list, particularly if you read my book review. These are all naturally arising risks, and as a category they’re all far less likely (at least according to Ord) to be the cause of our extinction. What we really need to be hedging against is the category of anthropogenic risks. And it’s not at all clear that a Mars colony is the cheapest or even the best way to do that. 

The risks we’re trying to prevent are often grouped under the general heading of “having all of our eggs in one basket”. But just as we don’t want all of our eggs in the “basket” of Earth, I don’t think we want all of our risk mitigation to end up in the “basket” of a Mars colony. To relate it to my last post, this is very similar to my caution against a situation where we all make the same mistake. Only this time, rather than a bunch of independent actors all deciding to take the same ultimately catastrophic action, the consensus happens a little more formally, with massive time and effort put into one great endeavor. One of the reasons this endeavor seems safe is that it’s designed to reduce risk, but that doesn’t really matter; it could still be a mistake. A potential mistake which is aggravated by focusing on only one subset of potential x-risks, naturally occurring ones, and on this one method for dealing with them, a Mars colony. In other words, in attempting to avoid one mistake we risk making a different one: the mistake of having too narrow a focus. Surviving the next few hundred years is a hugely complicated problem (one I hope to bring greater attention to by expanding the definition and discipline of eschatology), and the mistakes we could make are legion. But, in my opinion, focusing on a Mars colony as the best and first step in preventing those mistakes turns out to be a mistake itself.

II.

At this point it’s only natural to ask what I would recommend instead. And as a matter of fact I do have a proposal:

Imagine that instead of going to Mars we built a couple of large underground bunkers, something similar to NORAD. In fact we might even be able to repurpose, or piggyback on, NORAD for one of them. Ideally the other would be built at roughly the opposite spot on the globe from the first, so maybe somewhere in Australia. Now imagine that you paid a bunch of people to live there for two years. You would of course supply them with everything they needed: entertainment, food, power, etc. As far as food and power go, you’d want as robust a supply on hand as you could manage, along with plenty of ways to guarantee necessities like air and water. But as part of the deal they would be completely cut off from everything for those two years: no internet connection, no traffic in or out, no inbound communication of any sort. Basically you make this place as self-contained and robust as possible.

When I say “a bunch of people”, you’d want as many as you could afford, but in essence you want enough people in each bunker that by themselves they could regenerate humanity if, after some unthinkable tragedy, they were all that remained. The minimum number I’ve seen is 160, with 500 seeming closer to ideal. Also, if you wanted to get fancy/clever, you could have 80% of the population be female, with lots of frozen sperm. And it should go without saying that these people should be of prime childbearing age, with a fertility test before they went in.

Every year you’d alternate which of the bunkers was emptied and refilled with new people. This staggering ensures that the two bunkers are never empty at the same time, and that any period when even one bunker sat empty would last only a week or so.
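The staggered schedule is easy to sanity-check. Here’s a minimal sketch in Python; the two-year term, one-year offset, and one-week turnover gap are the parameters described above, while the function name and week-based bookkeeping are just illustrative assumptions of mine:

```python
WEEKS_PER_YEAR = 52
TERM = 2 * WEEKS_PER_YEAR   # each crew stays two years
GAP = 1                     # a bunker sits empty ~one week during turnover

def occupied(offset_weeks, week):
    """A bunker is empty only during the turnover week that ends its cycle."""
    return (week - offset_weeks) % TERM >= GAP

# The second bunker's schedule is offset by one year, so the turnover
# weeks never coincide: at least one bunker is always occupied.
for week in range(20 * WEEKS_PER_YEAR):
    assert occupied(0, week) or occupied(WEEKS_PER_YEAR, week)
```

Run over twenty simulated years, the assertion never fires: whenever one bunker is mid-turnover, the other is a year into its cycle.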

Beyond all of the foregoing, I’m sure there are many other things one could think of to increase the robustness of these bunkers, but I think you get the idea. So now let’s turn to Ord’s list of x-risks and compare my bunker idea to Musk’s Mars plan.

All natural risks: Mars is definitely superior, but two things to note. First, even if you combine all possible natural risks together, they only have a 1 in 10,000 chance, according to Ord, of causing human extinction in the next century. I agree that you shouldn’t build a bunker just to protect against natural x-risks, but that also seems like a weak reason to go to Mars. Second, don’t underestimate the value the bunker provides even if Ord is wrong and the next giant catastrophe we have to worry about is natural. There are a whole host of disasters one could imagine where having the bunker system I described would be a huge advantage. But even if it’s not, we’re mostly worried about anthropogenic risks, and it’s when we turn to considering them that the bunker system starts to look like the superior option.

Taking each anthropogenic risk in turn:

Nuclear war- Using bunkers as protection against nuclear weapons is an idea almost as old as the weapons themselves. Having more of them, and making sure they’re constantly occupied, could only increase their protective value. Also, Ord only gives nuclear war a 1 in 1,000 chance of being the cause of our extinction, mostly because it would be so hard to completely wipe humanity out. The bunker system would make that even harder. A Mars colony doesn’t necessarily seem any better as a protection against this risk; for one thing, how does it end up escaping this hypothetical war? And if it doesn’t, it would seem to be very vulnerable to attack, at least as vulnerable as a hardened bunker and perhaps far more so given the precariousness of any Martian existence.

Climate Change- I don’t deny the reality of climate change, but I have a hard time picturing how it wipes out every last human. Most people, when pressed on this issue, say that the disruption it causes leads to nuclear war, which just takes us back to the previous item.

Environmental Damage- Similar to climate change. Also, if we’re too dumb to prevent these sorts of slow-moving extinction events on Earth, what makes you think we’ll do any better on Mars?

Engineered Pandemics- The danger of an engineered pandemic is the malevolent actor behind it; preventing this x-risk means keeping that actor from infecting everyone in such a way that we all die. Here the advantage Mars has is its great distance from Earth, meaning you’d have to figure out a way to have a simultaneous outbreak on both planets. The advantage the bunker has is that its whole function is to avoid x-risks, meaning anything that might protect from this sort of threat is not only allowed but expected. The kind of equipment necessary to synthesize a disease? Not allowed in the bunker. The kind of equipment you might MacGyver into equipment for synthesizing a disease? Also not allowed. You want the bunker to be hermetically sealed 99% of the time? Go for it. Mars, on the other hand, would have to have all sorts of equipment and tools for genetic manipulation, meaning all you would need is someone who is either willing to synthesize the disease there, or could be tricked into doing so, and suddenly the Mars advantage is gone.

Unaligned artificial intelligence- This is obviously the most difficult threat of all to protect against, since the whole idea is that we’re dealing with something unimaginably clever, but here again the bunker seems superior to Mars. Our potential AI adversary will presumably communicate at the speed of light, which means that the chief advantage of Mars, its distance, doesn’t really matter. As long as Mars is part of the wider communication network of humanity, the few extra minutes it takes the AI to interact with Mars won’t matter. With the bunker, on the other hand, I’m proposing that we allow no inbound communication, that we completely cut it off from the internet. We would allow primitive outbound communication, since we’d want them to be able to call for help, but we allow nothing in. We might even go so far as to attempt to scrub any mention of the bunkers from the internet as well. I agree that this would be difficult, but it’s easier than just about any other policy suggestion you could come up with for limiting AI risk (e.g. stopping all AI research everywhere).

It would appear that the bunker system might actually be superior to a Mars colony when it comes to preventing x-risks, and we haven’t even covered the bunker system’s greatest advantage of all: it would surely be several orders of magnitude cheaper than a Mars colony. I understand that Musk thinks he can get a Mars trip down to $200,000, but first off, I think he’s smoking crack. It is never going to be that cheap. And even if by some miracle he does get it down to that price, that’s just the cost to get there. The far more important figure is the cost to stay there. And at this point we’re still just talking about having some people live on Mars; for this colony to really be a tool for preventing doomsdays it would have to be entirely self-sufficient. The requirement is that Earth could disappear and not only would humanity continue to survive, but the colonists would have to be able to build their own rockets and colonize still further planets; otherwise we’ve just kicked the can one planet farther down the road.

III.

I spent more time laying out that idea than I had intended, but that’s okay, because it was a great exercise for illustrating the more general principle I wanted to discuss: the principle of localism. What’s localism? Well, in one sense it’s the concept that sits at the very lowest scale of the ideological continuum that includes nationalism and globalism. (You might think individualism would be the lowest -ism on that continuum, but it’s its own weird thing.) In another sense, the sense I intend to use it in, it’s the exact opposite of having all of your “eggs in one basket”. It’s the idea of placing a lot of bets, of diversifying risk, of allowing experimentation, of all the things I’ve alluded to over the last several posts, like Sweden foregoing a quarantine, or Minneapolis’ plan to replace the police, and, more generally, of ensuring we don’t all make the same mistake.

To be clear, Musk’s push for a Mars colony is an example of localism, despite how strange that phrase sounds. It keeps humanity from all making the same unrecoverable mistake of being on a single planet should that planet ever be destroyed. But what I hoped to illustrate with the bunker system is that the localism of a Mars colony is all concentrated in one attribute: distance. And that localism comes not by design, but as a byproduct. Mars is its own locality because it’s impossible for it to be otherwise.

However, imagine that we figured out a way to make the trip at 1% of the speed of light. In that case it would only take around 12 hours to get from Earth to Mars, and while that would still offer great protection against all of humanity being taken out by an asteroid or comet, it would offer less protection against pandemics than what is currently enforced by the distance between New York and China. In such a case would we forego using this technology in favor of maintaining the greater protection we get from a longer trip? No, the idea of not using this technology would be inconceivable. All of which is to say that if you’re truly worried about catastrophes and you think localism would help, then that should be your priority. We shouldn’t rely on whatever localism we get as a byproduct of other cool ideas. We should take actions whose sole goal is the creation of localism, actions which ensure our eggs have been distributed to different baskets. This intentionality is the biggest difference of all between the bunker system and a Mars colony. (Though obviously the best idea of all would be a bunker on Mars!)
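For what it’s worth, the travel time is just distance divided by speed. A quick back-of-the-envelope calculation in Python (using the commonly cited closest-approach and average Earth-Mars separations) shows the trip at 1% of light speed would range from roughly five hours to under a day, depending on where the planets are in their orbits:

```python
C_KM_S = 299_792.458   # speed of light in km/s
v = 0.01 * C_KM_S      # 1% of light speed

for label, dist_km in [("closest approach", 54.6e6), ("average distance", 225e6)]:
    hours = dist_km / v / 3600
    print(f"{label}: {hours:.1f} hours")  # ~5.1 and ~20.8 hours respectively
```

So 12 hours sits comfortably inside that range, and either way the trip drops from months to less than a day.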

In a larger sense, one of the major problems of the modern world is not merely a lack of intentional localism, but that we actually seem to be zealously pursuing the exact opposite course. Those in power mostly seem committed to making things as similar and as global as possible. It’s not enough that Minneapolis engage in radical police reform; your city is evil if it doesn’t immediately follow suit. Meanwhile the idea that Sweden would choose a different course with the quarantine was at a minimum controversial, and for many, downright horrifying.

I’m sure that I am not the first to propose a system of bunkers as a superior alternative to a Mars colony if we’re genuinely serious about x-risks, and yet the latter still gets far more attention than the former. But to a certain extent, despite the space I’ve spent on the topic, I’m actually less worried about disparities of attention at this scale. When it comes to the topic of extreme risks and their mitigation, there are a lot of smart people working on the problem and I assume that there’s a very good chance they’ll recognize the weaknesses of a Mars colony, and our eventual plans will proceed from this recognition. It’s at lower scales that I worry, because the blindness around less ambitious localism seems even more pervasive, with far fewer people, smart or otherwise, paying any sort of attention. Not only are the dangers of unifying around a single solution harder to recognize, but there’s also lots of inertia towards that unity, with most people being of the opinion that it’s unquestionably a good thing.

IV.

In closing I have a theory for why this might be. Perhaps by putting it out there I might help some people recognize what’s happening, why it’s a mistake, and maybe even encourage them towards more localism, specifically at lower scales.

You would think that the dangers of “putting all of your eggs in one basket” would be obvious, that perhaps the problem is not that people are unaware of the danger, but that they don’t realize that’s what they’re doing. And while I definitely think that’s part of it, I think there is something else going on as well.

In 1885, Andrew Carnegie, in a speech to some students, repudiated that advice. In a quote you may have heard, he flipped things around and advised instead that we should “put all your eggs in one basket, and then watch that basket.” This isn’t horrible advice, particularly in certain areas. Most people, myself very much included, would advise that you have only one husband/wife/significant other, which is essentially having all of your eggs in one basket and then putting a lot of effort into ensuring the health of that basket. Of course this course of action generally assumes that your choice of significant other was a good one, that in general, with sufficient patience, any relationship can be made to work, and that both parties accept that not everything is going to be perfect.

If we take these principles and expand on them, we could imagine, as long as we’re making a good choice up front and taking actions with some margin for error, that we should default towards all making the same good decision. Towards having all of our eggs in one basket, but being especially vigilant about that basket. So far so reasonable, but how do we ensure that the decision we’ve all settled on is a good one? For most people the answer is simple: “Isn’t that the whole point of science and progress? Figuring out what the best decisions are and then taking them?”

Indeed it is, and I’m thankful that these tools exist, but it’s entirely possible that we’re asking more from them than they’re capable of providing. My contention is that, culturally, we’ve absorbed the idea that we should always be making the best choice, and further, that because of our modern understanding of science and morality this should be easy to do. That lately we have begun to operate under the assumption that we do know what the best choice is, and accordingly we don’t need to spread out our eggs, because science and moral progress have allowed us to identify the best basket, so we can put all of our eggs in that one. But I think this is a mistake, a mistake based on the delusion that the conclusions of science and progress are both ironclad and easy to arrive at, when in fact neither of those things is true.

I think it’s easy enough to see this delusion in action in the examples already given. You hardly hear any discussion of giving the police more money, because everyone has decided the best course of action is giving them less. And already we can see the failure of this methodology in action. The only conceivable reason for putting all of your eggs in one basket is that you’re sure it’s the best basket, or at least a good one. And yet, if anything, the science on what sort of funding best minimizes violent crime points towards spending more money as the better option, and even if you disagree with that, you’d have a hard time making the case that the science unambiguously shows lower funding leading to better outcomes.

There are dozens if not hundreds of other examples, everything from the CDC’s recommendation on masks to policies on allowing transgender athletes to compete (would it be that terrible to leave this up to the states? people can move), but this post is already running a little long, so I’ll wrap it up here. I acknowledge that I’m not sure there’s as much of a through line from a colony on Mars to defunding the police as I would like, but I’ll close by modifying the saying one further time.

Only put all of your eggs in one basket if you really have no other choice, and if you do, you should not only watch that basket, but make extra sure it’s the best basket available.


My own reservations about a Mars colony aside, I would still totally want to visit Mars if I had the money. You can assist in that goal by donating. I know that doesn’t seem like it would help very much, but just you wait: if Elon Musk has his way, eventually that trip will be all but free!