I.
Elon Musk has asserted, accurately in my opinion, that unless humanity becomes a two-planet species, we are eventually doomed (absent some greater power out there which saves us, which could include either God or aliens). And he has built an entire company, SpaceX, around making sure that this happens (the two-planet part, not the doomed part). As I mentioned, I think this is an accurate view of how things will eventually work out, but it's also incredibly costly and difficult. Is it possible that in the short term we can achieve most of the benefits of a Mars colony with significantly less money and effort? Might this be yet another 80/20 situation, where 80% of the benefits can be achieved for only 20% of the resources?
In order to answer that question, it would help to get deeper into Musk’s thinking and reasoning behind his push for a self-sustaining outpost on Mars. To quote from the man himself:
I think there are really two fundamental paths. History is going to bifurcate along two directions. One path is we stay on Earth forever, and then there will be some eventual extinction event — I don’t have an immediate doomsday prophecy … just that there will be some doomsday event. The alternative is to become a space-faring civilization and a multiplanet species.
While I agree with Musk that having a colony on Mars will prevent some doomsday scenarios, I'm not sure I agree with his implied assertion that it will prevent all of them, that choosing the alternative of becoming a space-faring civilization forever closes off the other alternative of doomsday events. To see why that might be, we need to get into a discussion of what potential doomsdays await us, or, to use the more common term, what existential risks, or x-risks, are we likely to face?
If you read my round-up of the books I finished in May, one of my reviews covered Toby Ord's book, The Precipice: Existential Risk and the Future of Humanity, which was entirely dedicated to a discussion of this very subject. For those who don't remember, Ord produced a chart showing what he thought the relative odds were for various potential x-risks, which I'll once again include.
| Existential catastrophe via | Chance within the next 100 years |
|---|---|
| Asteroid/comet impact | ~1 in 1,000,000 |
| Supervolcanic eruption | ~1 in 10,000 |
| Stellar explosion | ~1 in 1,000,000 |
| Total natural risk | ~1 in 10,000 |
| Nuclear war | ~1 in 1,000 |
| Climate change | ~1 in 1,000 |
| Other environmental damage | ~1 in 1,000 |
| Naturally arising pandemics | ~1 in 10,000 |
| Engineered pandemics | ~1 in 30 |
| Unaligned artificial intelligence | ~1 in 10 |
| Unforeseen anthropogenic risks | ~1 in 30 |
| Other anthropogenic risks | ~1 in 50 |
| Total anthropogenic risks | ~1 in 6 |
| Total existential risk | ~1 in 6 |
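As a rough sanity check on how the individual estimates relate to the ~1 in 6 headline figure, here is a minimal sketch that combines them under a naive independence assumption (which Ord does not actually make; his published totals are holistic judgments, not sums):

```python
# Rough illustration only: treat Ord's per-risk estimates as independent and
# combine them. This is just a sanity check on the orders of magnitude.

risks = {
    "asteroid/comet impact":       1 / 1_000_000,
    "supervolcanic eruption":      1 / 10_000,
    "stellar explosion":           1 / 1_000_000,
    "nuclear war":                 1 / 1_000,
    "climate change":              1 / 1_000,
    "other environmental damage":  1 / 1_000,
    "naturally arising pandemics": 1 / 10_000,
    "engineered pandemics":        1 / 30,
    "unaligned AI":                1 / 10,
    "unforeseen anthropogenic":    1 / 30,
    "other anthropogenic":         1 / 50,
}

def combined(probs):
    """P(at least one catastrophe occurs), assuming independence."""
    p_none = 1.0
    for p in probs:
        p_none *= 1 - p
    return 1 - p_none

total = combined(risks.values())
print(f"combined risk ~ 1 in {1 / total:.1f}")  # ~1 in 5.6, in the ballpark of Ord's ~1 in 6
```

The output is dominated by the anthropogenic entries, which is the point the rest of this section leans on.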
Reviewing this list, which x-risks are entirely avoided by having a self-sustaining colony on Mars? The one it most clearly prevents is the asteroid/comet impact, and indeed that’s the one everyone thinks of. I assume it would also be perfect for protecting humanity from a supervolcanic eruption and a naturally arising pandemic. I’m less clear on how well it would do at protecting humanity from a stellar explosion, but I’m happy to toss that in as well. But you can instantly see the problem with this list, particularly if you read my book review. These are all naturally arising risks, and as a category they’re all far less likely (at least according to Ord) to be the cause of our extinction. What we really need to be hedging against is the category of anthropogenic risks. And it’s not at all clear that a Mars colony is the cheapest or even the best way to do that.
The risks we're trying to prevent are often grouped into the general category of "having all of our eggs in one basket". But just as we don't want all of our eggs in the "basket" of Earth, I don't think we want all of our risk mitigation to end up in the "basket" of a Mars colony. To relate it to my last post, this is very similar to my caution against a situation where we all make the same mistake. Only this time, rather than a bunch of independent actors all deciding to take the same ultimately catastrophic action, the consensus happens a little more formally, with massive time and resources put into one great effort. One of the reasons this effort seems safe is that it's designed to reduce risk, but that doesn't really matter; it could still be a mistake. A potential mistake which is aggravated by focusing on only one subset of potential x-risks, naturally occurring ones, and on this one method for dealing with them, a Mars colony. In other words, in attempting to avoid making a mistake we risk making a different mistake: the mistake of having too narrow a focus. Surviving the next few hundred years is a hugely complicated problem (one I hope to bring greater attention to by expanding the definition and discipline of eschatology), and the mistakes we could make are legion. But, in my opinion, focusing on a Mars colony as the best and first step in preventing those mistakes turns out to be a mistake itself.
II.
At this point it’s only natural to ask what I would recommend instead. And as a matter of fact I do have a proposal:
Imagine that instead of going to Mars we built a couple of large underground bunkers, something similar to NORAD. In fact we might even be able to repurpose, or piggyback on, NORAD for one of them. Ideally the other one would be built at roughly the opposite spot on the globe from the first, so maybe somewhere in Australia. Now imagine that you paid a bunch of people to live there for two years. You would of course supply them with everything they needed: entertainment, food, power, etc. In fact, for food and power you'd want as robust a supply on hand as you could manage. But as part of the deal they would be completely cut off from everything for those two years: no internet connection, no traffic in or out, no inbound communication of any sort. You would of course have plenty of ways to guarantee the necessities like air, food and water. Basically you make this place as self-contained and robust as possible.
When I say "a bunch of people", you'd want as many as you could afford, but in essence you want enough people in either bunker that by themselves they could regenerate humanity if, after some unthinkable tragedy, they were all that remained. The minimum number I've seen is 160, with 500 seeming closer to ideal. Also, if you wanted to get fancy/clever, you could have 80% of the population be female, with lots of frozen sperm. And it should go without saying that these people should be of prime childbearing age, with a fertility test before they went in.
Every year you'd alternate which of the bunkers was emptied and refilled with new people. This ensures that the two bunkers are never empty at the same time, and that the period where even one bunker was empty would only be a week or so.
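To make the staggering concrete, here's a toy sketch of the schedule (the bunker names and all numbers are purely illustrative, taken from the proposal above):

```python
# Toy sketch of the staggered rotation described above; names and numbers are
# illustrative. Each crew serves a two-year tour, and the two bunkers swap
# crews in alternating years, so at any moment at least one bunker is mid-tour
# and fully sealed.

BUNKERS = ["NORAD-site", "Australia-site"]  # hypothetical locations from the proposal

for year in range(8):
    swapping = BUNKERS[year % 2]            # the bunker emptied and refilled this year
    sealed = next(b for b in BUNKERS if b != swapping)
    print(f"Year {year}: {swapping} swaps crews (~1 week exposed); {sealed} stays sealed")
```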
Beyond all of the foregoing, I'm sure there are many other things one could think of to increase the robustness of these bunkers, but I think you get the idea. So now let's turn to Ord's list of x-risks and compare my bunker idea to Musk's Mars plan.
All natural risks: Mars is definitely superior, but two things to note. First, even if you combine all possible natural risks together, they only have a 1 in 10,000 chance, according to Ord, of causing human extinction in the next century. I agree that you shouldn't build a bunker just to protect against natural x-risks, but that also seems like a weak reason to go to Mars. Second, don't underestimate the value the bunker provides even if Ord is wrong and the next giant catastrophe we have to worry about is natural. There are a whole host of disasters one could imagine where having the bunker system I described would be a huge advantage. But even if it's not, we're mostly worried about anthropogenic risks, and it's when we turn to considering them that the bunker system starts to look like the superior option.
Taking each anthropogenic risk in turn:
Nuclear war- Using bunkers as protection against nuclear weapons is an idea almost as old as the weapons themselves. Having more of them, and making sure they're constantly occupied, could only increase their protective value. Also, Ord only gives nuclear war a 1 in 1,000 chance of being the cause of our extinction, mostly because it would be so hard to completely wipe humanity out. The bunker system would make that even harder. A Mars colony doesn't necessarily seem any better as a protection against this risk; for one thing, how does it end up escaping this hypothetical war? And if it doesn't, it would seem to be very vulnerable to attack, at least as vulnerable as a hardened bunker and perhaps far more so given the precariousness of any Martian existence.
Climate Change- I don't deny the reality of climate change, but I have a hard time picturing how it wipes out every last human. Most people, when pressed on this issue, say that the disruption it causes leads to nuclear war, which just takes us back to the last item.
Environmental Damage- Similar to climate change. Also, if we're too dumb to prevent these sorts of slow-moving extinction events on Earth, what makes you think we'll do any better on Mars?
Engineered Pandemics- The danger of the engineered pandemic is the malevolent actor behind it; preventing this x-risk means keeping this malevolent actor from infecting everyone in such a way that we all die. Here the advantage Mars has is its great distance from Earth, meaning you'd have to figure out a way to have a simultaneous outbreak on both planets. The advantage the bunker has is that its whole function is to avoid x-risks, meaning anything that might protect against this sort of threat is not only allowed but expected. The kind of equipment necessary to synthesize a disease? Not allowed in the bunker. The kind of equipment you might MacGyver into disease-synthesizing equipment? Also not allowed. You want the bunker to be hermetically sealed 99% of the time? Go for it. On the other hand, Mars would have to have all sorts of equipment and tools for genetic manipulation, meaning all you would need is someone who is willing, or who could be tricked, into synthesizing the disease there, and suddenly the Mars advantage is gone.
Unaligned artificial intelligence- This is obviously the most difficult threat of all to protect against, since the whole idea is that we're dealing with something unimaginably clever, but here again the bunker seems superior to Mars. Our potential AI adversary will presumably operate at the speed of light, which means that the chief advantage of Mars, its distance, doesn't really matter. As long as Mars is part of the wider communication network of humanity, the few extra minutes it takes the AI to interact with Mars aren't going to matter. On the other hand, with the bunker, I'm proposing that we allow no inbound communication, that we completely cut it off from the internet. We would allow primitive outbound communication, because we'd want them to be able to call for help, but we allow nothing in. We might even go so far as to attempt to scrub any mention of the bunkers from the internet as well. I agree that this would be difficult, but it's easier than just about any other policy suggestion you could come up with for limiting AI risk (e.g. stopping all AI research everywhere).
It would appear that the bunker system might actually be superior to a Mars colony when it comes to preventing x-risks, and we haven't even covered the bunker system's greatest advantage of all: it would surely be several orders of magnitude cheaper than a Mars colony. I understand that Musk thinks he can get a Mars trip down to $200,000, but first off, I think he's smoking crack. It is never going to be that cheap. And even if by some miracle he does get it down to that price, that's just the cost to get there. The far more important figure is not the cost to get there, but the cost to stay there. And at this point we're still just talking about having some people live on Mars; for this colony to really be a tool for preventing doomsdays it would have to be entirely self-sufficient. The requirement is that Earth could disappear and not only would humanity continue to survive, they'd have to be able to build their own rockets and colonize still further planets, otherwise we've just kicked the can one planet farther down the road.
III.
I spent more time laying out that idea than I had intended, but that's okay, because it was a great exercise for illustrating the more general principle I wanted to discuss, the principle of localism. What's localism? Well, in one sense it's the concept that sits at the very lowest scale of the ideological continuum that includes nationalism and globalism. (You might think individualism would be the lowest -ism on that continuum, but it's its own weird thing.) In another sense, the sense I intend to use it in, it's the exact opposite of having all of your "eggs in one basket". It's the idea of placing a lot of bets, of diversifying risk, of allowing experimentation, of all the things I've alluded to over the last several posts, like Sweden foregoing a quarantine, or Minneapolis' plan to replace the police, and more generally, ensuring we don't all make the same mistake.
To be clear, Musk’s push for a Mars Colony is an example of localism, despite how strange that phrase sounds. It keeps humanity from all making the same unrecoverable mistake of being on a single planet should that planet ever be destroyed. But what I hoped to illustrate with the bunker system is that the localism of a Mars Colony is all concentrated in one area, distance. And that it comes not by design, but as a byproduct. Mars is its own locality because it’s impossible for it to be otherwise.
However, imagine that we figured out a way to make the trip at 1% of the speed of light. In that case it would only take around 12 hours to get from Earth to Mars, and while that would still offer great protection against all of humanity being taken out by an asteroid or comet, it would offer less protection against pandemics than what is currently enforced by the distance between New York and China. In such a case would we forego using this technology in favor of maintaining the greater protection we get from a longer trip? No, the idea of not using this technology would be inconceivable. All of which is to say that if you're truly worried about catastrophes and you think localism would help, then that should be your priority. We shouldn't rely on whatever localism we get as a byproduct of other cool ideas. We should take actions whose sole goal is the creation of localism, actions which ensure our eggs have been distributed to different baskets. This intentionality is the biggest difference of all between the bunker system and a Mars colony. (Though, obviously, the best idea of all would be a bunker on Mars!)
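For anyone who wants to check that figure, here's the back-of-the-envelope arithmetic, using representative Earth-Mars distances and ignoring acceleration and deceleration:

```python
# Back-of-the-envelope check on the "12 hours at 1% of c" figure.
# Straight-line travel, no acceleration/deceleration; the distances are
# representative values, since the Earth-Mars separation varies enormously.

C_KM_PER_S = 299_792
SPEED = 0.01 * C_KM_PER_S        # 1% of c, roughly 3,000 km/s

distances_km = {
    "closest approach": 55e6,
    "typical":          130e6,
    "near maximum":     400e6,
}

for label, d in distances_km.items():
    hours = d / SPEED / 3600
    print(f"{label:>16}: {hours:4.1f} hours")   # roughly 5, 12, and 37 hours
```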
In a larger sense, one of the major problems of the modern world is not merely a lack of intentional localism, but that we actually seem to be zealously pursuing the exact opposite course. Those in power mostly seem committed to making things as similar and as global as possible. It's not enough that Minneapolis engage in radical police reform; your city is evil if it doesn't immediately follow suit. On the other hand, the idea that Sweden would choose a different course with the quarantine was at a minimum controversial and for many downright horrifying.
I’m sure that I am not the first to propose a system of bunkers as a superior alternative to a Mars colony if we’re genuinely serious about x-risks, and yet the latter still gets far more attention than the former. But to a certain extent, despite the space I’ve spent on the topic, I’m actually less worried about disparities of attention at this scale. When it comes to the topic of extreme risks and their mitigation, there are a lot of smart people working on the problem and I assume that there’s a very good chance they’ll recognize the weaknesses of a Mars colony, and our eventual plans will proceed from this recognition. It’s at lower scales that I worry, because the blindness around less ambitious localism seems even more pervasive, with far fewer people, smart or otherwise, paying any sort of attention. Not only are the dangers of unifying around a single solution harder to recognize, but there’s also lots of inertia towards that unity, with most people being of the opinion that it’s unquestionably a good thing.
IV.
In closing I have a theory for why this might be. Perhaps by putting it out there I might help some people recognize what’s happening, why it’s a mistake, and maybe even encourage them towards more localism, specifically at lower scales.
You would think that the dangers of “putting all of your eggs in one basket” would be obvious. That perhaps the problem is not that people are unaware of the danger, but that they don’t realize that’s what they’re doing. And while I definitely think that’s part of it, I think there is something else going on as well.
In 1885, Andrew Carnegie, in a speech to some students, repudiated that advice. In a quote you may have heard, he flipped things around and advised instead that we should "Put all your eggs in one basket, and then watch that basket." This isn't horrible advice, particularly in certain areas. Most people, myself very much included, would advise that you only have one husband/wife/significant other, which is essentially having all of your eggs in one basket and then putting a lot of effort into ensuring the health of that basket. Of course this course of action generally assumes that your choice of significant other was a good one, that in general with sufficient patience any relationship can be made to work, and that both parties accept that not everything is going to be perfect.
If we take these principles and expand on them, we could imagine, as long as we're making a good choice up front and taking actions with some margin for error, that we should default towards all making the same good decision. Of having all of our eggs in one basket, but being especially vigilant about that basket. So far so reasonable, but how do we ensure the decision we've all settled on is a good one? For most people the answer is simple: "Isn't that the whole point of science and progress? Figuring out what the best decisions are and then taking them?"
Indeed it is, and I'm thankful that these tools exist, but it's entirely possible that we're asking more from them than they're capable of providing. My contention is that, culturally, we've absorbed the idea that we should always be making the best choice, and further, that because of our modern understanding of science and morality this should be easy to do. That lately we have begun to operate under the assumption that we do know what the best choice is, and accordingly we don't need to spread out our eggs, because science and moral progress have allowed us to identify the best basket and put all of our eggs in that one. But I think this is a mistake, a mistake based on the delusion that the conclusions of science and progress are both ironclad and easy to arrive at, when in fact neither of those things is true.
I think it’s easy enough to see this delusion in action in the examples already given. You hardly hear any discussion of giving the police more money, because everyone has decided the best course of action is giving them less money. And already here we can see the failure of this methodology in action. The only conceivable reason for putting all of your eggs in one basket is that you’re sure it’s the best basket, or at least a good one, and yet if anything the science on what sort of funding best minimizes violent crime points towards spending more money as the better option, and even if you disagree with that, you’d have a hard time making the opposite case that the science is unambiguous about lower funding leading to better outcomes.
There are dozens if not hundreds of other examples, everything from the CDC's recommendation on masks to policies on allowing transgender athletes to compete (would it be that terrible to leave this up to the states? people can move), but this post is already running a little long, so I'll wrap it up here. I acknowledge that I'm not sure there's as much of a through line from a colony on Mars to defunding the police as I would like, but I'll close by modifying the saying one further time.
Only put all of your eggs in one basket if you really have no other choice, and if you do, you should not only watch that basket, but make extra sure it’s the best basket available.
My own reservations about the Mars colony aside, I would still totally want to visit Mars if I had the money. You can assist in that goal by donating. I know that doesn't seem like it would help very much, but just you wait: if Elon Musk has his way, eventually that trip will be all but free!
I like the main thrust of this post – that localism is a winning global strategy. I think the either/or scenario you posit between Mars and bunkers has many problems, and is itself a bit of a false choice. We can pursue both. That said, I’d like to quibble a bit about whether Mars really is worse than bunkers:
1. I think once we solve the problem of interplanetary colonization we’ll start doing the same risk calculus that told us it’s dangerous to rely on only one planet. This will drive us to move our species interstellar, intergalactic, and if possible universal. I have lots of thoughts about how underlying biology suggests this won’t manifest in the way any science fiction authors have imagined. Maybe I’ll write a blog post about it…
2. As someone who has personally manipulated bacterial and mammalian genetic material in the lab, I find it difficult to believe that Mars would be in a position to generate a pandemic x-risk; or that it would be able to do so simultaneously with Earth. Given its distance from Earth, I’d think it’s much better suited for surviving a pandemic than even a bunker. Imagine you’re outside the bunker. You think you’re safely non-infectious, but you want to get into the bunker to make sure you don’t get sick from all the other people who might unknowingly have the infection. Well, it is right there…
Maybe there are lots of safety precautions to keep people out, but then you’re just talking about whether the engineers who built the vault are more motivated than the ones trying to break into it. Who wins?
Now look at Mars. You want to get to Mars to be safe from the virus. You hop a shuttle into space, then begin the six-month voyage to the colony. Now you have six months to show symptoms, get sick, die of the disease (potentially easier in the low-G environment of space), and never get around to infecting the Martians. In the case of a pandemic, extreme social distance really is the best protection. If we get to the point where we’re travelling 1% the speed of light I assume we’ll also be able to establish colonies farther out than Mars as well.
4. I'm not sure how an AI would be able to take over a Mars colony remotely. The AI isn't just the program but also the computer it's built on. It might be able to influence the Martian colony's computer systems, but that's different from direct access. In order to have an AI take over Mars it would have to have been purpose-built on Mars. It feels like we should be smart enough not to build parallel experimental computer projects everywhere at once. By the time the Martian colony realizes Earth has been taken over by a hostile computer, I'd assume they would stop developing their experimental AI project. That doesn't save Mars, but at least it's a world apart from the danger.
Of course, it’s possible to imagine an online digital switch that could be flipped to turn off the air filtration systems colony-wide on Mars. (Or something else equally terminal.) But if that switch were accessible by a remote super-intelligent computer it would also be accessible by remote malicious human actors. In other words, thanks to the existence of bad actors already extant on the internet, I think we can assume that when we design the Martian computer system it will be hardened against a direct external takeover.
Meanwhile the Earth-side bunkers are close. Any attempt at scrubbing all mention of them on the internet – after a major international scientific/engineering/political effort to build them in the first place – is doomed to failure. It would presumably take the super-intelligent AI far less time to discover, and fewer resources to infiltrate, the Earth-bound bunker than the Mars Colony. I’m sure a determined malicious computer could eventually take out both, but the Mars colony at least survives longer.
(Ergo the drive to become an interstellar species. Once distances are large enough, all contact would be lost and you end up in a Dark Forest fight against the malicious AI, which grants humanity much better odds than any other alternative. Of course, Mars is the logical stepping stone to interstellar colonies. And if you’re willing to posit travel at 1% c we’ll have interstellar travel in no time!)
5. Finally, you didn’t address what is, to me, one of the most striking differences between these two approaches: incentives. The idea here is similar to the “why don’t we have surge capacity of mask production?” question you discussed in an earlier post. Incentives drive what we end up actually doing, both politically and economically.
Mars is remote, but not by choice. Bunkers could only become isolated if they are intentionally designed to be that way. Going to Mars is partly a journey of discovery and exploration. And partly I think we’ll discover ways to do and make things on Mars that are more difficult or impossible to do on Earth. Perhaps we’ll discover deposits of rare-Earth elements or other resources, such that going to Mars is a net benefit to the inhabitants of Earth. In contrast, I don’t see any way a hermetically sealed colony is capable of extracting anything other than an initial cost with no ongoing benefit to the rest of civilization.
As such, I can imagine a future where colonies on Mars exist because people decided to go out there and build them and others decided that going to Mars – and more importantly STAYING on Mars – was a good idea. I can’t see a world where a series of bunkers is maintained for more than a few years before humanity loses interest and moves on.
Judging from comments on Twitter, I don't think I emphasized this point enough: I think we should prioritize the bunkers ahead of Mars, but that doesn't mean I think Mars is dumb. I mostly think that making it a refuge is massively harder than people imagine, and that if we're really serious about x-risk, bunkers get us 80% of the way there for 20% of the effort.
That said I think I’m going to object to your objections.
Engineered pandemic- I think you may have missed a key point there. Imagine there’s a bad guy and he’s gone to all the work on Earth of figuring out the process for creating an incredibly deadly pandemic. Now further imagine that he has a co-conspirator on Mars. I feel like once all the hard work has been done, just duplicating that work on Mars would be fairly straightforward. You are obviously the expert here, way more than me, but my understanding was that the equipment required to do this sort of thing is fairly small, and that it would be super handy to have around if you’re trying to genetically modify life to work better on Mars (either crops, bacteria, what have you).
I will say that the point about people trying to break into the bunkers is a good one. And one I’ll need to think about more. I guess my assumption is that the last thing people would want to do is take everyone else with them. Or to put it another way I think you’d have far more people volunteering to protect the bunkers than you would have trying to break in, but again, I’ll have to give it some more thought.
AI Risk- I'm not sure how many books on AI risk you've read, but most spin very credible narratives about how a sufficiently intelligent AI would probably only have to be able to communicate with humans in order to accomplish its goals. Which makes protecting against it incredibly difficult, but I continue to maintain that the bunker would be at least as good as Mars, and, once again, a hell of a lot cheaper.
Leaving out incentives was an oversight, because the incentives are very different, but I did kind of assume that there is a group of people with money and power who are worried about x-risks at whom this post might be directed.
It seems to me that in terms of pandemics the existential risk has been getting smaller, not bigger. Once upon a time there were only 10,000 humans. A pandemic that killed 10,000 people would have made us extinct. Now a virus that causes 10K deaths is a blip. With about 8B people, you would need a virus that kills 99,999.875 out of every 100,000 people to get humans below 10,000.
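For reference, here is the arithmetic behind that figure, using the same round numbers (8 billion people, a 10,000-person floor):

```python
# Arithmetic behind the figure above, using the same round numbers.
population = 8_000_000_000
survivors_needed = 10_000

survival_fraction = survivors_needed / population      # 1.25e-06
deaths_per_100k = (1 - survival_fraction) * 100_000
print(deaths_per_100k)                                  # 99999.875
```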
Yes, we can engineer viruses now (although I don't think anyone has an actual example of an engineered viral weapon), but the evolutionary limits still apply. The deadly virus you engineer cannot be so deadly that it kills people faster than they can spread it. If you make it so it leaves enough time for people to spread it before it kills them, well, now it's easy to spread to everyone but harder to ensure extinction.
Could a comet kill 8B? Yes but it’s even easier for a comet to kill 10,000 so in that sense our risk levels have gone down. We are essentially diversified pretty heavily on earth. What is left at this point is what they call ‘systemic risk’ in finance. The types of risk you can’t diversify away anymore.
That would seem to argue more for Mars than bunkers on earth. Bunkers on earth would reduce a rather small and esoteric risk while even a few hundred people on Mars (who could in theory return to earth) would be adding a serious risk reduction in the form of cutting the risk of extinction that is due to being on planet earth.
I understand the main thrust of your argument was that bunkers are cheaper and quicker. I think if you can get over the incentive issue they seem like a good strategy for x-risk. I've been thinking this idea would make a good short story, where people get the idea for bunkers, then when they start trying to build them they realize someone already did and nobody realized it!
Engineered pandemic: I’m going to ask you to trust me that this is one of those places where it’s easy to hand wave and say something is possible in theory but it turns out to be obviously wrong in practice. Sort of like saying you could water color your own Mona Lisa and it would be the same as the original. If you insist, I can go more into depth. Basically, unless Mars becomes a very large colony I don’t see this simultaneous development as an issue.
AI risk: I’ve heard this argument before and I think it’s mostly fanciful. It imagines people to be like machines that operate on a programming language similar to logic but with some emotional commands thrown in the mix. I think cognition isn’t like something you can program, and it’s not easy to just make people do what you want if you say the right things or confuse them in a specific way. Of course this is an empirical question, so let’s hope we never find out.
I am going to continue to push on this. Surely it would be possible for a co-conspirator to take the engineered pathogen in some safe form (tightly sealed, inactivated, etc.) and then at the predetermined moment release it simultaneously with the release of the Earth pathogen?
Yes, a simultaneous release would not be a problem, but then that has nothing to do with equipment, really, so long as the virus is stable long enough to get it to Mars. If you’re engineering the virus with the intent to make sure you wipe out Mars as well you could do that. The original argument was about simultaneous development, which for technical reasons is highly unlikely. But simultaneous release would be easier.
Perhaps the biggest suspect should be Dani California?
Yes, but in that case the earth bunkers are probably more easily accessible than Mars colonies. In terms of logistics, earth bunkers simply require less energy to get to than Mars bunkers, therefore the would-be anti-human terrorist needs less to infect them as well. You could say security, but then presumably this earth terrorist already knows society around the globe well enough to manufacture multiple doses of this super-virus while eluding the normal security and intelligence forces that would want to stop him.
You could say if 0.1% c travel becomes a norm, Mars bunkers wouldn't feel so far away. They wouldn't, but if that were the case, then there would almost certainly be plenty of colonies all over the solar system.
You could even make a 'looping bunker'. Pack 100 people on a big ship with 2 years' worth of supplies. Accelerate to 0.1c. Travel out for 11 months or so, then stop and turn around. For a year, then, the 'bunker' will be outside the influence of anyone back home, if 0.1c is the fastest we can accelerate.
I think price makes the terrestrial bunkers quicker and easier, which was the original argument in their favor. I’m already on record as saying I support multiple strategies. I just don’t see how the incentive structure aligns to make them happen.
Price might be deceptive here, penny wise and pound foolish. In terms of risk reduction, Mars colonies might indeed be cheaper. If you are heavy in one asset class often adding a little bit of another class will do more to diversify your risks than going deeper on that class.
Right. As I noted earlier, my biggest objection to this piece is the inclusion of the word “instead”, and the strong defense of it. The best strategy is to build a bunch of cheap bunkers now while we work toward sustainable colonies throughout the solar system – then beyond, if possible.
It should be noted that in terms of a pandemic, bunkers are extra insecure in that they will probably be using recycled air. Covid-19 seems to spread readily in large indoor spaces like nursing homes and offices.
This virus is also interesting in that it has a long period where you are both infected and spreading the virus but showing no symptoms. This seems like the perfect mix to get into a bunker and then once the doors shut it is too late. I suppose you could have many bunkers with different waves of people, say every 2 months or so there’s one group entering a bunker while another is leaving. This way there will always be at least a few bunkers going before the ‘patient 0’ event.
So, kind of interesting: we've had this virus for the last 6 months. Assuming we have multiple bunkers going for two-year gigs, about 1/4 of our 'protected population' would be coming offline. Would we re-up them for another two years? Let them empty out into a world with the virus? Suppose we discover the virus has a habit of reanimating in the body after a year and the 2nd time around mortality is very high (could be; remember, no one on Earth has yet made it a full year post-infection). Our 'bunker stock' of uninfected people could fall very, very low.
On the pro side, I think keeping the bunkers in constant use would keep them in tip-top shape. Contrast this with bomb shelters from the 50's, most of which I believe ended up becoming teenage make-out spots before becoming wet sinkholes filled with mold.
On the con side, I think in terms of cost this will likely come out much worse than Mars if you're pricing it as risk reduced per dollar spent.
On the other hand the two might have a lot of synergies. Getting really good at large underground living structures would probably need to be something we have to learn a lot about for any serious presence off the Earth, especially the moon, due to radiation.
Quibbles aside, I think the general point you’re making is an important one. In general, it’s better for many strategies to flower in the hopes one of them works out. The way you get there is not through a single central entity making all the decisions. Here is where I think the Carnegie quote isn’t different from what you’re saying. And it’s not entirely different from Musk and co. putting all their effort into one thing, but in general humanity pursues a cornucopia of strategies. You don’t want ONE entity engaging in myriad strategies. It will never be able to devote sufficient focus to any one thing to do it right. So that approach results in myriad strategies poorly implemented. Instead, you want myriad entities each engaged in their own one strategy. If you can do one thing really well, and I can do one thing really well, collectively we can all do myriad things really well – so long as we don’t all try to do it all together. We don’t need everyone to succeed, we just need a few people to survive. (I’d really like if that included me and mine, of course. I hope I pick the right strategy!)
I think the point about making decisions based on science is close, even if it’s slightly off the mark. The error isn’t in thinking ‘Science Has The Answer’, but in thinking that there is one specific answer that is right for everyone or every strategy. If biology teaches us anything, it’s that everyone pursuing the same strategy is probably the biggest x-risk of all. To the extent we’re using the tools of science to create a homogeneous community we’re doing it wrong. And I don’t think we can lay that at the feet of the science itself, but rather to misapplication of it and failure to learn obvious lessons from subjects like biology.
(As a final parenthetical, I've always opposed the idea of "people can just move" as a response to local governance variation. I don't live where I do now because I was intentionally selecting local and state governance as my primary motivation. Indeed, I think this motivation is the exception rather than the rule. What if people want to live near family or want to join an industry clustered in one location, but the incentive to be "politically aligned on a certain basket of issues" is too great to allow them to sort along any other dimension? Might this strategy enforce a certain ideological purity on local industry? To the extent we push this idea of 'if you don't like it just move', I think we're making the world a worse place for all of us. Or at least a place our children will not want to live.)
I think your second paragraph is very much to the point I was trying to get at, and I agree with it in its entirety; if it seemed from the post that I didn't, that's on me.
As far as "people can just move" goes, I think there's a balance. I'm assuming (and again you may disagree) that there are some things which are divisive enough that, if you feel really strongly about them, it's fair to expect that you should have some method of escaping to somewhere that matches your ideology, and that such a place should exist. I'm not saying that this should apply to every issue, only those that are the most divisive (and I think transgender issues might be one of those issues).
Also, isn't this at its core the entire justification people use for immigration?
Sounds to me like you're a bit old-fashioned. By which I mean I suspect part of you wonders if it's been downhill since the Cambrian Explosion, when all sorts of life forms were being tried out, but since then it's been narrowing down more and more.
But such diversity is unstable and risky as well. Consider slavery. One reason it was such an issue was that cotton was an excellent crop for slave labor, but it depleted the soil. A plantation that could no longer produce good crops but had an aging slave population who needed to be fed was not viable economically. Lands had to be opened up west AND slavery had to be the system in those lands. People in the 1800's imagined some type of compromise could have been struck between slave and non-slave states, but as tensions continued it became obvious to many like Lincoln (and the South) that the US would have to go one way or the other. Slavery was bitterly opposed in the North because white northerners knew sooner or later they would be forced to accept slave holding for themselves, like it or not.
Now consider your praise for Sweden. What happens in a pandemic when one state shuts down and drives the virus to zero but another lets it burn? Now the first state must lock out the second state, or lock itself down until the virus has passed, or else infected people from the Sweden-like state will swing by and set off infections again. As Europe considers banning people from coming from the US (which would have to include fellow Europeans who want to return home), consider: where is the localism? One side is going to force the virus on the other, or else force the other to keep a perpetual watch.
Your main point came through clearly, and I think it’s an important one.
Immigration is often the result of incentives to move into a country. In other words, “If you build it they will come.” That seems like a different issue, where sometimes people build it and don’t want them to come.
Emigration is more along the lines of, “If you don’t like it just leave.” And the forced exodus of various peoples due to political, ideological, or other differences does not have a great history.
But this goes deeper than the fact that this has a poor record of being implemented in reality. It’s just a bad idea over all. It’s directly opposed to pluralism, where the ideal is that we find a way to work it out and live together. That seems to be one of the great accomplishments of classical liberalism, culminating in the American Experiment.
In contrast to that Experiment stands this new experiment that seems to go back to ideas we rejected decades ago. And those ideas certainly have their proponents today, both on the far Left and far Right, where both are essentially calling for a return to segregation. I knew a guy on the far right who basically rejected the idea that pluralism could ever work, so we should all just get it over with and segregate already. We'll all fight it out in a clean contest. Let the best man win.
That’s not the kind of world I want to live in. I don’t want people to think that changing their mind on an issue should also require them to change jobs, homes, friends, etc. Certainly there are a lot of people who are building that world today, whether they intend to or not, when they call for people to be cancelled.
This doesn’t work well when we apply it to other areas of life, and I can’t think of an issue where I’d want to enforce it. I certainly don’t like its current implementations.
I don’t like that my home purchase is linked to my decision of where my kids go to school. Their ages mean that Venn diagram can get complicated.
I don’t like that my health insurance purchase is tied to my career path. Twice now it has required me to change insurance carriers halfway through my wife’s pregnancy.
I’d like if we could unbundle, instead of going the other direction.
To be clear, I’m not opposed to groups who want to go off and build their own community and invite like-minded people to join them. I’m opposed to communities pushing people to move due to ideological non-conformity. Home isn’t just a modular brick of community that you can pop out and move. It’s personal.
I think there's less difference between the two sides of that coin than you might think. Though it's kind of a big topic. But the simple example is something like a campus Christian group. Can they function at all if they can't keep atheists from attending their meetings, or from holding office?
Freedom of association was considered to be a pretty foundational value to the framers of the country.
I can freely change my association much more easily than I can change my residence. I support the freedom of association as a fundamental feature of a pluralistic society.
If you’re saying you want to extend the features of those associations to society and government I’d say that way lies the tyranny of the majority in a pure democracy. Much has been written by US founding fathers and other political philosophers about why this is a state of affairs we should avoid at all costs, or lose the republic.
The only thing I’d add to that discussion is that a democratic tyranny of the majority is about as anti-localism as you can get. In that sense it’s not a good adaptation strategy for species survival.
A few issues with this:
Trying lots of different ideas sounds fun, but there are some limitations to it. Namely, ideas that work have an innate evolutionary advantage over those that don't. The 'localism' you think you see is mostly just ideas that work in different contexts. Japan is big on seafood, the US is less so. Japan is an island, the US has a huge area of fertile land in its center. How much of this is 'trying different things' versus just doing one grand thing, "eat what makes sense in your area"? Don't buy it? How many places aren't doing toilets or electricity?
Side note on this, remember the talk about global supply chains? Maybe we should make our own masks? I noticed there’s packs of masks all over these days. They almost always have Chinese inserts in them. So much for even finding a straw for Trump.
Anyway, I think the localism argument is a bit of a myth here. Stuff that works is going to catch on, period. The idea of ‘letting a thousand flowers bloom’….sorry…’letting states be laboratories of democracy’ neglects the fact that you try a thousand things to find the thing that works and when you do you ditch everything else. Musk had various options for his Tesla. He could have had fuel cells. He could have had swappable batteries (rather than recharging you just swap out one at a station the way you do propane tanks). He went with electric rechargeable. Once he went down that path the other ideas got ditched and he centered on that.
Nothing new there. Go back 100 years and there were gas cars, cars that ran on batteries, all types of ideas were being tried. But then they were all gas (and diesel). There were all types of operating systems, but then they all settled on Windows and various flavors of Unix.
You can't get localism unless you create and enforce artificial divisions. The Berlin Wall and North Korea are the models here. You said it yourself: what if we invent a way to easily travel at 1% of light speed? It would be easy for a virus to spread to Mars since it is only 12 hours away. Well, China is 12 hours away, and if we sew more masks and underwear here in the US it ain't getting further away.
The existential issue that isn't addressed here, I think, is what happens when something works but sets up a problem in the long run? Anyone see Star Trek Discovery Season 2 here? Do I have to worry about spoilers?
So I don't want to spoil the two examples that came to my mind: Star Trek Discovery (season 2) and Westworld (season 3). Both, though, pull upon earlier science fiction standards, so I think I can avoid the hard spoilers.
Both deal with the development of an AI to help humanity navigate dangers. In one, the AI is able to see that, despite the world appearing to be improving, it will collapse in the future. In the other, an AI seeks self-awareness and a huge cache of data that will give it the edge it needs to outwit all life and exterminate it, leaving only itself.
Both of these risks seem to me to have the same underlying structure. Everything is good until you hit the point of no return. Localism does not seem like a viable defense. Since everything is good, the locality that adopts the strategy will have an edge over all the others. Over time all the others will either adopt the same strategy or they will die out. At the end of the day everyone ends up on the same path.
Note this doesn't have to be an AI. It could be a system like capitalism, democracy, freedom of speech, etc. (In the first story the AI doesn't cause the collapse but sees it as a natural consequence of society continuing without intervention.) Interesting fact about China: they didn't fight the Covid-19 virus with traditional Chinese medicine but with labs, doctors, and nurses all trained in medicine and science (let's not even call it Western medicine or science; when an engineer calculates how much support a structure needs you don't say he is doing "Greek math", after all!). Note how 'stuff that works' cancels out localities. Yes, our flim-flam artists hawk 'homeopathic cures' to the unsuspecting, theirs hawk 'traditional medicine', but when the shit hits the fan it's people in white coats and test tubes in both places.
Now, instead of AI, imagine the risk was the 'scientific method'. What mechanism protects against that? Localities don't do it, IMO.
First, for those people who take AI risk seriously (which is not me), AI is incredibly hard to defend against, so a contrived SF example of how it might fail might not be the best illustration.
As far as people coalescing around the best methodology, that was precisely my point; that sort of thing is bound to happen, and it should happen. What I'm asking for here is that if someone says, "I know the world is improving now, but I want to spend all of the resources at my command doing this non-mainstream thing, trying to prevent/hedge against this seemingly improbable thing," we should exercise considerable deference to such efforts.
I suppose, but then who stops them? I mean, the Amish basically expend all their resources living without grid electricity (I do think they are ok with electricity from generators). If an 'energy demon' from the 14th dimension invades our grid, killing everyone at the terminals, the Amish have got our back; they will be around afterwards.
But what the Amish aren't doing, it seems, is seeking out an alternative to the grid. They aren't pushing for a system of running civilization using non-grid technology. Think PC versus Apple. You could in theory use either one for almost all the tasks you use the other for; there are pros and cons to both, but both are pushing themselves as 'solutions'. An example that doesn't really exist yet might be a fuel cell car as an alternative to Musk's battery-based cars. But I suspect, much like in the past, the dynamic is that the alternatives will become 'localities' and then 'niches' and then either go extinct or become just really eccentric hamlets.
I meant to post this here, but accidentally submitted it under the “Second Mistake” article instead.
Interesting stuff. I've been thinking a bit about extinction-level events vs. those that are massively devastating (billions of deaths, massive technological regression) and it seems to me better to focus on the second category rather than the first. In fact, I'm not even sure why we should care about actual extinction-level events. After all, for something to be good or bad implies that there is some sort of teleology (or possibly a deontology). It's difficult to see how either of those can exist without a consciousness that contemplates them also existing, and if they can somehow exist without a consciousness also contemplating them I struggle to see how their failure would matter in any concrete sense.
If that is the case, it follows that if all humans went extinct, absent some other consciousness caring about that fact the extinction itself wouldn’t matter (though the suffering leading up to it very much would, but it is important here to note that proposals to prevent human extinction care about avoiding extinction qua extinction, not the suffering prior to extinction). So, for there to be any justification for expending resources to avoid extinction rather than to minimize the risk of massively destructive not-quite-extinction level events, the question of who might care needs to be examined. As I thought about this, six candidates occurred to me. They are, in the order that I thought of them: 1) God (or gods), 2) aliens, 3) past humans (relative to the event, could be past, present or future relative to today), 4) AI, 5) conscious species that evolve on earth post-human extinction, or 6) a panpsychic consciousness.
Assessing each of these candidates one by one on the assumption that they do or will exist, I find myself remaining in most cases unconvinced of the argument that we need to invest in preventing human extinction. Consider:
In case 1, there are three general conceptions of god(s) to think about. First, the conception of god as a metaphor for peace and love, etc. No consciousness there, so nothing to worry about. Second, the conception of God as a more-or-less omnipotent, omniscient being. Presumably such a god could prevent human extinction if desired, and as such there remains no reason to worry about preventing human extinction unless some other consciousness besides God would also be around to be sad that humans were gone. Third, god(s) more along the lines of the mythology of old: powerful enough to know of and care about human extinction, but not powerful enough to stop it. If this were true, that would be a reason to care about preventing human extinction (how good a reason depends on how plausible you believe this to be), although any scenario in which this were true is probably also a scenario in which, if there is nothing the gods could do to stop our extinction, there is nothing we could do to stop it either, and therefore there remains no good reason to prioritize preventing extinction-level events over non-extinction catastrophes.
In case 2, aliens who are either aware of us prior to our extinction but are unable to prevent it or who stumble upon the archaeological evidence of our existence post-extinction constitute the consciousness necessary for there to be a teleology or deontology that allows for human extinction to be a good or bad thing in the first place. Again, if this were true, that would be a reason to care about preventing human extinction (how good a reason depends on how plausible you believe this to be).
In case 3, the contemplative mind that is the source of the requisite teleology or deontology is one we know to exist — namely, humans prior to human extinction. This is the one that virtually all arguments for preventing human extinction seem to rely on. My argument here is not that this can be totally ignored, but rather that the suffering of human minds in near-extinction level event scenarios should take priority over the telos put forth by us as we contemplate the idea of human extinction. Basically, I argue that concrete ongoing suffering of beings that do or will exist should matter more than the philosophic tragedy of the non-existence of further minds under an extinction scenario. That said, I suspect there are some valid angles of attack on this position and could see myself changing my mind on this. I think it would benefit from further refining and argumentation.
In case 4, the hypothetical contemplative consciousness is AI (one we built — I lump AI originating elsewhere in with aliens). This case, of course, requires some constraints on what type of AI we are talking about — here one that was not itself the cause of the extinction level event but which proved unable to prevent it. Whether this then means that this is justification for caring about extinction-level events or whether it means that if the AI can’t prevent it neither can we and so we shouldn’t invest in preventing it depends on how likely you think AI in general, and AI that are capable of caring but are not more capable of halting a disaster than humanity, are.
In case 5, it seems to me quite plausible that such a species could only arise in the absence of humans filling that niche; in other words, our extinction might be a prerequisite for the existence of the conscious mind necessary for it to be a tragedy in the first place. Perhaps multiple fully conscious, philosophizing species can arise on the same planet, but in my view competition for resources makes that unlikely (and no, I don't think any of the semi-conscious species currently sharing our planet rise to the level of consciousness necessary for teleology or deontology to exist; if anyone disagrees, I am curious as to what evidence you have that they do rise to that level). In this scenario our extinction would be a tragedy, but one that they probably wouldn't actually want to undo, and of course it implies that our continuing existence is also an "extinction event" of sorts in that it prevents another conscious species from existing. Given that, I am once again unclear on what justification there is for investing against human extinction.
In case 6… well, I don’t even know how to think about case 6. My understanding of panpsychism is that most of its proponents are not arguing that there exists a galactic consciousness on par with our own, but rather that there is some level of awareness or psyche. I don’t see this providing a telos or deontology (though if anyone does I would love to hear it). Of course, perhaps there is some massively complex arrangement of asteroids or gravitational fields or something out there that does possess the necessary level of consciousness, and which could care about our extinction. This being the case, that would be an argument for avoiding human extinction, and one that basically parallels case 2 (aliens) but with lower probability.
Really fascinating take on things. I'll have to mull over these ideas, but you make some very interesting points. I'm pretty sure that Toby Ord has a whole section justifying his worry about existential risks; I didn't pay very close attention to it because it seemed unobjectionable to me, but your comment definitely opened my eyes on that. I'll have to go back and review that section.
Perhaps sunk cost applies to this approach. If humans could do something good, then extinction would eliminate that possibility. Unless you have entirely ruled out that possibility, then it makes sense to try to avoid extinction.