Tag: Catastrophe

Polycrises, or Everything, Everywhere, All at Once


There are people who are optimistic about the future. I am not one of them. (I do have religious faith, but that’s different.) I am open to the idea that I should be more optimistic, but that doesn’t seem supported by “facts on the ground” as they say.

Some might argue that I have a bias for ignoring the good facts and focusing on the bad ones. That’s certainly possible, but I have put considerable effort into exposing myself to people making the case for optimism. Here are links to some of my reviews of Pinker, perhaps the most notable of our current optimists. Beyond Pinker I’ve read books by Fukuyama, Deutsch, Yglesias, Zeihan, and Cowen, and while these authors might not have quite Pinker’s optimism, they nevertheless put forth optimistic arguments. Finally, if any of you have recommendations for optimists I’ve missed, I promise I’ll read them. (Assuming I haven’t already. That list of authors is not exhaustive.)

After doing all this reading, why do I remain unconvinced, and why do I expect to remain that way regardless of what else I come across? To understand that, we first have to understand their case for optimism. It generally rests on two pillars:

First, they emphasize the amazing progress we’ve made over the last few centuries, and in particular over the last few decades. And indeed there has been enormous progress on things like violence, poverty, health, infant mortality, and minority rights. They assume, with some justification, that this progress will continue. Generally it is dicey to try to predict the future, but they have a pretty good reason for believing that this time it’s different: through the tools of science and reason we have created a perpetual knowledge generation machine, and increasing knowledge leads to increasing progress. Or so their argument goes.

Second, they’ll examine the things we’re worried about and make the case that they’re not as bad as people think. That certain groups are incentivized to engage in fearmongering, either because it attracts an audience, because there’s money involved, or because of their individual biases. These groups highlight the most apocalyptic scenarios and data, while downplaying things that paint a more moderate picture. Pinker is famous, or infamous depending on your point of view, for his optimism about global warming. Which is not to say that he doesn’t think it’s a problem, merely that he believes the same tools of knowledge generation that solved, or mitigated, many of our past problems will be up to the task of mitigating, or outright solving, the problem of climate change as well.

In both of these categories Pinker and the others make excellent and compelling points. And, on balance, they’re entirely correct. Despite the pandemic, despite the war in Ukraine, despite the opioid epidemic, and despite a lot of other things (many of them mentioned in this space), 2023 is just about the best time to be alive, ever. Notice I said “just about”. Life expectancy has actually been going down recently, and yes, the pandemic played a big role in that, but it had been stagnant since 2010. Teen mental health has gotten worse. Murders are on the rise. This is a US-centric view, but outside of the US there’s the aforementioned war in Ukraine, and famine is on the rise in much of Africa. With these statistics in mind it certainly seems possible that, as great as 2023 is, 2010 was better.

Does this mean that we’ve peaked? That things are going to get steadily worse from here on out? Or are we on something of a plateau, waiting for the next big breakthrough? Perhaps we’re on the cusp of commercializing fusion power, or of widespread enhancements from genetic engineering, or perhaps the AI singularity. 2023 does have ChatGPT, which 2010 did not. Or are our current difficulties just noise? When people look back on things from the year 2500, will everything look like one smooth exponential curve? This last possibility is basically what Pinker and the others say is happening, though some are less bullish than Pinker, and some, like Deutsch, are more bullish.

And, to be clear, on this first point, which is largely focused on human capacity, they may be right. I’m familiar with the seemingly insoluble manure crisis of the late 1800s, and how it suddenly became a complete non-issue once the automobile came along. Still, evidence continues to mount that things are slowing down, that civilization has plateaued. That science, the great engine powering all of our advances, is producing fewer great and disruptive inventions, and that resistance to innovation is increasing. If so, that would mark the big difference between now and the late 1800s. Back then science still had a lot of juice. Now? That’s questionable. And we might be lucky if it turns out to be just a plateau; the odds that we’re actually going backwards are higher than the optimists will admit. But this isn’t the primary focus of this post. I’m more interested in their second claim, that the bad things people are worried about aren’t all that bad.

In general, albeit in a limited fashion, I also agree with this point. I think if you take any individual problem and sample public opinion, you will find a bias towards the apocalyptic, one that isn’t supported by the data. As an example, many, many people believe that global warming is an extinction-level event. It’s not. Of course this assumes that people know about the problem in the first place. There’s a lot of ignorance out there, but it’s human nature that those who are worried about a problem likely worry more than is warranted. And Pinker, et al. are correct to point it out. That’s not the problem; the problem arises from two other sources.

First, and here I am using Pinker’s book Enlightenment Now as my primary example, there’s an unwarranted assumption of comprehensiveness. In the book, Pinker goes through everything from nuclear war, to AI risk, to global warming, and several more subjects besides. And when it’s over you’re left with the implication: “See, you don’t need to worry about the future, I’ve comprehensively shown how all of the potential catastrophes are overblown. You can proceed with optimism!” If you’ve been reading my posts closely you may have noticed that on occasion (see for instance the book review in my last post of A Poison Like No Other) I will point out that the potential catastrophe I’ve been discussing is not one covered by Pinker.

Of course, if Pinker just missed a couple of relatively unimportant problems then this oversight is probably no big deal. He covered the big threats, and the smaller threats will probably end up being resolved in a similar fashion. The problem is we don’t know how many threats he missed. Such is the nature of possible catastrophes: there’s no cheat sheet where they’re all listed in order of severity. Rather the list is constantly changing, catastrophes are added and subtracted (mostly added), and their potential severity is, at best, an educated guess. Some of them are going to be overblown, as Pinker correctly points out. Some are going to be underestimated, which might end up being the case with microplastics. I’m not sure how big a problem microplastics will eventually turn out to be, but given that they didn’t even make it into Pinker’s book, I suspect he’s too dismissive of them. But beyond those catastrophes where our estimate of the severity is off, there’s the most dangerous category of all: catastrophes that take us completely by surprise. I would offer up social media as an example of a catastrophe in this last category.

As I said, the problems of the optimists arise from two sources. The first is the assumption of comprehensiveness; the second is ignorance of connectedness. To illustrate this I’d like to return to a post I wrote in 2020 about Fermi’s Paradox.

At the time I was responding to a post by Scott Alexander, who argued that we shouldn’t fear that the Great Filter is ahead of us. For those who need a refresher on what that means: Fermi’s Paradox is paradoxical because if the Earth is an average example of a planet, then there should be aliens everywhere, but there aren’t. Where are they? Somewhere between the millions of Earth-like planets out there and becoming an interstellar civilization there must be a filter. And it must be a great filter, because seemingly no one makes it past it. Perhaps the great filter is developing life in the first place. Perhaps it’s going from single-celled to multicellular life. Or perhaps it’s ahead of us. Perhaps it’s easy to get to the point of intelligent life, but then that intelligent life inevitably destroys itself in a nuclear holocaust. In his post Alexander lists four potential great filters which might lie ahead of us and demonstrates how each of them is probably not THE filter. I bring all of this up because it’s a great example of what I’ve been talking about.

First off, he makes the same assumption of comprehensiveness I accused Pinker of making: listing four possibilities and then assuming that the issue is closed, when there are dozens of potential future great filters. But it’s also an example of the second problem, the way the problems are connected. As I said at the time:

(Also, any technologically advanced civilization would probably have to deal with all these problems at the same time, i.e. if you can create nukes you’re probably close to creating an AI, or exhausting a single planet’s resources. Perhaps individually [none of them is that worrisome] but what about the combination of all of them?)

Yes, Pinker and Alexander may be correct that we don’t have to worry about nuclear war, AI risk, or global warming when considered individually. But when we combine these elements we get a whole different set of risks. Sure, rather than armageddon there’s an argument to be made that nuclear weapons actually created the Long Peace through the threat of mutually assured destruction, but what happens to that if you add in millions of climate refugees? Does MAD continue to operate? Or, maybe climate refugees won’t materialize (though it seems like we’ve got a pretty bad refugee problem even without tacking the word climate onto things). Are smaller countries going to use AI to engage in asymmetric warfare because nukes are prohibitively expensive and easy to detect? Will this end up causing enough damage that those nations with nukes will retaliate? And then there’s of course the combination of all three things: are small nations suffering from climatic shifts going to be incentivized to misuse new technology like AI and destabilize the balance created by nukes?

This is just three items, which produces only six possible catastrophes. But our list of potential individual catastrophes is probably in the triple digits by this point. Even if we just limit it to the top ten, that’s 3.6 million potential combinatorial catastrophes.
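
If you want to check my arithmetic, here’s a minimal sketch in Python (the problem names are just placeholders). The six and 3.6 million figures come from counting ordered sequences of the problems (3! and 10!); if you’d rather count unordered combinations of two or more interacting problems, you get 4 and 1,013 instead. Smaller, but the combinatorial explosion is the same.

```python
from itertools import combinations
from math import factorial

problems = ["nukes", "climate change", "AI"]  # placeholder names

# Ordered sequences of all the problems: 3! = 6, and 10! is the 3.6 million figure
print(factorial(len(problems)))  # 6
print(factorial(10))             # 3628800

# Unordered combinations of two or more interacting problems: 2**n - n - 1
def interacting_combos(n):
    return sum(len(list(combinations(range(n), k))) for k in range(2, n + 1))

print(interacting_combos(3))   # 4 (three pairs plus the triple)
print(interacting_combos(10))  # 1013
```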

Once you start to look for the way our problems combine, you see it everywhere:

  • Microplastics are an annoying pollutant all on their own, but there’s some evidence they contribute to infertility, which worsens the fertility crisis. They get ingested by marine life which heightens the problems of overfishing. Finally, they appear to inhibit plant growth, which makes potential food crises worse as well.
  • You may have seen something about the recent report released by the CDC saying that adolescent girls were reporting record rates of sadness, suicidal ideation, and sexual violence. Obviously social media has to be suspect #1 for this crisis. But I’m not sure it can be blamed for the increase in sexual violence. Isn’t the standard narrative that kids stay home on their phones rather than going out with friends? Don’t you have to be with people to suffer from sexual violence? I’d honestly be surprised if pornography didn’t play a role, but regardless this is definitely a case where two problems are interacting in bad ways.
  • If you believe that climate change is going to exacerbate natural disasters (and there’s evidence for and against that), then these disasters are coming at a particularly bad time. Lots of our infrastructure dates from the 50s and 60s, and much of it from even before that. But because of the pension crisis being suffered by municipalities, we don’t have the money to conduct even normal repairs, let alone repair the additional damage caused by disasters. And most projections indicate that both the disaster problem and the pension problem are just going to get a lot worse.
  • I’m not the only one who’s noticed this combinatorial effect. Search for polycrisis. There are all sorts of potential crises brewing, and most of the lists (see for example here) don’t even mention the first two sets of items on my list, and they only partially cover the third one.

You may think that one or more of the things I listed are not actually big deals. You may be right, but there are so many problems operating in so many combinations, that we can be wrong about a lot of them, and still have a situation where everything, everywhere is catastrophic all at once.

Pinker and the rest are absolutely correct about the human potential to do amazing good. But they have a tendency to overlook the human potential to cause amazing harm as well. In the past, before things were so interconnected, before our powers were so great, just a few things had to go right for us to end up with the abundance we currently experience. But to stay where we are, nearly everything has to go right, and very little can go wrong.


If you’re curious, I did enjoy the Michelle Yeoh movie referenced in the title. I’d like to say that I got the idea for this post from the everything bagel doomsday device, but I didn’t. Still, I like bagels, particularly with lox. If you’d like to buy me one, consider donating.


Eschatologist #12: Predictions


Many people use the occasion of the New Year to make predictions about the coming year. And frankly, while these sorts of predictions are amusing, and maybe even interesting, they’re less useful than you might think.

Some people try to get around this problem by tracking the accuracy of their predictions from year to year, and assigning confidence levels (i.e. being 80% sure X will happen vs. being 90% sure Y will happen). This sort of thing is often referred to as Superforecasting. These tactics would appear to make predicting more useful, but I am not a fan.

At this point you might be confused: how could tracking people’s predictions not ultimately improve those predictions? For the long and involved answer you can listen to the 8,000 words I recorded on the subject back in April and May of 2020. The short answer is that it focuses all of the attention on making correct predictions rather than making useful predictions. A useful prediction would have been: there will eventually be a pandemic and we need to prepare for it. But if you want to be correct you avoid predictions like that, because most years there won’t be a pandemic and you’ll be wrong.
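
To make that concrete, here’s a toy sketch with invented numbers: two forecasters predict “will there be a pandemic this year?” every year for twenty years, and exactly one pandemic occurs. Scored by the Brier score (mean squared error between stated probabilities and outcomes, a standard accuracy measure), the forecaster who never warns comes out ahead:

```python
def brier(prob, outcomes):
    # Mean squared error between a stated probability and what happened (0 or 1);
    # lower means "more accurate"
    return sum((prob - o) ** 2 for o in outcomes) / len(outcomes)

outcomes = [0] * 19 + [1]  # one pandemic in twenty years

print(brier(0.02, outcomes))  # ~0.048: "almost certainly not", every year
print(brier(0.20, outcomes))  # ~0.070: the one ringing the alarm bell
```

The cautious forecaster wins on accuracy while contributing nothing to preparedness, and that, in brief, is my objection.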

This focus on correctness also leaves out things that are hard to predict. Things that have a very low chance of happening. Things like black swans. You may remember me saying in the last newsletter that:

Because of their impact, the future is almost entirely the product of black swans.

If this is the case, what sorts of predictions are useful? How about a list of catastrophes that probably will happen, along with a list of miracles which probably won’t? Things we should worry about, and also things we can’t look forward to. I first compiled this list back in 2017, with updates in 2018, 2019, and 2020. So if you’re really curious about the specifics of each prediction you can look there. But these are my black swan predictions for the next 100 years:

Artificial Intelligence

  1. General artificial intelligence, something duplicating all of the abilities of an average human (or better), will never be developed.
  2. A complete functional reconstruction of the brain will turn out to be impossible. For example slicing and scanning a brain, or constructing an artificial brain.
  3. Artificial consciousness will never be created. (Difficult to define, but let’s say: We will never have an AI who makes a credible argument for its own free will.)

Transhumanism

  1. Immortality will never be achieved. 
  2. We will never be able to upload our consciousness into a computer. 
  3. No one will ever successfully be returned from the dead using cryonics. 

Outer Space

  1. We will never establish a viable human colony outside the solar system. 
  2. We will never have an extraterrestrial colony of greater than 35,000 people. 
  3. Either we have already made contact with intelligent extraterrestrials, or we never will.

War (I hope I’m wrong about all of these)

  1. Two or more nukes will be exploded in anger within 30 days of one another. 
  2. There will be a war with more deaths than World War II (in absolute numbers, not as a percentage of population).
  3. The number of nations with nuclear weapons will never be fewer than it is right now.

Miscellaneous

  1. There will be a natural disaster somewhere in the world that kills at least a million people.
  2. The US government’s debt will eventually be the source of a gigantic global meltdown.
  3. Five or more of the current OECD countries will cease to exist in their current form.

This list is certainly not exhaustive. I definitely should have put a pandemic on it back in 2017. I was aware, even then, that it was only a matter of time. (I guess if you squint it could be considered a natural disaster…)

To return to the theme of my blog and this newsletter:

The harvest is past, the summer is ended, and we are not saved.

I don’t think we’re going to be saved by black swans, but we could be destroyed by them. If the summer is over, then as they say, “Winter is coming.” Perhaps when we look back, the pandemic will be considered the first snowstorm…


I think I’ve got COVID. I’m leaving immediately after posting this to go get tested. If this news inspires any mercy or pity, consider translating that into a donation.


What “The Expanse” Can Teach Us about Fermi’s Paradox


This post is going to draw fairly extensively from The Expanse series. It contains definite spoilers for anyone who hasn’t made it through book 3 of the series or season 3 of the TV show. Also the post will have some vague allusions to what happens after that. (I have not personally had the chance to watch the TV show much past season 1, so the exact amount I’m spoiling there might be more than I think.) 

You have been warned.

I.

This blog has been fascinated by Fermi’s Paradox since its inception. As such I’m always interested in the explanations science fiction authors create in the course of tackling the paradox in their books. Some explanations are fascinating and thought-provoking; some are implausible and lazy. The explanation given by The Expanse series, by James S. A. Corey, is fortunately one of the former.

We get Corey’s answer at the end of Abaddon’s Gate, the third book in the series. As it turns out there was someone else out there, and they created an empire of over 1,300 planets and knit them together with a network of gates. Earth was supposed to be one of those planets, but the device which would have created the gate (and dramatically hijacked all life on Earth in the process) was captured by Saturn’s gravity and never made it to its final destination.

Eventually people find this device and hilarity ensues. Okay, not really: the device (what the series calls the protomolecule) actually turns people into horrible zombie-like creatures who eventually merge with each other into something even more horrible, which then eventually turns into the “Sol Gate”, humanity’s very own connection to the ring network. You may have noticed earlier that I said that there was someone out there. Well, when the humans travel through the ring they find out that the aliens who built the gates have vanished. Nor is the reason for their disappearance entirely mysterious. It is soon discovered that they were killed off by something even bigger and nastier.

From the perspective of the series the creation of the gate is good and bad. It’s good because now humans have easy access to hundreds of new, habitable worlds. It’s bad because not only do they know that there exists some other awesomely powerful entity—an entity which is horribly, and seemingly blindly malevolent, something like Lovecraft’s description of the elder gods—but they also may have just brought themselves to the attention of this entity.

As I mentioned, this all comes out at the end of book three. The series just barely concluded with book 9 (review coming soon!). So based on this mix of good and bad news, what do you imagine the humans do in the subsequent books? Well, and I think Corey predicts this accurately, they spend all of their time on the bounty of the 1,300+ systems they’ve just discovered, and almost none of it on the giant, horrible elder gods lurking in the shadows. Now to be fair, they’ve got a lot of problems to deal with other than the elder gods. The animosity between Earth, Mars, and the Belters has not gone away just because there’s a bunch of new worlds; in fact, if anything, the discovery has inflamed tensions. But one would still hope that, should we be confronted with this situation in actuality, we would spend more time on the giant, horrible alien problem than the people in the book do. But maybe not.

There is, however, one person in the books who’s different. One person who will stop at nothing to ensure the survival of humanity. This is Winston Duarte. If you have read many books like this, you may have already guessed that he’s the bad guy. Whether this would be so in reality is not the point of this post, and to be clear, in the context of the books he does end up doing some very bad things. No, the point of this post is to imagine what we might do if we were Duarte. If we decided that the problem of the missing aliens was really the biggest problem humanity faces.

Of course to a certain extent there are such people, people who are really interested in identifying and dealing with existential issues, because if we don’t we may not be around to deal with anything else. I’ve reviewed some of their books, for example: Global Catastrophic Risks by Nick Bostrom and Milan Ćirković and The Precipice by Toby Ord. And I will continue to review and read these books. I think they touch on one of the most important subjects people can be thinking about. But while reading the final book of The Expanse I was struck by the similarity between Duarte’s situation and our own. And I wanted to use it as a springboard to revisit the profound implications of Fermi’s Paradox, and how it’s easy to understand those implications when it’s fiction, but far harder when it’s reality.

II.

The insight which prompted me to write this post was the realization that there are a lot of similarities between our position and the position of the humans who have just discovered the gates. There were many, many years when neither of us was even aware of the problem, and then suddenly in their case, and almost as suddenly in our case, we both realized that we had a big problem. Both of us have every reason to believe that there should be aliens out there. And, as it turns out (thus far), the rest of the universe is empty.

Of course there are obviously some differences. To begin with, you may think that our situation is not as bad as the one Duarte is focused on, but I’m not sure that’s the case. He has the advantage of knowing exactly what the problem is: there is some sort of Lovecraftian elder god which eradicates any civilization above a certain level of technology. Of course this is a very big problem, possibly insoluble, but at least he knows where to direct his attention and his energy. And while it is true that nearly everyone else in the books seems to be ignoring the problem, at least they’re aware of it. And when the time comes it doesn’t take much to get them to throw enormous resources at it. On the other hand, most people today aren’t even aware that there is a problem; if they are aware of it they may wonder whether it’s appropriate to even call it a “problem”; and if they grant all of that, there’s still very little agreement on what sort of problem it might be.

To get more concrete, sitting on a shelf in front of me is a book which contains 75 explanations for Fermi’s paradox, and even this collection of 75 explanations doesn’t cover all of the possibilities. Duarte only has to concern himself with one of those explanations: malevolent aliens, and not even malevolent aliens as a general concept, but rather a specific malevolent alien whose existence has already been demonstrated beyond any reasonable doubt. This is not to say that all of the questions posed by the paradox have been answered. For example, did the ring builders really wipe out all other life before being wiped out themselves? But as far as Duarte is concerned the part that matters has been solved, and now he just has to deal with the problems arising from the reality of that solution. And he has lots of options for doing just that. The elder gods might have left clues as to their motivations; there might also be precautions he could take; experiments he could run; or at least data he could collect. 

Duarte doesn’t have to worry about other possible solutions. He doesn’t have to worry that all intelligent aliens destroy themselves in a nuclear war so humans will as well. Or at least he doesn’t have to worry about this nearly as much as we do: humans are now on hundreds of worlds, and have gone hundreds of years without such a war. He doesn’t have to worry about the difficulties intelligent species might encounter in making it off their home planet in the first place. Humans (in The Expanse) have already shown that it can be done as well. Nor does he have to worry about interstellar distances; not only have the gates made this point moot, but even without the gates a major plot point of the first few books is that the Mormons (Go team!) are preparing to leave the solar system in a generational ship. And the list of things he no longer has to worry about goes on and on beyond these examples.

On the other hand, when we contemplate the silent universe we have to consider all 75 solutions, while also being aware that the list might not be exhaustive; we have probably overlooked some of the possibilities, perhaps even the correct one.

Some of the potential solutions to the paradox are better for us than the elder gods of The Expanse. Some are worse. You might take issue with the idea that anything could be worse than implacably hostile, nearly omnipotent super aliens, but I disagree. There’s always some chance that we could avoid, placate, or defeat the other aliens. In fact, the chances of avoiding them seem particularly high, since we already managed to do so for tens of thousands of years. But if we consider the entire universe of possible solutions, there are explanations where our chances of survival are much, much lower. As an example, what if the answer to Fermi’s paradox is something inherent to intelligence, or technological progress, or biological evolution itself? Something that hasn’t merely defeated one set of aliens (as was the case with The Expanse) but has defeated all of the potential aliens. Something which because of this inherency will almost certainly defeat us as well.

Back in 1998 Robin Hanson gave a name to this idea of something that defeats all potential aliens: he called it the Great Filter. This is the idea that there is something which prevents intelligent life from developing and spreading across the galaxy in an obvious fashion. Some hurdle which makes it difficult for life to develop in the first place, or which makes it difficult for life, once developed, to achieve intelligence, or which makes it difficult for intelligent life to become multiplanetary. Since Hanson came up with the idea, people have obviously wondered what that hurdle or filter might be, but more importantly they’ve wondered: is it ahead of us or behind us?

Pulling all of this together, I would say the idea that the Great Filter is ahead of us, and not merely ahead of us, but nearby—a built in consequence of technological progress—is a far scarier solution to the paradox than even the elder gods of The Expanse. The only thing that mitigates the scariness of this solution is the fact that it’s not certain. There is some probability that the true explanation for the paradox is something else. 

It is this uncertainty, and not the magnitude of the catastrophe, which represents the key difference between Duarte’s situation and ours.

III.

This is not the first time this blog has covered potential catastrophes with uncertain probabilities. In fact it might be said to represent the primary theme of the blog. So how do you handle this sort of thing if you’re a real, modern-day Duarte, rather than the fictional one a couple of centuries in the future? How do you proceed if the threat isn’t certain, if there’s no data to collect, no experiments to run, no motivations to probe? Are there at least precautions one could take?

There might be, but most people who do end up focusing on this sort of thing spend far more time trying to assess the probabilities of the various catastrophes, the various solutions to the paradox, than trying to understand and mitigate those catastrophes. And frequently the conclusion they come to is that one can explain the paradox without resorting to catastrophic explanations. It can be explained entirely by the fact that we’re extraordinarily lucky. And I mean EXTRAordinarily lucky. Since I’ve already alluded to Stephen Webb’s book If the Universe Is Teeming with Aliens… Where Is Everybody?: Seventy-Five Solutions to the Fermi Paradox and the Problem of Extraterrestrial Life, we might as well look at the account he gives of our unbelievable luck.

I did a very detailed breakdown of it in a previous post, but in essence it assumes that there are 1 trillion planets in the galaxy, and that out of the trillion places where life could have happened, Earth was the only place where it did:

That we were lucky enough to be on a planet in the galactic habitable zone.

…which also orbits a sun-like star

…in the habitable zone of that same star

…which turned that luck into life

…that this life was lucky enough to avoid being wiped out prematurely

…developing from single-celled to multicellular life

…and not merely multicellular life, but intelligent, tool-using, mathematical life.

In other words we won the lottery, but actually we did better than that. You have a 1 in 300 million chance of winning even a really big lottery, like the Mega Millions. One in a trillion is more than 3,000 times less likely than that.
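
For those checking the math, with the rounded odds above:

$$\frac{1/(3\times 10^{8})}{1/10^{12}} = \frac{10^{12}}{3\times 10^{8}} \approx 3{,}333$$

Winning the Mega Millions is over three thousand times more likely than being the one planet out of a trillion where life happened.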

This explanation, and similar explanations for the paradox, are given the label “Rare Earth”, and I’ll admit that I’m probably not the best person to talk about them, because they strike me as being optimistic to the point of delusion. Similar to the people in The Expanse who look at the gates and only see the hundreds of habitable worlds, not the omnicide of the aliens who built the gates in the first place. Yes, it’s possible that Earth, alone out of the trillion planets in the galaxy, has managed to get past the Great Filter. That some species on some planet was going to get lucky, and it just happened to be us. That, now, as the beneficiaries of this luck, a glorious transhuman future stretches out in front of us, where everything just keeps getting better and better. Certainly this vision is attractive; the question is whether it’s true. Of course it’s impossible to know, but many people have decided to treat it as true anyway. Is this because the body of evidence for this position is overwhelming? Or is it because it’s comforting? My money is on the latter. But we’re not looking for comfort. We’re not interested in the hundreds of habitable worlds. We’re Duarte, and we’re focused on the danger.

This is not to say, in our role as Duarte, that we entirely dismiss the possibility of a Rare Earth explanation. Only that such an explanation is being adequately handled by other people. Duarte doesn’t need to focus on how to speed up the colonization of the newly discovered worlds. Everybody else is doing that. He’s focused on the paradox, and the potential danger. He doesn’t care whether there are a trillion planets in the Milky Way or only 800 billion. He doesn’t worry about knowing the minutiae of astrobiology. He’s just worried about preventing humanity’s extinction, and in that effort, spending all of your time debating probabilities is just a distraction.

Why? Well to begin with, as we’ve seen with people making the Rare Earth argument, people will ignore probabilities when it suits them. And if they were really concerned about assigning probabilities to things, what probability would they assign to the ideas I’m worried about, the ideas I’ve talked about over the course of this blog? For example, the possibility that intelligence inevitably creates the means of its own destruction. Less than 1 in a billion? Less than 1 in a thousand? And yet for reasons of sophistry and comfort they will proudly claim that Fermi’s paradox has been dissolved because we happen to be the result of odds which are much longer than that. 

Second, and even more importantly, assigning such probabilities is difficult to the point of being basically worthless. We have no idea how hard it is for life to arise on an earth-like planet, and still less of an idea how hard it is for that life to progress from its basic form to human-level intelligence. And if, despite these difficulties, we decide that we’re going to persist in trying to assign probabilities, it would seem easier and more productive to try to assign probabilities to the potential catastrophes rather than buttressing our illusion of safety. It’s easier because while we have no other examples of complex life developing, we have plenty of examples of complex civilizations collapsing (for examples see the Fall of Civilizations podcast). And it’s more productive because even if everyone who believes in the Rare Earth explanation is absolutely correct, we could still be in trouble from our own creations.

IV.

If the previous parts have been enough to make you sympathetic to the “Duarte viewpoint”, and you’re ready to move from a discussion of probabilities to a discussion of precautions, then the obvious question is what precautions should we be taking?

Here I must confess that I don’t actually know. Certainly there’s the general admonition to gradualism. Also I think we should be attempting to reduce fragility in general. And to the extent I have advice to give on those topics, I have mostly already given it in other posts. What I was hoping to do in this post was to make the whole situation easier to understand by way of analogizing it to the situation in The Expanse and in that effort there are a couple of points I would still like to draw your attention to.

As I said I’m not sure what precautions we should be taking. But I am sure we have more than enough people focused on “colonizing new worlds” and not nearly enough focused on “scary elder gods”. Additionally we seem unwilling to make many tradeoffs in this area. Lots of people give lip service to the terrible power of the elder gods, but almost no one is willing to divert resources from the colonization project in order to better fight, or even just understand their awful power.

Finally there’s the objection I think most people will have, particularly those who’ve read the books, or who are otherwise familiar with totalitarianism. If we do manage to get more Duartes, isn’t it possible, or even likely, that they will go too far? That the neo-neo-luddites will throw the baby out with the bathwater? If the pandemic has taught us anything, it’s that reasonable people can disagree about how threatening something is, and whether a given response is appropriate for that threat.

Obviously such an extreme outcome is possible, but thus far it isn’t even clear that we’re going to ban gain-of-function research, despite there being at least some chance that it was responsible for the pandemic. If that’s where we currently are on managing the unexpected harms of technological progress, I don’t think we’re in much danger of going too far anytime soon.

I suppose the big takeaway from this post is that we need more Duartes. I suspect that there are a lot of people who read The Expanse and think: those foolish individuals! They’re so focused on colonizing the habitable planets, when really they should be focused on the huge malevolent aliens that wiped out the last civilization. If you are one of the people who comes away with this impression, then you should come away with precisely the same impression when viewing our own situation.


It’s possible that someone out there is wondering what they could get me for Christmas. Well, mostly I want the ability to ruthlessly crush my enemies, just like everyone. But if that seems too difficult to arrange, consider donating.


Catastrophe or Singularity? Neither? Both?


One of the central themes of this blog has been that the modern world is faced with two possible outcomes: societal collapse or technological singularity. For those of you just joining us, who may not know, a technological singularity is some advancement which completely remakes the world. It’s most often used in reference to creating artificial intelligence which is smarter than the smartest human, but it could also be something like discovering immortality. This is the possible future where technology (hopefully) makes everything all right. But it’s not the only possibility; we’re faced with salvation on one hand and disaster on the other.

This dichotomy was in fact the subject of my very first post. And in that post I said:

Which will it be? Will we be saved by a technological singularity or wiped out by a nuclear war? (Perhaps you will argue that there’s no reason why it couldn’t be both. Or maybe instead you prefer to argue that it will be neither. I don’t think both or neither are realistic possibilities, though my reasoning for that conclusion will have to wait for a future post.)

Once again, in my ongoing effort to catch up on past promises, this is that future post. It’s finally time to fulfill the commitment I made at the very beginning and answer the question: why can’t it be both, or neither?

Let’s start with the possibility that we might experience both at the same time. And right off the bat we have to decide what that would even look like. I think the first thing that pops into my head is the movie Elysium, Neill Blomkamp’s follow-up to District 9. In this movie you have a collapsed civilization on the planet’s surface and a civilization in orbit that has experienced, at a minimum, a singularity in terms of space habitation and health (they have machines that can cure all diseases). At first glance this appears to meet the standard of both a collapse and a singularity happening at the same time, and coexisting. That said, it is fiction. And while I don’t think that should immediately render it useless, it is a big strike against it.

As you may recall I wrote previously about people mistaking fiction for history. But for the moment let’s assume that this exact situation could happen. That one possibility for the future is a situation identical to the one in the movie. Even here we have to decide what our core values are before we can definitively declare that this is a situation where both things have occurred. Or more specifically we have to define our terms.

Most people assume that a singularity, when it comes, will impact everyone. I’ve often said that the internet is an example of a “soft” singularity, and indeed one of its defining characteristics is that it has impacted the life of nearly everyone on the planet. Even if fewer than half of all people use the internet, I think it’s safe to assume that even non-users have experienced its effects. Also, since the number of internet users continues to rapidly increase, it could be argued that it’s a singularity which is still spreading. Whereas in Elysium (and other dystopias) there is no spread. Things are static or getting worse, and for whatever reason the singularity is denied to the vast majority of people. (And if I understand the ending of the movie correctly, it’s being denied just out of spite.) Which is to say that if you think that a singularity has to have universal impact, Elysium is not a singularity.

If, on the other hand, you view collapse as a condition where technological progress stops, then Elysium is not a story of collapse. Technological progress has continued to advance. Humanity has left the Earth, and there appears to be nothing special stopping them from going even farther. This is where core values really come into play.

I’ve discussed the idea of core values previously, and when I did, I mentioned a friend of mine whose core value is for intelligence to escape this gravity well. Elysium either qualifies, or is well on its way to qualifying, for this success condition. Which means if you’re my friend, Elysium isn’t a story of collapse, it’s a story of triumph.

You may feel that I’ve been cheating and that what I’m really saying is that collapse and singularity are fundamentally contradictory terms and that’s why you can’t have both. I will admit that there is a certain amount of truth to that, but also as you can see a lot depends on what your “win” condition is. As another example of this, if you’re on the opposite side of the fence and your core values incline you to hope for a deindustrialized, back to nature, future, then one person’s collapse could be your win condition.

You may wonder why I’m harping on a subject of such limited utility, and further using a mediocre movie to illustrate my point. I imagine before we even began that all of you were already on board with the idea that you can’t have both a technological singularity and a societal collapse. I imagine this doesn’t merely apply to readers of this blog, but that most people agree that you can’t have both, despite a talented performance from Matt Damon which attempts to convince them otherwise. But in spite of the obviousness of this conclusion, I still think there’s some fuzzy thinking on the subject.

Allow me to explain. If, as I asserted in my last post, all societies collapse, and if the only hope we have for avoiding collapse is some sort of technological singularity, then we are, as I have said from the very beginning, in a race between the two. Now of course structuring things as a race completely leaves out any possibility of salvation through religion, but this post is primarily directed at people who discount that possibility. If you are one of those people and you agree that it’s a race, then you should either be working on some potential singularity or be spending all of your efforts on reducing the fragility of society, so that someone else has as long as possible to stumble upon the singularity, whatever that ends up being.

I admit that the group I just described isn’t a large group, but it may be larger than you think. As evidence of this I offer up some of the recent articles on Silicon Valley Preppers. Recall that we are looking for people who believe that a collapse is possible but don’t otherwise behave as if we’re in a race in which only one outcome can prevail. In other words, if, like these people, you believe a collapse could happen, you definitely shouldn’t be working on ways to make it more likely by increasing inequality and fomenting division and anger, which seems to have been the primary occupation of most of these wealthy preppers. On top of this they appear to be preparing for something very similar to the scenario portrayed in Elysium.

Tell me if this description doesn’t come pretty close to the mark.

I was greeted by Larry Hall, the C.E.O. of the Survival Condo Project, a fifteen-story luxury apartment complex built in an underground Atlas missile silo….“It’s true relaxation for the ultra-wealthy,” he said. “They can come out here, they know there are armed guards outside. The kids can run around.” …In 2008, he paid three hundred thousand dollars for the silo and finished construction in December, 2012, at a cost of nearly twenty million dollars. He created twelve private apartments: full-floor units were advertised at three million dollars; a half-floor was half the price. He has sold every unit, except one for himself, he said…. In a crisis, his swat-team-style trucks (“the Pit-Bull VX, armored up to fifty-calibre”) will pick up any owner within four hundred miles. Residents with private planes can land in Salina, about thirty miles away.

A remote, guarded luxury enclave where they can wait out the collapse of the planet? This seems pretty on the money, and don’t even get me started on Peter Thiel’s island.

Far be it from me to criticize someone for being prepared for the worst. Though in this particular case, I’m not sure that fleeing to the rich enclave will be as good a tactic as they think. John Michael Greer, who I quote frequently, is fond of pointing out that every time some treasure seeker finds buried gold coins, it’s evidence of a rich prepper from history whose plans failed. Where my criticism rests is the fact that they seem to spend hardly any resources on decreasing the fragility of the society we already have.

Reading these prepper stories you find examples of people from Reddit and Twitch and Facebook. What do any of these endeavors do that makes society less fragile? At best they’re neutral, but an argument could definitely be made that all three of these websites contribute to an increase in divisiveness, and by extension they actually increase the risk of collapse. But, as I already alluded to, beyond their endeavors, they are emblematic of the sort of inequality that appears to be at the heart of much of the current tension.

As a final point if these people don’t believe that a societal collapse and a technological singularity are mutually exclusive, what do they imagine the world will look like when they emerge from their bunkers? I see lots of evidence of how they’re going to keep themselves alive, but how do they plan to keep technology and more importantly, infrastructure alive?

A few years ago I read this fascinating book about the collapse of Rome. From what I gathered, it has become fashionable to de-emphasize the Western Roman Empire as an entity. An entity which ended in 476 when the final emperor was deposed. Instead, these days some people like to view what came after 476 as very similar to what came before, only with a different group of people in charge, but with very little else changing. This book was written to refute that idea, and to re-emphasize the catastrophic nature of the end of Rome. One of the more interesting arguments against the idea of a smooth transition was the quality of pottery after the fall. Essentially, before the fall you had high-quality pottery made in a few locations, which could be found all over the empire. Afterwards you had low-quality, locally made pottery that was lightly fired and therefore especially fragile. A huge difference in quality.

It should go without saying that a future collapse could have very little in common with the collapse of Rome, but if the former Romans couldn’t even maintain the technology for making quality pottery, what makes us think that we’ll be able to preserve multi-billion dollar microchip fabrication plants, or the electrical grid, or even anything made of concrete?

The point is, if there is a collapse, I don’t think it’s going to be anything like the scenario Silicon Valley Preppers have in their head.

And now, for the other half of the post, we finally turn to the more interesting scenario: that we end up with neither. That somehow we avoid the fate of all previous civilizations and don’t collapse, but also, despite having all the time in the world, never manage to create some sort of singularity either.

At first glance I would argue that the “neither” scenario is even more unlikely than the “both” scenario, but this may put me in the minority, which is, I suppose, understandable. People have a hard time imagining any future that isn’t just an extension of the present they already inhabit. People may claim that they can imagine a post-apocalyptic future, but really they’re just replaying scenes from The Road, or Terminator 2 (returning to theaters in 3D this summer!). As an example, take anyone living in Europe in 1906: was there a single person who could have imagined what the next 40 years would bring? The two World Wars? The collapse of so many governments? The atomic bomb? And lest you think I’m only focused on the negative, take any American living in 1976. Could any of them have imagined the next 40 years? Particularly in the realm of electronics and the internet. Which is just to say, as I’ve said so often, predicting the future is hard. People are far more likely to imagine a future very similar to the present, which means no collapses or singularities.

It’s not merely that they dismiss potential singularities because they don’t fit with how they imagine the future; it’s that they aren’t even aware of the possibility of a technological singularity. (This is particularly true for those people living in less developed countries.) Even if they have heard of it, there’s a good chance they’ll dismiss it as a strange technological religion, complete with a prophet, a rapture, and a chosen people. This attitude is not only found among those people with no knowledge of AI; some AI researchers are among its harshest critics. (My own opinion is more nuanced.)

All of this is to say that many people who opt for neither have no concept of a technological singularity, or what it might look like, or what it might do to jobs. Though to adapt my favorite apocryphal quote from Trotsky: you may not be interested in job automation, but job automation is interested in you.

The same lack of information and present-day bias in thinking apply equally well to the other end of the spectrum and the idea of society collapsing, but on top of that you have to add in the optimism bias most humans have. This is the difference between the 1906 Europeans and the 1976 Americans. The former would not be willing to spend any time considering what was actually going to happen, even if you could describe it to them in exact detail, while the latter would happily spend as much time as you could spare listening to you talk about the future.

In other words, most people default to the assumption that neither will happen, not because they have carefully weighed both options, but because they have more pressing things to think about.

As I said at the start, I don’t think it can be neither, and I would put the probability of that well below the probability of an eventual singularity. But that is not to say that I think a singularity is very likely either (if you’ve been reading this blog for any length of time you know that I’m essentially on “Team Collapse”).

My doubts exist in spite of the fact that I know quite a bit about what the expectations are, and the current state of the technology. All of the possible singularities I’ve encountered have significant problems and this is setting aside my previously mentioned religious objection to most of them. To just go through a few of the big ones and give a brief overview:

  • Artificial Intelligence: We obviously already have some reasonably good artificial intelligence, but for it to be a singularity it would have to be generalized, self-improving, smarter than we are, and conscious. I think the last of those is the hardest; even if it turns out that the materialists are totally right (and a lot of very smart, non-religious people think that they aren’t), we’re not even close to solving the problem.
  • Brain uploading: I talked about this in the post I did about Robin Hanson and the MTA conference, but in essence, all of the objections about consciousness are still present here. And as I mentioned there, if we can’t even accurately model a species with 302 neurons (the roundworm C. elegans), how do we ever model or replicate a brain with over 100 billion?
  • Fusion Power: This would be a big deal, big enough to count as a singularity, but not the game changer that some of the other things would be. Also, as I pointed out in a previous post, at a certain point power isn’t the problem if we’re going to keep growing, heat is (see the rough sketch after this list).
  • Extraterrestrial colonies: Perhaps the most realistic of the singularities at least in the short term, but like fusion not as much of a game changer as people would hope. Refer to my previous post for a full breakdown of why this is harder than people think, but in short, unless we can find some place that’s livable and makes a net profit, long-term extraterrestrial colonies are unsustainable.
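
Since the heat point surprises most people, here’s a minimal back-of-the-envelope sketch of it. The numbers are my own rough assumptions (roughly 18 terawatts of current human power use and 2.3% annual growth, a commonly cited historical rate), and the model is just a bare rock radiating per the Stefan-Boltzmann law, with no greenhouse effects:

```python
SIGMA = 5.67e-8              # Stefan-Boltzmann constant, W / (m^2 * K^4)
EARTH_AREA = 5.1e14          # Earth's surface area, m^2
ABSORBED_SUNLIGHT = 1.22e17  # solar power absorbed by Earth, W (~70% of incident)
HUMAN_POWER_NOW = 1.8e13     # current human power use, very roughly 18 TW
GROWTH = 1.023               # assumed 2.3% annual growth in energy use

def effective_temp_kelvin(waste_heat_watts):
    # Temperature at which the planet radiates away sunlight plus our waste heat
    flux = (ABSORBED_SUNLIGHT + waste_heat_watts) / EARTH_AREA
    return (flux / SIGMA) ** 0.25

print(effective_temp_kelvin(0))  # ~255 K: the familiar no-greenhouse baseline

# How long until waste heat alone pushes the effective temperature past boiling?
years = 0
while effective_temp_kelvin(HUMAN_POWER_NOW * GROWTH ** years) < 373.15:
    years += 1
print(years)  # ~445: on the order of four centuries
```

Even with limitless clean energy, a few centuries of business-as-usual growth cooks the planet with waste heat alone. Growth, not generation, is the binding constraint.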

In other words while most people reject the idea of a singularity because they’re not familiar with the concept, even if they were, they might, very reasonably, choose to reject it all the same.

You may think at this point that I’ve painted myself into a corner. For those keeping score at home, I’ve argued against both, I’ve argued against neither, and I’ve argued against a singularity all by itself. (I think they call that a naked singularity. No? That’s something else?) Leaving me with just collapse. If we don’t collapse I’m wrong, and all the people who can neither understand the singularity nor imagine a catastrophe will be vindicated. In other words, I’ve left myself in the position of having to show that civilization is doomed.

I’d like to think I went a long way towards that in my last post, but this time I’d like to approach it from another angle. The previous post pointed out the many ways in which our current civilization is similar to other civilizations that have collapsed. And while those attributes are something to keep an eye on, even if we were doing great, even if there were no comparisons to be drawn between our civilization and previous civilizations in the years before their collapse, there is still a whole host of external black swans, any one of which would be catastrophic.

As we close out the post let’s just examine a half dozen potential catastrophes, every one of which has to be avoided in the coming years:

1- Global Nuclear War: Whether that be Russia vs. the US or whether China’s peaceful rise proves impossible, or whether it’s some new actor.

2- Environmental Collapse: Which could be runaway global warming or it could be a human caused mass extinction, or it could be overpopulation.

3- Energy Issues: Can alternative energy replace carbon based energy? Will the oil run out? Is our energy use going to continue to grow exponentially?

4- Financial Collapse: I previously mentioned the modern world’s high levels of connectivity, which means one financial black swan can bring down the entire system, which almost happened in 2008.

5- Natural disasters: These include everything from super volcanoes, to giant solar storms, to impact by a comet.

6- Plagues: This could be something similar to the Spanish Flu pandemic, or it could be something completely artificial, an act of bioterrorism for example.

Of course this list is by no means exhaustive. Also remember that we don’t merely have to avoid these catastrophes for the next few decades, we have to avoid them forever, particularly if there’s no singularity on the horizon.

Where is the world headed? What should we do? I know I have expressed doubts about the transhumanists, and people like Elon Musk, but at least these individuals are thinking about the future. Most people don’t. They assume tomorrow will be pretty much like today, and that their kids will have a life very similar to theirs. Maybe that’s so, and maybe it’s not, but if singularity or collapse doesn’t happen during the lives of your children, or of their children, it will happen during the lives of someone’s children. And it won’t be both, and it won’t be neither. I hope it’s some kind of wonderful singularity, but we should prepare for it to be a devastating catastrophe.

I repeat what I’ve said from the very beginning. We’re in a race between societal collapse and a technological singularity. And I think collapse is in the lead.


If you’re interested in ways to prevent collapse you should consider donating. It won’t stop the collapse of civilization, but it might stop the collapse of the blog.


The Politics of the Zombie Apocalypse

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


One of my favorite blogs is Slatestarcodex, the blog of Scott Alexander. And yes I would offer the obligatory “check it out if you haven’t already.”

As an example of the high esteem I have for his blog, I've started at the very beginning and I'm reading all the archives. One of his earliest posts has some bearing on the topic we were discussing in my last post, but it's also interesting enough on its own account to be worth reviewing. So I'll start with that and then tie it back to my post. His post is titled A Thrive/Survive Theory of The Political Spectrum, and in it he puts forth his own theory of how to explain the right/left, conservative/liberal divide:

…rightism is what happens when you’re optimizing for surviving an unsafe environment, leftism is what happens when you’re optimized for thriving in a safe environment.

As an example of the rightist/survival mindset he offers the example of a zombie apocalypse. Imagining how you might react to a zombie apocalypse, he feels, is a great way to arrive at most of the things supported by the right/survive side of the political equation. You'd want lots of guns, you'd be very suspicious of outsiders, you'd become very religious (if there are no atheists in foxholes, there are definitely no atheists in foxholes surrounded by zombies), extreme black-and-white thinking would dominate (zombies are not misunderstood, they're evil), etc.

For the leftist/thrive side of the spectrum he offers the example of a future technological utopia:

Robotic factories produce far more wealth than anyone could possibly need. The laws of Nature have been altered to make crime and violence physically impossible (although this technology occasionally suffers glitches). Infinitely loving nurture-bots take over any portions of child-rearing that the parents find boring. And all traumatic events can be wiped from people’s minds, restoring them to a state of bliss. Even death itself has disappeared.

As you can imagine, you'd probably get the exact opposite of the previous scenario. Guns would be nearly non-existent. If you don't have to compete for resources and violence has been eliminated, most of the current objections to foreigners would be gone. Also, based on current trends in the developed world, it seems unlikely that religion would have much of a foothold; nurture-bots would make marriage vestigial, etc.

I find his theory very compelling; it makes as much sense as any of the theories I've come across, and I have no problem granting that it's probably accurate. Which leads us to an examination of the implications of the theory, and this is where I think it gets really interesting.

The first thing to consider is which view of the future is more likely to be accurate. Is it going to be closer to the technological utopia or the zombie apocalypse? I think my own views on this subject are pretty clear. (Though as I mentioned way back in the first post I think we’re more likely to see a gradual catabolic collapse than a Mad Max/Walking Dead scenario.) But I’m also on record as saying that I could very well be wrong. Given that we can’t predict the future, what’s more important is not to try and guess what will happen, to say nothing of trying to plan around those guesses, but rather to choose the course where the penalty for being wrong is the smallest.

In other words, if the world prepares for disaster and instead we end up with robotic factories that produce everything we could possibly need, then it's fine; yes, we wasted some time and resources preparing for disaster, but in light of the eventual abundance it was a small price to pay. But if the world pins its hopes on robotic factories and we end up with roving zombies, then people die, which I understand is much worse than wasting time and money.
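
Since the logic here is just decision theory, it may help to see the wager laid out as a payoff matrix. Here's a minimal sketch in Python, with utilities I've invented purely to preserve the ordering described above (people dying is catastrophically worse than wasted preparation, which is only slightly worse than pure abundance):

```python
# The wager above as a toy decision matrix. The exact numbers are made up;
# only their ordering matters (people dying << wasted prep < pure abundance).
utility = {
    ("prepare", "utopia"): 90,        # abundance, minus some wasted prep
    ("prepare", "apocalypse"): 20,    # hard times, but survivable
    ("ignore", "utopia"): 100,        # pure abundance
    ("ignore", "apocalypse"): -1000,  # people die
}

def worst_case(action):
    # The payoff if the future breaks as badly as possible for this choice.
    return min(utility[(action, world)] for world in ("utopia", "apocalypse"))

print(max(("prepare", "ignore"), key=worst_case))  # prepare
```

This is just the maximin rule: pick the action whose worst case is least bad. Quarrel with the numbers all you like; as long as "people die" dwarfs "wasted some time and money," preparing wins.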

Of course one might immediately make the argument that by preparing for disaster we could slow down or actually prevent the technological utopia. Obviously that argument is not easy to dismiss, particularly since, generally, planning for A makes it harder to accomplish B. This is especially true if B is the opposite of A. Thus, on its face that argument would appear to be compelling. But let’s look at how things are actually playing out.

If we want robotic factories then we need to spend resources inventing them. More generally, the best way to guarantee the technological utopia is to put as many resources as we can into innovation. So how are our resources allocated? According to this chart, 41% of US GDP goes to the government, not the first place that comes to mind when you hear the word innovation. It's still possible that some innovation might emerge from that spending, but if it does it will most likely come from military spending, the area leftists would most like to cut. I would argue that innovation is least likely to come from entitlement spending, the area leftists are most desirous to expand. In other words, at first glance the people planning on the utopian future may, paradoxically, be the people least likely to bring it about.

Of course there's still the remaining 59% of the economy. It's certainly conceivable that leftists could be so much better at encouraging innovation in that area of the economy that it makes up for whatever distortions they bring to the percent of GDP consumed by the government. On this count I see evidence going both ways. I think the generally laissez-faire attitude of rightists is much better for encouraging innovation. On the other hand, the hub of modern innovation is San Francisco, a notoriously leftist city. On the gripping hand, you have things like Uber not being able to operate in SF because of regulations. Personally I would again say that rightists are better at encouraging innovation than leftists, but best case scenario I have a hard time seeing it as anything other than a wash. Also, as our affluence increases, the percentage of GDP that goes to government also increases, which takes us back to the first argument.

Remember, in the end we don't even need to show that rightists are better at innovation, just that their focus on survival doesn't fatally injure the prospects of the technological utopia, and I don't see any compelling evidence that it does.

Having progressed this far, we have the survive/rightist side of the aisle being great as a just-in-case measure, one which doesn't slow down the thrive/leftist side and may actually speed it up. In fact, at this point you may think that Alexander obviously created the post as a defense of rightism, and many of the commenters on his blog felt the same way, but that was not the case. Here's his response:

…this post was not intended to sell Reaction [rightism/survive]. If anything, it was about how it was adapted for conditions that no longer exist. If you’re in a stable society without zombies, optimizing your life for zombie defense is a waste of time; working towards not-immediately-survival-related but nice and beautiful and enjoyable things like the environment and equality and knowledge-for-knowledge’s sake may be an excellent choice.

Does he have a point? Is the survive mindset a relic of the past which now just represents a waste of time and resources? This is where we return to my last post. If you haven't read it, here's the 30 second summary: some smart, concerned people wanted poor countries to use opiates like morphine to ease the pain of the dying. The poor countries refused. Instead it was the rich countries who started using opiates, leading to the deaths of an additional 100,000 people, just in the US, from prescription opiate overdoses.

This is a great example of the thrive/survive dichotomy. In typical survive fashion, the poor countries were not worried about easing the pain of people who were effectively already dead; they were a lot more worried about addiction and overdose among the young, healthy population. Whereas in typical thrive, we-shouldn't-have-to-worry-about-anything fashion, the rich world prescribed opiates like candy. In our post-scarcity world, why should anyone have to worry about pain? But as it turned out, despite living in what is arguably already a technological utopia (I mean, have you seen this thing called the internet?!?), heroin is still really addictive. And using technology to switch a few molecules around and slap a time-release coating on it (and call it OxyContin) didn't make as much of a difference as people hoped.

This should certainly not be taken as sufficient evidence to say that “survive” is superior (though I think that’s where we’re headed) but it should at least serve as sufficient evidence to refute the idea that the conditions where the survive mindset is beneficial “no longer exist.”

So we have 100,000 people, at least, who wish the needle had been a little bit more on the survive end of the dial and a little bit less on the thrive end. With a number like that, one starts to wonder why we even have people who are optimized for thrive. Well, just like everything, it goes back to evolution. Of course, anytime you start putting forth an evolutionary explanation for things you're in danger of constructing a just-so story, though this particular theory does have some evidence behind it. Here Alexander and I are once again largely in agreement, so I'll pass it back to him:

Developmental psychology has gradually been moving towards a paradigm where our biology actively seeks out information about our environment and then toggles between different modes based on what it finds. Probably the most talked-about example of this paradigm is the thrifty phenotype idea, devised to explain the observation that children starved in the womb will grow up to become obese

Coincidentally, I came across another example of this just the other day. My research began when I came across an article indicating that Dawkins's theory of the Selfish Gene had fallen out of favor, and I wanted to know why. As it turns out, this paradigm of phenotypical toggling was a big reason. The example given by this article on the problems with the Selfish Gene concerned grasshoppers and locusts. What people didn't realize until very recently is that grasshoppers and locusts are the same species; grasshoppers turn into locusts when a switch is flipped by environmental cues. Continuing with Alexander:

It seems broadly plausible that there could be one of these switches for something like “social stability”. If the brain finds itself in a stable environment where everything is abundant, it sort of lowers the mental threat level and concludes that everything will always be okay and its job is to enjoy itself and win signaling games. If it finds itself in an environment of scarcity, it will raise the mental threat level and set its job to “survive at any cost”.

In other words, humans switch to thrive when things are going well because it works better, and when things aren't going well they switch to survive because that works better. Of course the immediate question is: what does it mean for something to "work better"? Since we're talking about evolution, working better means reproductive success, or having more offspring. The fact that the people most associated with the thrive side of things have the fewest children is something that seems like a big flashing neon sign, one which makes me want to switch to a completely separate topic, but I'm going to resist.

Also, if we're talking in terms of an evolutionary response, the thrive side of things has to have been a potential strategy for a long, long time. It can't have been something that developed in the last 100 years, or even the last 500 years. We're talking about something that's been around for probably tens of thousands of years. Thus, any theory about its benefits would have to encompass a prehistorical reason for the thrive switch to exist.

As I warned earlier, discussions like this are apt to look like just-so stories, so if even the hint of ad hoc reasoning bothers you, you should skip the next five paragraphs.

Obviously one category of people who might benefit from the thrive switch would be whoever ends up in the ruling class. You might think that's too small a category to deserve its own evolutionary switch, but I direct your attention to the fact that 1 in every 200 men is a descendant of Genghis Khan, and to the related finding that there were more mothers than fathers in the past, indicating strong polygyny, almost certainly concentrated in the ruling class. What this implies is that even if something is only triggered a small amount of the time, it could have a disproportionate evolutionary effect. Sure, you might only be on top of the heap a short time, perhaps only a few generations, but a switch to take advantage of that could have an enormous long-term effect.

If we’re willing to grant that the thrive switch was largely designed to take advantage of your time on top, and we’re willing to see where speculation might take us (you were warned) it generates some interesting ideas.

First, it definitely explains the promiscuity. It explains the hedonism. It explains the enormous focus on jockeying for status and signalling games. But so far I haven't departed that much from Alexander's position. What if I told you it also explains microaggressions?

The concept of microaggressions has been much discussed over the last few years. Most people view it as a new and disturbing trend, but microaggressions have been around forever; up until now they were just restricted to royalty. In dealing with royalty you have to be careful not to give the slightest hint of offense, to use exactly the right words when addressing them. Can anyone look at this chart explaining the proper form of address for royalty and tell me it's not the most elaborate system ever devised for avoiding microaggressions? Is the rising objection to microaggressions an unavoidable consequence of the increasing dominance of the thrive paradigm?

Okay, perhaps that's a stretch. With speculation and just-so-story time over, we'll return to firmer ground.

Much of what we understand about the kind of evolutionary switching we're talking about comes from game theory, and of course the classic example of game theory is the prisoner's dilemma. The iterated prisoner's dilemma is often used as a proxy for group dynamics and evolution. In this setting the strategy that works best is tit-for-tat, but game theory also tells us that occasionally, particularly in the short term, it can be advantageous to defect. Could the thrive switch be just this? That when the rewards for defecting reach a certain level, the switch flips and the individual defects? The exact nature of the defection (and the abandoned cooperation) is not entirely clear to me, but we are still talking about a certain payoff leading to a switch in strategy. And you don't have to be a hard-core libertarian to think that the baron in his castle has a more predatory relationship with the peasant than the peasant has with another peasant.
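
To make that payoff-triggered switch concrete, here's a minimal sketch of an iterated prisoner's dilemma in Python. The payoff matrix is the standard one; the rule that flips an agent to defection once its accumulated payoff (its "abundance") crosses a threshold is my own illustrative assumption, not anything from Alexander or from the game theory literature:

```python
# Standard prisoner's dilemma payoffs:
# (my move, their move) -> (my score, their score).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(their_history):
    # Cooperate on the first round, then mirror the opponent's last move.
    return "C" if not their_history else their_history[-1]

def play(rounds=20, switch_at=30):
    """A plays pure tit-for-tat. B plays tit-for-tat until its accumulated
    payoff reaches switch_at -- then the hypothesized switch flips and it
    defects from that point on."""
    a_hist, b_hist, a_score, b_score = [], [], 0, 0
    for _ in range(rounds):
        a_move = tit_for_tat(b_hist)
        b_move = "D" if b_score >= switch_at else tit_for_tat(a_hist)
        pa, pb = PAYOFFS[(a_move, b_move)]
        a_hist.append(a_move)
        b_hist.append(b_move)
        a_score, b_score = a_score + pa, b_score + pb
    return a_score, b_score

print(play())  # (39, 44): one exploitative round, then mutual defection
```

Run it and you get the baron's bargain in miniature: the agent that flips pockets one fat round of exploitation, after which cooperation is gone and everyone earns less per round than before.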

I admit that I am once again speculating to a large degree, but this speculation proceeds from some reasonable assumptions. Assumption one: the thrive switch works in conjunction with the survive switch; there's a reason grasshoppers aren't locusts 100% of the time. Assumption two: this symbiotic relationship has not gone away (see the previous point about opiates). Assumption three: there are unseen reasons for the historical equilibrium between the two modes. In other words, one could certainly imagine that the thrive strategy relies on having a certain level of surrounding survive. That, evolutionarily speaking, a society that's 20% thrive and 80% survive works great, but a society in which those numbers are reversed works horribly, or is in any case much more fragile than the society which is only 20% thrive.
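
And here is a deliberately crude toy of that last claim. Every number in it (the output of a survive-type, the consumption of a thrive-type, the size of the shock) is invented purely to show the shape of the argument, not to model anything real:

```python
# Toy fragility check for the 20/80 intuition. All constants are made up.
def society_survives(thrive_fraction, shock=0.0):
    """Survive-types each produce 2.0 units and consume 1.0; thrive-types
    produce 1.0 (signaling games aren't free) and consume 1.2. A shock cuts
    everyone's production. The society holds together so long as production
    still covers consumption."""
    survive_fraction = 1.0 - thrive_fraction
    production = (survive_fraction * 2.0 + thrive_fraction * 1.0) * (1.0 - shock)
    consumption = survive_fraction * 1.0 + thrive_fraction * 1.2
    return production >= consumption

print(society_survives(0.8))             # True:  80% thrive is fine in good times
print(society_survives(0.2, shock=0.4))  # True:  20% thrive rides out the shock
print(society_survives(0.8, shock=0.4))  # False: 80% thrive collapses under it
```

Both mixes look equally healthy in good times; the difference only shows up when production takes a hit, which is exactly what "much more fragile" means.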

How might we test this? What would count as evidence of an imbalance between the survive and thrive portions of society? What would count as evidence of the imbalance being dangerous? I can think of a few things:

-College: This area could provide a blog post or three all on its own. As Alexander says, if you're in thrive mode then pursuing "knowledge-for-knowledge's sake may be an excellent choice." But there's definitely a strong case to be made that we've reached a point where too many people go to college. And even if you agree with the general benefit of college and want it spread as widely as possible, you can still probably agree that too many people take on too much debt to get degrees in fields with very little economic benefit. If that's not evidence of a thrive imbalance, then I think you have to invalidate the entire construct.

-Debt: I'm reminded of an exchange in Anna Karenina in which one of the main characters complains of being in debt. The nobles he's with ask how much, and when he answers twenty thousand roubles, they all laugh at him because it's so small. One of the nobles is five million roubles in debt on a salary of twenty thousand a year. This, to me, encapsulates the idea that debt was traditionally only available to the wealthy. But today we have a staggering amount of debt at all levels. I was just reading in The Economist that the unfunded pension liability in 20 OECD countries is $78 trillion. That's an amount that takes a minute to sink in, but for context, $78 trillion is roughly the entire world's GDP for a year. Now maybe Krugman and Yglesias and Keynes are all correct and government debt (even $78 trillion of it) is no big deal, but what about consumer debt, and student debt, and corporate debt? Is it all no big deal?

-Virtue Signalling: I mentioned signalling games earlier, and you may still be unclear on what those actually are. Well as Alexander explains:

When people are no longer constrained by reality, they spend most of their energy in signaling games. This is why rich people build ever-bigger yachts and fret over the parties they throw and who got invited where. It’s why heirs and heiresses so often become patrons of the art, or donors to major charities. Once you’ve got enough money, the next thing you need is status, and signaling is the way to get it.

So the people of this final utopia will be obsessed with looking good. They will become moralists, and try to prove themselves more virtuous than their neighbors.

In a virtue signalling arms race it becomes harder and harder to establish that you are truly the most virtuous, and as a result virtue gets sliced into smaller and smaller parts. If three genders (male, female, and other) is virtuous, surely seven is more virtuous, thirty-one still more virtuous, and fifty-one the most virtuous of all (until someone comes along with their list of sixty-three or, not to be outdone, seventy-one). Is this evidence of a thrive/survive imbalance? It sure looks like one, and of course this is also just one example. Is it evidence of the imbalance being dangerous? That I'm less sure about; I guess it depends on how far the arms race goes. I have a hard time imagining that we will eventually reach the point where murdering the transphobic is considered more virtuous than yelling at them, but honestly I never imagined we'd get as far as we have already.

Whether you accept these three points as evidence of a dangerous imbalance will largely depend on how closely your own biases and prejudices match mine. I’m certainly not the only one who thinks that worthless college degrees, massive debt, and the virtue arms race are problems. I just may be the only one who has tried to tie them to a single cause.

Since this is technically an LDS blog (though I've hidden it very well the last couple of posts) you might constructively wonder what the Church's stance on all of this is. The Church would strenuously object to an accusation that everyone in it is a Republican (particularly in light of the current candidate), and would probably also object (albeit perhaps less strenuously) to being labeled a right-wing organization. But with its emphasis on food storage, avoiding debt, chastity, and family, would the Church, or anyone else, object to its being labeled a "survive" organization?


We Are Not Saved

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


The harvest is past, the summer is ended, and we are not saved.

Jeremiah 8:20

When I was a boy, I couldn't imagine anything beyond the year 2000. I'm not sure how much of that had to do with the supposed importance of the beginning of a new millennium, how much was just due to the difficulty of extrapolation in general, and how much was due to my religious upbringing. (Let's get that out of the way right up front: yes, I am LDS/Mormon.)

It's 2016, and we're obviously well past the year 2000, 16 years into the future I couldn't imagine. For me, at least, it definitely is The Future, and any talk about living in the future is almost always followed by an observation that we were promised flying cars and spaceships and colonies on the moon, which is then followed by the obligatory lament that none of these promises have materialized. Of course moon colonies and flying cars are all promises made when I was a boy. Now we have a new set of promises: artificial intelligence, fusion reactors, and an end to aging, to name just a few. One might ask why the new promises are any more likely to be realized than the old promises. And here we see the first hint of the theme of this blog. But before we dive into that, I need to lay a little more groundwork.

I have already mentioned my religious beliefs, and these will be a major part of this blog (though in a different way than you might expect.) In addition to that I will also be drawing heavily from the writings of Nassim Nicholas Taleb. Taleb’s best known book is The Black Swan. For Taleb a black swan is something which is hard to predict and has a massive impact. Black swans can come in two forms: positive and negative. A positive black swan might be investing in a startup that later ends up being worth a billion dollars. A negative black swan, on the other hand, might be something like a war. Of course there are thousands of potential black swans of both types, and as Taleb says, “A Black Swan for the turkey is not a Black Swan for the butcher.”

The things I mentioned above, AI, fusion and immortality, are all expected to be positive black swans, though, of course, it’s impossible to be certain. Some very distinguished people have warned that artificial intelligence could mean the end of humanity. But for the moment we’re going to assume that they all represent positive black swans.

In addition to being positive black swans, these advancements could also be viewed as technological singularities. Here I use the term a bit more broadly than is common. Generally when people talk about the singularity they are using the term with respect to artificial intelligence, but as originally used (back in 1958) the singularity referred to technology progressing to a point where human affairs would be unrecognizable. In other words, these developments will have such a big impact that we can't imagine what life is like afterwards. AI, fusion, and immortality all fall into this category, but they are by no means the only technologies that could create a singularity. I would argue that the internet is an excellent example of a singularity. Certainly people saw it coming, and some of them even correctly predicted some aspects of it (just as, if we ever achieve AI, there will no doubt be some predictions which prove true). But no one predicted anything like Facebook or other social media sites, and those sites have ended up overshadowing the rest of the internet. My favorite observation about the internet illustrates the point:

If someone from the 1950s suddenly appeared today, what would be the most difficult thing to explain to them about life today?

I possess a device, in my pocket, that is capable of accessing the entirety of information known to man.

I use it to look at pictures of cats and get in arguments with strangers.

Everything I have said so far deserves, and will eventually get, a deeper examination; what I'm aiming for now is just the basic idea that one possibility for the future is a technological singularity: something which would change the world in ways we can't imagine, and, if proponents are to be believed, change it for the better.

If, on the one hand, we have the possibility of positive black swans, technological singularities, and utopias, is there also the possibility of negative black swans, technological disasters, and dystopias on the other? Of course that's a possibility. We could be struck by a comet, annihilate each other in a nuclear war, or end up decimated by disease.

Which will it be? Will we be saved by a technological singularity or wiped out by a nuclear war? (Perhaps you will argue that there’s no reason why it couldn’t be both. Or maybe instead you prefer to argue that it will be neither. I don’t think both or neither are realistic possibilities, though my reasoning for that conclusion will have to wait for a future post.)

It's The Future, and two paths lie ahead of us: the singularity or the apocalypse. This blog will argue for apocalypse. Many people have already stopped reading, or are prepared to dismiss everything I've said, because I have already mentioned that I'm Mormon. Obviously this informs my philosophy and worldview, but I will not use "because it says so in the Book of Mormon" as a step in any of my arguments, which is not to say that you will agree with my conclusions. In fact I expect this blog to be fairly controversial. The original Jeremiah had a pretty rough time, but it wasn't his job to be popular; it was his job to warn of the impending Babylonian captivity.

I am not a prophet like Jeremiah, and I am not warning against any specific calamity. While I consider myself a disciple of Jesus Christ, as I have already mentioned, this blog will be at least as much informed by my being a disciple of Taleb. As such, I am not willing to make any specific predictions, except to say that negative black swans are on the horizon. That much I know. And if I'm wrong? One of the themes of this blog will be that if you choose to prepare for the calamities and they do not happen, then you haven't lost much, but if you are not prepared and calamities do occur, then you might very well lose everything. As Taleb says in one of my favorite quotes:

If you have extra cash in the bank (in addition to stockpiles of tradable goods such as cans of Spam and hummus and gold bars in the basement), you don’t need to know with precision which event will cause potential difficulties. It could be a war, a revolution, an earthquake, a recession, an epidemic, a terrorist attack, the secession of the state of New Jersey, anything—you do not need to predict much, unlike those who are in the opposite situation, namely, in debt. Those, because of their fragility, need to predict with more, a lot more, accuracy.

I have already mentioned Taleb as a major influence. To that I will add John Michael Greer, the archdruid. He joins me (or rather I join him) in predicting the apocalypse, but he does not expect things to suddenly transition from where we are to a Mad Max-style wasteland (which, interestingly enough, is the title of the next movie). Rather, he puts forward the idea of a catabolic collapse. The term catabolism broadly refers to a metabolic condition in which the body starts consuming itself to stay alive. Applied to a civilization, the idea is that as a civilization matures it gets to the point where it spends more than it "makes," and eventually the only way to support that spending is to start selling off or cannibalizing assets. In other words, along with Greer, I do not think that civilization will be wiped out in one fell swoop by an unconstrained exchange of nukes (and if it is, then nothing will matter). I think it will be a slow decline, broken up by a series of mini collapses.
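
For the mechanically minded, here's a minimal Python sketch of that dynamic, with all constants invented for illustration. This crude version produces one long boom and decline rather than Greer's stair-step of mini collapses, but the central move, cannibalizing capital to cover a maintenance gap once the resource base thins out, is the same:

```python
# Toy catabolic collapse: output requires both capital and a depleting
# resource base. While there's a surplus, capital grows; once maintenance
# outruns extraction, assets get sold off to cover the gap.
def catabolic_collapse(capital=100.0, resource=1000.0, years=100):
    history = []
    for year in range(years):
        extraction = min(0.12 * capital, resource)  # can't extract what isn't there
        resource -= extraction                      # the resource base depletes
        maintenance = 0.10 * capital                # upkeep scales with what you've built
        surplus = extraction - maintenance
        # Deficits bite harder than surpluses help: covering a shortfall
        # means selling assets at fire-sale prices.
        capital += surplus if surplus > 0 else 1.5 * surplus
        history.append(round(max(capital, 0.0), 1))
    return history

print(catabolic_collapse()[::10])  # sampled every 10 years: rise, peak, long slide
```

Note the shape: the society looks like it's thriving right up until extraction can no longer keep pace, and even then the decline takes decades, because a large capital stock can be cannibalized for a long time.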

All of this will be discussed in due time; suffice it to say that despite the religious overtones, when I talk about the apocalypse you should not be visualizing The Walking Dead, The Road, or even Left Behind. But the things I discuss may nevertheless seem pretty apocalyptic. Earlier this week I stayed up late watching the Brexit vote come in. In the aftermath, people are using words like terrifying, bombshell, and flipping out, and furthermore talking about a global recession, all in response to the vote to Leave. If people are that scared about Britain leaving the EU, I think we're in for a lot of apocalypses.

You may be wondering how this is different from any other doom and gloom blog, and here, at last, we return to the scripture I started with, which gives us the title and theme of the blog. Alongside all of the other religions of the world, including my own, there is a religion of progress, and indeed progress over the last several centuries has been remarkable.

These many years of progress represent the summer of civilization. And out of that summer we have assembled a truly staggering harvest. We have conquered diseases, split the atom, invented the integrated circuit, and been to the moon. But if you look closely you will realize that our harvest is basically at an end. And despite the fantastic wealth we have accumulated, we are not saved. Yet in contemplating this harvest it is easier than ever before to see why we need to be saved. We understand the vastness of the universe, the potential of technology, and the promise of the eternities. The fact that we are not wise enough to grasp any of it makes our pain all the more acute.

And this is the difference between this blog and other doom and gloom blogs. Another blog may talk about the inevitable collapse of the United States because of the national debt, or runaway global warming, or cultural tension. Someone with faith in continued scientific progress may ignore all of that, assuming that once we're able to upload our brains into a computer none of it will matter. Thus, anyone who talks about potential scenarios of doom without also talking about potential advances and singularities is only addressing half of the issue. In other words, you cannot talk about civilizational collapse without talking about why technology and progress cannot prevent it. They are opposite sides of the same coin.

That’s the core focus, but this blog will range over all manner of subjects including but not limited to:

  • Fermi’s Paradox
  • Roman History
  • Antifragility
  • Environmental Collapse
  • Philosophy
  • Current Politics
  • Book Reviews
  • War and conflict
  • Science Fiction
  • Religion
  • Artificial Intelligence
  • Mormon apologetics

As in the time of Jeremiah, disaster, cataclysms and destruction lurk on the horizon, and it becometh every man who hath been warned to warn his neighbor.

The harvest is past, the summer is ended, and we are not saved.