Tag: taleb

Eschatologist #11: Black Swans


February 2020, the last month of normalcy, probably feels like a long time ago. I spent the last week of it in New York City, which was already ground zero for the pandemic—though no one knew that yet. I was there to attend the Real World Risk Institute, a week-long course put on by Nassim Taleb, who’s best known as the author of The Black Swan. The coincidence of learning more about black swans while a very large one was already in progress is not lost on me.

(Curiously enough, this is not the first time I was in New York right before a black swan. I also happened to be there a couple of weeks before 9/11.)

Before we go any further, for any who might be unfamiliar with the term, a black swan is an unpredictable, rare event with extreme consequences. And one of the things I was surprised to learn while at the institute is that Taleb, despite inventing the term, has grown to dislike it. There are a couple of reasons for this. First, people apply it to things which aren’t really black swans, to things which could have been foreseen. The pandemic is actually a pretty good example of this. Experts had been warning about the inevitability of one for decades. We had one in 1918, and beyond that several recent near misses with SARS, MERS, and Ebola. And that was just in the last couple of decades. If all this is the case, why am I still calling it a black swan?

First off, even if the danger of a pandemic was fairly well known, the second-order effects have given us a whole flock of black swans: supply chain shocks, teleworking, housing craziness, inflation, labor shortages, and widespread civil unrest, to name just a few. This is the primary reason, but on top of that I think Taleb is being a little bit dogmatic with this objection. (I.e., it’s hard to think of what phrase other than “black swan” better describes the pandemic.)

However, when it comes to his second objection I am entirely in agreement with him. People use the term as an excuse. “It was a black swan. How could we possibly have prepared?!?” And herein lies the problem, and the culmination of everything I’ve been saying since the beginning, but particularly over the last four months.

Accordingly, saying “How could we possibly have prepared?” is not only a massive abdication of responsibility, it’s also an equally massive misunderstanding of the moment. Because preparedness has no meaning if it’s not directed towards preparing for black swans. There is nothing else worth preparing for.

You may be wondering, particularly since black swans are unpredictable, how one is supposed to do that. The answer is less fragility, and ideally antifragility, but a full exploration of what that means will have to wait for another time. Though I’ve already touched on how religion helps create both of these at the level of individuals and families. But what about levels above that?

This is where I am the most concerned, and where the excuse, “It was a black swan! Nothing could be done!” has caused the greatest damage. In a society driven by markets, corporations have great ability to both help and harm by the risks they take. We’re seeing some of these harms right now. We saw even more during the 2007-2008 financial crisis. When these harms occur, it’s becoming more common to use this excuse: that it could not be foreseen, that it could not be prevented.

If corporations suffered the effects of their lack of foresight, that would be one thing. But increasingly governments provide a backstop against such calamities. In the process they absorb at least some of the risk, making the government itself more susceptible to future, bigger black swans. And if that happens, we have no backstop.

Someday a black swan will either end the world, or save it. Let’s hope it’s the latter.


One thing you might not realize is that donations happen to also be black swans. They’re rare (but becoming more common) and enormously consequential. If you want to feel what it’s like to have that sort of power, consider trying it out. 


Eschatologist #10: Mediocristan and Extremistan


Last time we talked about mistakenly finding patterns in randomness—patterns that are then erroneously extrapolated into predictions. This time we’re going to talk about yet another mistake people make when dealing with randomness, confusing the extreme with the normal.

When I use the term “normal” you may be thinking I’m using it in a general sense, but in the realm of randomness “normal” has a very specific meaning, i.e., a normal distribution. This is the classic bell curve: a large hump in the center and thin tails to either side. In general, occurrences in the natural world fall on this curve. The classic example is height: people cluster around the average (5’9” for men and 5’4” for women, at least in the US), and as you get farther away from average—say men who are either 6’7” or 4’11”—you find far fewer examples.

Up until relatively recently, most of the things humans encountered followed this distribution. If your herd of cows normally produced 20 calves in a year, then on a good year the herd might produce 30 and on a bad year they might produce 10. The same might be said of the bushels of grain that were harvested or the amount of rain that fell. 

These limits were particularly relevant when talking about the upper end of the distribution. Disaster might cause you to end up with no calves, no harvest, or not enough rain. But there was no scenario where you would go from 20 calves one year to 2000 the next. And on an annualized basis even rainfall is unlikely to change very much. Phoenix is not going to suddenly become Portland even if they do get the occasional flash flood.

Throughout our history normal distributions have been so common that we often fall into the trap of assuming that everything follows this distribution, but randomness can definitely appear in other forms. The most common of these is the power law, and the most common example of a power law is the Pareto distribution, one manifestation of which is the 80/20 rule. This originally took the form of observing that 20% of the people hold 80% of the wealth. But you can also see it in things like software, where 20% of the features often account for 80% of the usage.
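
To make the difference concrete, here’s a minimal sketch in Python comparing a sample from a normal distribution to a sample from a Pareto distribution. It’s purely illustrative, and the specific parameters (a shape of roughly 1.16 approximates the 80/20 rule, and the dollar scale is arbitrary) are my own assumptions rather than anything from the newsletter:

```python
# Illustrative comparison of a "normal" quantity (heights) with a power-law
# quantity (wealth). Parameters are assumptions chosen for the example.
import numpy as np

rng = np.random.default_rng(42)

# Mediocristan: heights in inches, clustered tightly around the average.
heights = rng.normal(loc=69, scale=3, size=10_000)

# Extremistan: wealth from a Pareto distribution. A shape parameter of ~1.16
# roughly reproduces the 80/20 rule; the $50k scale is arbitrary.
wealth = (rng.pareto(a=1.16, size=10_000) + 1) * 50_000

for name, sample in [("height", heights), ("wealth", wealth)]:
    top_1_pct_share = np.sort(sample)[-100:].sum() / sample.sum()
    print(f"{name}: top 1% holds {top_1_pct_share:.0%} of the total; "
          f"the max is {sample.max() / sample.mean():.1f}x the mean")
```

The tallest people barely move the height total, while a handful of draws dominate the wealth total, which is the whole difference between the two “countries.”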

I’ve been drawing on the work of Nassim Taleb a lot in these newsletters, and in order to visualize the difference between these two distributions he came up with the terms Mediocristan and Extremistan. He points out that while most people think they live in Mediocristan, because that’s where humanity has spent most of its time, the modern world has gradually been turning more and more into Extremistan. This has numerous consequences, one of the biggest of which concerns prediction.

In Mediocristan one data point is never going to destroy the curve. If you end up at a party with a hundred people and you toss out the estimate that the average height of all the men is 5’9”, you’re unlikely to be wrong by more than a couple of inches in either direction. And even if an NBA player walks through the door it’s only going to throw things off by half an inch. But if you’re estimating the average wealth, things get a lot more complicated. Even if you were to collect all the data necessary to have the exact number, the appearance of the fashionably late Bill Gates will completely blow that up, taking an average wealth of $1 million pre-Bill Gates to well over $1 billion after he shows up.
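
If you want to check that arithmetic, here’s a quick back-of-the-envelope version of the party example. The figure I’ve plugged in for Gates’s net worth (around $130 billion) is my own round-number assumption, not something from the original post:

```python
# 100 guests with an average wealth of $1 million, then one late arrival.
guests = [1_000_000] * 100
print(f"average before: ${sum(guests) / len(guests):,.0f}")   # $1,000,000

guests.append(130_000_000_000)  # assumed net worth of the late arrival
print(f"average after:  ${sum(guests) / len(guests):,.0f}")   # ~$1.29 billion
```

One data point moves the average by three orders of magnitude, which is exactly what never happens with height.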

Extreme outliers like this can either be very good or very bad. If Gates shows up and you’re trying to collect money to pay the caterers, it’s good. If Gates shows up and it’s an auction where you’re both bidding on the same thing, it’s bad. But where such outliers really screw things up is when you’re trying to prepare for future risk, particularly if you’re using the tools of Mediocristan to prepare for the disasters of Extremistan. Disasters which we’ll get to next time…


As it turns out, blogging is definitely in Extremistan. Only in this case you’re probably looking at 5% of the bloggers getting 95% of the traffic. As someone who’s in the 95% of bloggers who get 5% of the traffic, I really appreciate each and every reader. If you want to help me get into that 5%, consider donating.


Eschatologist #9: Randomness


Over the last couple of newsletters we’ve been talking about how to deal with an unpredictable and dangerous future. To put a more general label on things, we’ve been talking about how to deal with randomness. We started things off by looking at the most extreme random outcome imaginable: humanity’s extinction. Then I took a brief detour into a discussion of why I believe that religion is a great way to manage randomness and uncertainty. Having laid the foundation for why you should prepare yourself for randomness, in this newsletter I want to take a step back and examine it in a more abstract form.

The first thing to understand about randomness is that it frequently doesn’t look random. Our brain wants to find patterns, and it will find them even in random noise. An example:

The famous biologist Stephen Jay Gould was touring the Waitomo glowworm caves in New Zealand. When he looked up he realized that the glowworms made the ceiling look like the night sky, except… there were no constellations. Gould realized that this was because the patterns required for constellations only happen in a random distribution (which is how the stars are distributed), but the glowworms actually weren’t randomly distributed. For reasons of biology (glowworms will eat other glowworms) each worm keeps a similar spacing from its neighbors. This leads to a distribution that looks random but actually isn’t. And yet, counterintuitively, we’re able to find patterns in the randomness of the stars, but not in the less random spacing of the glowworms.

One of the ways this pattern matching manifests is in something called the Narrative Fallacy. The term was coined by Nassim Nicholas Taleb, one of my favorite authors, who described it thusly: 

The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.

That last bit is particularly important when it comes to understanding the future. We think we understand how the future is going to play out because we’ve detected a narrative. To put it more simply: We’ve identified the story and because of this we think we know how it ends.

People look back on the abundance and economic growth we’ve been experiencing since the end of World War II and see a story of material progress, which ends in plenty for all. Or they may look back on the recent expansion of rights for people who’ve previously been marginalized and think they see an arc to history, an arc which “bends towards justice”. Or they may look at a graph which shows the exponential increase in processor power and see a story where massively beneficial AI is right around the corner. All of these things might happen, but nothing says they have to. If the pandemic taught us no other lesson, it should at least have taught us that the future is sometimes random and catastrophic. 

Plus, even if all of the aforementioned trends are accurate, the outcome doesn’t have to be beneficial. Instead of plenty for all, growth could end up creating increasing inequality, which breeds envy and even violence. Instead of justice we could end up fighting about what constitutes justice, leading to a fractured and divided country. Instead of artificial intelligence being miraculous and beneficial it could be malevolent and harmful, or just put a lot of people out of work.

But this isn’t just a post about what might happen, it’s also a post about what we should do about it. In all of the examples I just gave, if we end up with the good outcome, it doesn’t matter what we do, things will be great. We’ll either have money, justice or a benevolent AI overlord, and possibly all three. However, if we’re going to prevent the bad outcome, our actions may matter a great deal. This is why we can’t allow ourselves to be lured into an impression of understanding. This is why we can’t blindly accept the narrative. This is why we have to realize how truly random things are. This is why, in a newsletter focused on studying how things end, we’re going to spend most of our time focusing on how things might end very badly. 


I see a narrative where my combination of religion, rationality, and reading like a renaissance man leads me to fame and adulation. Which is a good example of why you can’t blindly accept the narrative. However, if you’d like to cautiously investigate the narrative, a good first step would be donating.


Tetlock, the Taliban, and Taleb


I.

There have been many essays written in the aftermath of our withdrawal from Afghanistan. One of the more interesting was penned by Richard Hanania, and titled “Tetlock and the Taliban”. Everyone reading this has heard of the Taliban, but there might be a few of you who are unfamiliar with Tetlock. And even if that name rings a bell you might not be clear on what his relation is to the Taliban. Hanania himself apologizes to Tetlock for the association, but “couldn’t resist the alliteration”, which is understandable. Neither could I. 

Tetlock is known for a lot of things, but he got his start by pointing out that “experts” often weren’t. To borrow from Hanania:

Phil Tetlock’s work on experts is one of those things that gets a lot of attention, but still manages to be underrated. In his 2005 Expert Political Judgment: How Good Is It? How Can We Know?, he found that the forecasting abilities of subject-matter experts were no better than educated laymen when it came to predicting geopolitical events and economic outcomes.

From this summary the connection to the Taliban is probably obvious. This is an arena where the subject matter experts got things very wrong. Hanania’s opening analogy is too good not to quote:

Imagine that the US was competing in a space race with some third world country, say Zambia, for whatever reason. Americans of course would have orders of magnitude more money to throw at the problem, and the most respected aerospace engineers in the world, with degrees from the best universities and publications in the top journals. Zambia would have none of this. What should our reaction be if, after a decade, Zambia had made more progress?

Obviously, it would call into question the entire field of aerospace engineering. What good were all those Google Scholar pages filled with thousands of citations, all the knowledge gained from our labs and universities, if Western science gets outcompeted by the third world?

For all that has been said about Afghanistan, no one has noticed that this is precisely what just happened to political science.

Of course Hanania’s point is more devastating than Tetlock’s. The experts weren’t just “no better” than the Taliban’s “educated laymen”. The “experts” were decisively outcompeted despite having vastly more money and, in theory, all the expertise. Certainly they had all the credentialed expertise…

In some ways Hanania’s point is just a restatement of Antonio García Martínez’s point, which I used to end my last post on Afghanistan—the idea that we are an unserious people. That we enjoy “an imperium so broad and blinding” we’ve never been “made to suffer the limits of [our] understanding or re-assess [our] assumptions about [the] world.”

So the Taliban needed no introduction, and we’ve introduced Tetlock, but what about Taleb? Longtime readers of this blog should be very familiar with Nassim Nicholas Taleb, but if not I have a whole post introducing his ideas. For this post we’re interested in two things, his relationship to Tetlock and his work describing black swans: rare, consequential and unpredictable events. 

Taleb and Tetlock are on the same page when it comes to experts, and in fact for a time they were collaborators, co-authoring papers on the fallibility of expert predictions and the general difficulty of making predictions—particularly when it came to fat-tail risks. But then, according to Taleb, Tetlock was seduced by government money and went from pointing out the weaknesses of experts to trying to supplant them, by creating the Good Judgment Project and the whole enterprise of superforecasting.

The key problem with expert prediction, from Tetlock’s point of view, is that experts are unaccountable. No one tracks whether they were eventually right or wrong. Beyond that, their “predictions” are made in such a way that even making a determination of accuracy is impossible. Additionally, experts are not any better at prediction than educated laypeople. Tetlock’s solution is to offer anyone the chance to make predictions, but in the process ensure that the predictions can be tracked and assessed for accuracy. From there you can promote those people with the best track record. A sample prediction might be “I am 90% confident that Joe Biden will win the 2020 presidential election.”
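
For the curious, the Good Judgment Project grades forecasts like that one with Brier scores, the squared error between the stated probability and what actually happened. Here’s a minimal sketch of that kind of scoring; the track record below is invented for illustration:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and the outcome
    (1 if the event happened, 0 if it didn't). Lower is better."""
    return (forecast - outcome) ** 2

# (probability assigned, what actually happened) -- made-up examples
track_record = [
    (0.90, 1),   # "90% confident Biden wins in 2020" -- happened
    (0.30, 1),   # a 30% call that came true anyway
    (0.80, 0),   # an 80% call that didn't
]

scores = [brier_score(p, o) for p, o in track_record]
print(f"mean Brier score: {sum(scores) / len(scores):.2f}")
# 0.0 is perfect foresight; always answering 50/50 scores 0.25.
```

This is what “tracked and assessed for accuracy” cashes out to: every forecast is a number, and every forecaster accumulates a score.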

Taleb agreed with the problem, but not with the solution. And this is where black swans come in. Black swans can’t be predicted; they can only be hedged against and prepared for. Superforecasting, by giving the illusion of prediction, encourages people to be less prepared for black swans, leaving them in the end worse off than they would have been without the prediction.

In the time since writing The Black Swan, Taleb has come to hate the term, because people have twisted it into an excuse for precisely the kind of unpreparedness he was trying to prevent.

“No one could have done anything about the 2007 financial crisis. It was a black swan!”

“We couldn’t have done anything about the pandemic in advance. It was a black swan!” 

“Who could have predicted that the Taliban would take over the country in nine days! It was a black swan!”

Accordingly, other terms have been suggested. In my last post I reviewed a book which introduced the term “gray rhino”, something people can see coming, but which they nevertheless ignore. 

Regardless of the label we decide to apply to what happened in Afghanistan, it feels like we were caught flat-footed. We needed to be better prepared. Taleb says we can be better prepared if we expect black swans. Tetlock says we can be better prepared by predicting what to prepare for. Afghanistan seems like precisely the sort of thing superforecasting was designed for. Despite this, I can find no evidence that Tetlock’s stable of superforecasters predicted how fast Afghanistan would fall, or any evidence that they even tried.

One final point before we move on. This last bit is one of the biggest problems with superforecasting: the idea that you should only be judged on the predictions you explicitly got wrong, and that if you were never asked to make a prediction about something, the endeavor “worked”. But reality doesn’t care about what you chose to make predictions on vs. what you didn’t. Reality does whatever it feels like. And the fact that you didn’t choose to make any predictions about the fall of Afghanistan doesn’t mean that thousands of interpreters didn’t end up being left behind. And the fact that you didn’t choose to make any predictions about pandemics doesn’t mean that millions of people didn’t die. This is the chief difference between Tetlock and Taleb.

II.

I first thought about this issue when I came across a poll on a forum I frequent, in which users were asked how long they thought the Afghan government would last. The options and results were:

(In the interest of full disclosure, the bolded option indicates that I said one to two years.)

While it is true that a plurality of people said less than six months, six months was still much longer than the nine days it actually took (from the capture of the first provincial capital to the fall of Kabul). And from the discussion that followed the poll, it seemed most of those 16 people were thinking the government would fall closer to six months, or even three months, than to one week. In fact the best thing, prediction-wise, to come out of the discussion was when someone pointed out that 10 years previously The Onion had posted an article with the headline “U.S. Quietly Slips Out Of Afghanistan In Dead Of Night,” which is exactly what happened at Bagram.

As it turns out this is not the first time The Onion has eerily predicted the future. There’s a whole subgenre of noticing all the times it’s happened. How do they do it? Well, of course, part of the answer is selection bias. No one is expecting them to predict the future; nobody comments on all the articles that didn’t come true. But when one does, it’s noteworthy. But I think there’s something else going on as well: I think they come up with the worst or most ridiculous thing that could happen, and because of the way the world works, some of the time that’s exactly what does happen.

Between the poll answers being skewed from reality and the link to the Onion article, the thread led me to wonder: where were the superforecasters in all of this?

I don’t want to go through all of the problems I’ve brought up with superforecasting (I’ve easily written more than 10,000 words on the subject) but this event is another example of nearly all of my complaints. 

  • There is no methodology to account for the differing impact of being incorrect on some predictions vs. others. (Being wrong about whether the Tokyo Olympics will be held is a lot less consequential than being wrong about Brexit.)
  • Their attention is naturally drawn to obvious questions where tracking predictions is easy. 
  • Their rate of success is skewed both by only picking obvious questions, and by lumping together both the consequential and the inconsequential.
  • People use superforecasting as a way of more efficiently allocating resources, but efficiency is essentially equal to fragility, which leaves us less prepared when things go really bad. (It was pretty efficient to just leave Bagram all at once.)

Of course some of these don’t apply, because as far as I can tell the Good Judgment Project and its stable of superforecasters never tackled the question, but they easily could have. They could have had a series of questions about whether the Taliban would be in control of Kabul by a certain date. This seems specific enough to meet their criteria. But as I said, I could find no evidence that they had. Which means either they did make such predictions and were embarrassingly wrong, so it’s been buried, or despite its geopolitical importance it never occurred to them to make any predictions about when Afghanistan would fall. (But it did occur to a random poster on a fringe internet message board?) Both options are bad.

When people like me criticize superforecasting and Tetlock’s Good Judgment Project in this manner, the common response is to point out all the things they did get right, and further that superforecasting is not about getting everything right; it’s about improving the odds, and getting more things right than the old method of relying on the experts. This is a laudable goal. But as I point out, it suffers from several blind spots. The blind spot of impact is particularly egregious and deserves more discussion. To quote from one of my previous posts where I reflected on their failure to predict the pandemic:

To put it another way, I’m sure that the Good Judgement project and other people following the Tetlockian methodology have made thousands of forecasts about the world. Let’s be incredibly charitable and assume that out of all these thousands of predictions, 99% were correct. That out of everything they made predictions about 99% of it came to pass. That sounds fantastic, but depending on what’s in the 1% of the things they didn’t predict, the world could still be a vastly different place than what they expected. And that assumes that their predictions encompass every possibility. In reality there are lots of very impactful things which they might never have considered assigning a probability to. That in fact they could actually be 100% correct about the stuff they predicted but still be caught entirely flat footed by the future because something happened they never even considered. 

As far as I can tell there were no advance predictions of the probability of a pandemic by anyone following the Tetlockian methodology, say in 2019 or earlier. Or any list where “pandemic” was #1 on the “list of things superforecasters think we’re unprepared for”, or really any indication at all that people who listened to superforecasters were more prepared for this than the average individual. But the Good Judgment Project did try their hand at both Brexit and Trump and got both wrong. This is what I mean by the impact of the stuff they were wrong about being greater than the impact of the stuff they were correct about. When future historians consider the last five years or even the last 10, I’m not sure what events they will rate as being the most important, but surely those three would have to be in the top 10. They correctly predicted a lot of stuff which didn’t amount to anything and missed predicting the few things that really mattered.

Once again we find ourselves in a similar position. When we imagine historians looking back on 2021, no one would find it surprising if they ranked the withdrawal of the US and subsequent capture of Afghanistan by the Taliban as the most impactful event of the year. And yet superforecasters did nothing to help us prepare for this event.

III.

The natural next question is how we should have prepared for what happened, particularly since we can’t rely on the predictions of superforecasters to warn us. What methodology do I suggest instead of superforecasting? Here we return to the remarkable prescience of The Onion. They ended up accurately predicting what would happen in Afghanistan 10 years in advance, just by imagining the worst thing that could happen. And in the weeks since Kabul fell, my own criticism of Biden has settled around this theme. He deserves credit for realizing that the US mission in Afghanistan had failed, and that we needed to leave, that in fact we had needed to leave for a while. Bad things had happened, and bad things would continue to happen, but in accepting the failure and its consequences he didn’t go far enough.

One can imagine Biden asserting that Afghanistan and Iraq were far worse than Bush and his “cronies” had predicted. But then somehow he overlooked the general wisdom that anything can end up being a lot worse than predicted, particularly in the arena of war (or disease). If Bush can be wrong about the cost and casualties associated with invading Afghanistan, is it possible that Biden might be wrong about the cost and casualties associated with leaving Afghanistan? To state things more generally, the potential for things to go wrong in an operation like this far exceeds the potential for things to go right. Biden, while accepting past failure, didn’t do enough to accept the possibility of future failure. 

As I mentioned, my answer to the poll question of how long the Afghan government was going to last was 1-2 years. And I clearly got it wrong (whatever my excuses). But I can tell you what questions I would have aced (and I think my previous 200+ blog posts back me up on this point):

  • Is there a significant chance that the withdrawal will go really badly?
  • Is it likely to go worse than the government expects?

And to be clear, I’m not looking to make predictions for the sake of predictions. I’m not trying to be more accurate; I’m looking for a methodology that gives us a better overall outcome. So is the answer to how we could have been better prepared merely “More pessimism?” Well, that’s certainly a good place to start. Beyond that, there are things I’ve been talking about since this blog was started. But a good next step is to look at the impact of being wrong. Tetlock was correct when he pointed out that experts are wrong most of the time. But what he didn’t account for is that it’s possible to be wrong most of the time, but still end up ahead. To illustrate this point I’d like to end by recycling an example I used the last time I talked about superforecasting:

The movie Molly’s Game is about a series of illegal poker games run by Molly Bloom. The first set of games she runs is dominated by Player X, who encourages Molly to bring in fish, bad players with lots of money. Accordingly, Molly is confused when Player X brings in Harlan Eustice, who ends up being a very skillful player. That is, until one night when Eustice loses a hand to the worst player at the table. This sets him off, changing him from a calm and skillful player into a compulsive and horrible one, and by the end of the night he’s down $1.2 million.

Let’s put some numbers on things and say that 99% of the time Eustice is conservative and successful, and he mostly wins. That on average, conservative Eustice ends the night up by $10k. But 1% of the time, Eustice is compulsive and horrible, and during those times he loses $1.2 million. And so our question is: should he play poker at all? (And should Player X want him at the same table he’s at?) The math is straightforward: his expected return over 100 games is -$210k. It would seem clear that the answer is “No, he shouldn’t play poker.”
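
For anyone who wants to see the arithmetic spelled out, here’s the expected-value calculation using the made-up numbers above:

```python
# Eustice's two modes, per the (invented) numbers in the example.
p_good, good_night = 0.99, 10_000       # conservative: ends the night up $10k
p_bad, bad_night = 0.01, -1_200_000     # compulsive: loses $1.2 million

ev_per_night = p_good * good_night + p_bad * bad_night
print(f"expected value per night:   ${ev_per_night:,.0f}")        # -$2,100
print(f"expected value, 100 nights: ${ev_per_night * 100:,.0f}")  # -$210,000
```

He wins 99 nights out of 100, and the expected value is still negative. Being right most of the time isn’t the same as coming out ahead.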

But superforecasting doesn’t deal with the question of whether someone should “play poker”; it works by considering a single question, answering that question, and assigning a confidence level to the answer. So in this case they would be asked the question, “Will Harlan Eustice win money at poker tonight?” To which they would say, “Yes, he will, and my confidence level in that prediction is 99%.”

This is what I mean by impact. When things depart from the status quo, when Eustice loses money, it’s so dramatic that it overwhelms all of the times when things went according to expectations.  

Biden was correct when he claimed we needed to withdraw from Afghanistan. He had no choice; he had to play poker. But once he decided to play poker he should have done it as skillfully as possible, because the stakes were huge. And as I have so frequently pointed out, when the stakes are big, as they almost always are when we’re talking about nations, wars, and pandemics, the skill of pessimism always ends up being more important than the skill of superforecasting.


I had a few people read a draft of this post. One of them complained that I was using a $100 word when a $1 word would have sufficed. (Any guesses on which word it was?) But don’t $100 words make my donors feel like they’re getting their money’s worth? If you too want to be able to bask in the comforting embrace of expensive vocabulary consider joining them.


Remind Me What The Heck Your Point is Again?


The other day I was talking to my brother and he said, “How would you describe your blog in a couple of sentences?”

It probably says something about my professionalism (or lack thereof) that I didn’t have some response ready to spit out. An elevator pitch, if you will. Instead I told him, “That’s a tough one.” Much of this difficulty comes because, if I were being 100% honest, the fairest description of my blog would boil down to: I write about fringe ideas I happen to find interesting. Of course, this description is not going to get me many readers, particularly if they have no idea whether there’s any overlap between what I find interesting and what they find interesting.

I didn’t say this to my brother, mostly because I didn’t think of it at the time. Instead, after a few seconds, I told him that of course the blog does have a theme, it’s right there in the title, but I admitted that it might be more atmospheric than explanatory. Though I think we can fix that with the addition of a few words. Which is how Jeremiah 8:20 shows up on my business cards. (Yeah, that’s the kind of stuff your donations get spent on, FYI.) With those few words added it reads:

The harvest [of technology] is past, the summer [of progress] is ended, and we are not saved.

If I were going to be really pedantic, I might modify it, and hedge, so it read as follows:

Harvesting technology is getting more complex, the summer where progress was easy is over, and I think we should prepare for the possibility that we won’t be saved.

If I were going to be more literary and try to pull in some George R.R. Martin fans, I might phrase it:

What we harvest no longer feeds us, and winter is coming.

But once again, you would be forgiven if, after all this, you’re still unclear on what this blog is about (other than weird things I find interesting). To be fair to myself, I did explain all of this in the very first post, and re-reading it recently, I think it held up fairly well. But it could be better. And it assumes that people have even read my very first post, which is unlikely, since at the time my readership was at its nadir. Despite my complete neglect of anything resembling marketing, my readership has grown since then, and presumably at least some of those people have not read the entire archive.

Accordingly, I thought I’d take another shot at it. To start, one concept which runs through much (though probably not all) of what I write is the principle of antifragility, as introduced by Nassim Nicholas Taleb in his book of (nearly) the same name.

I already dedicated an entire post to explaining the ideas of Taleb, so I’m not going to repeat that here. But, in brief, Taleb starts with what should be an uncontroversial idea, that the world is random. He then moves on to point out the effects of that, particularly in light of the fact that most people don’t recognize how random things truly are. They are often Fooled by Randomness (the title of his first book) into thinking that there are patterns and stability when there aren’t. From there he moves on to talk about extreme randomness through introducing the idea of a Black Swan (the name of his second book), which is something that:

  1. Lies outside the realm of regular expectations
  2. Has an extreme impact
  3. People go to great lengths afterwards to show how it should have been expected.

It’s important at this point to clarify that not all black swans are negative. And technology has generally had the effect of increasing the number of black swans of both the positive (internet) and negative (financial crash) sort. In my very first post I said that we were in a race between these two kinds of black swans, though rather than calling them positive or negative black swans I called them singularities and catastrophes. And tying it back into the theme of the blog a singularity is when technology saves us, and a catastrophe is when it doesn’t.

If we’re living in a random world, with no way to tell whether we’re either going to be saved by technology or doomed by it, then what should we do? This is where Taleb ties it all together under the principle of antifragility, and as I mentioned it’s one of the major themes of this blog. Enough so that another short description of the blog might be:

Antifragility from a Mormon perspective.

But I still haven’t explained antifragility, to say nothing of antifragility from a Mormon perspective, so perhaps I should do that first. In short, things that are fragile are harmed by chaos and things that are antifragile are helped by chaos. I would argue that it’s preferable to be antifragile all of the time, but it is particularly important when things get chaotic. Which leads to two questions: How fragile is society? And how chaotic are things likely to get? I have repeatedly argued that society is very fragile and that things are likely to get significantly more chaotic. And further, that technology increases both of these qualities.

Earlier, I provided a pedantic version of the theme, changing (among other things) the clause “we are not saved” to the clause “we should prepare for the possibility that we won’t be saved.” As I said, Taleb starts with the idea that the world is random, or in other words unpredictable, with negative and positive black swans happening unexpectedly. Being antifragile entails reducing your exposure to negative black swans while increasing your exposure to positive black swans. In other words being prepared for the possibility that technology won’t save us.

To be fair, it’s certainly possible that technology will save us. And I wouldn’t put up too much of a fight if you argued it was the most likely outcome. But I take serious issue with anyone who wants to claim that there isn’t a significant chance of catastrophe. Being antifragile consists of realizing that the cost of being wrong if you assume a catastrophe and there isn’t one is much less than the cost if you assume no catastrophe and there is one.
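
To put some toy numbers on that asymmetry (every figure here is invented; only the shape of the comparison matters):

```python
# A deliberately crude model: preparation has a small fixed cost and is
# assumed to cover you if disaster hits; complacency is free until it isn't.
p_catastrophe = 0.05      # assumed small chance of disaster in a given year
cost_to_prepare = 1       # what preparedness costs you every year
cost_if_caught = 1_000    # what disaster costs you if you're unprepared

cost_of_overcaution = cost_to_prepare                          # prepared, nothing happened
expected_cost_of_complacency = p_catastrophe * cost_if_caught  # unprepared, on average

print(cost_of_overcaution, expected_cost_of_complacency)       # 1 vs 50.0
```

Even with a disaster that’s rare, the expected cost of assuming it won’t happen dwarfs the cost of assuming it might.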

It should also be pointed out that most of the time antifragility is relative. To give an example, if I’m a prepper and the North Koreans set off an EMP over the US which knocks out all the power for months, I may go from being a lower-class schlub to being the richest person in town. In other words chaos helped me, but only because I reduced my exposure to that particular negative black swan, and most of my neighbors didn’t.

Having explained antifragility (refer back to the previous post if things are still unclear) what does Mormonism bring to the discussion? I would offer that it brings a lot.

First, Mormonism spends quite a bit of time stressing the importance of antifragility, though they call it self-reliance, and emphasize things like staying out of debt, having a plan for emergency preparedness, and maintaining a multi-year supply of food. This aspect is not one I spend a lot of time on, but it is definitely an example of Mormon antifragility.

Second, Mormons, while not as apocalyptic as some religions, nevertheless reference the nearness of the end right in their name. We’re not merely Saints, we are the “Latter-day Saints”. While it is true that some members are more apocalyptic than others, regardless of their belief level I don’t think many would dismiss the idea of some kind of Armageddon outright. Given that, if you’re trying to pick a winner in the race between catastrophe and singularity, or more broadly between negative and positive black swans, belonging to a religion which claims we’re in the last days could help break that tie. Also, as I mentioned, it’s probably wisest to err on the side of catastrophe anyway.

Third, I believe Mormon doctrine provides unique insight into some of the cutting-edge futuristic issues of the day. Over the last three posts I laid out what those insights are with respect to AI, but in other posts I’ve talked about how LDS doctrine might answer Fermi’s Paradox. And of course there’s the long-running argument I’ve had with the Mormon Transhumanist Association over what constitutes an appropriate use of technology and what constitutes inappropriate uses of technology. This is obviously germane to the discussion of whether technology will save us, and what the endpoint of that technology will end up being. And it suggests another possible theme:

Connecting the challenges of technology to the solutions provided by LDS Doctrine.

Finally, any discussion of Mormonism and religion has to touch on the subject of morality. For many people, issues of promiscuity, abortion, single-parent families, same-sex marriage, and ubiquitous pornography are either neutral or benefits of the modern world. This leads some people to conclude that things are as good as they’ve ever been, and that if we’re not on the verge of a singularity then at least we live in a very enlightened era, where people enjoy freedoms they could never have previously imagined.

The LDS Church, and religion in general (at least the orthodox variety), take the opposite view of these developments, pointing to them as evidence of a society in serious decline. Perhaps you feel the same way, or perhaps you agree with the people who feel that things are as good as they’ve ever been. But if you’re on the fence, then one of the purposes of this blog is to convince you that, even if there is no God, it would be foolish to dismiss religion as a collection of irrational biases, as so many people do. Rather, if we understand the concept of antifragility, it is far more likely that religion, rather than being irrational, represents the accumulated wisdom of a society.

This last point deserves a deeper dive, because it may not be immediately apparent to you why religions would necessarily accumulate wisdom or what any of this has to do with antifragility. But religious beliefs can only be either fragile or antifragile: they can either break under pressure or get stronger. (In fairness, there is a third category, things which neither break nor get stronger; Taleb calls this the robust category, but in practice it’s very rare for things to be truly robust.) If religious beliefs were fragile, or created fragility, then they would have disappeared long ago. Only beliefs which created a stronger society would have endured.

Please note that I am not saying that all religious beliefs are equally good at encouraging antifragile behavior. Some are pointless or even irrational, but others, particularly those shared by several religions, are very likely lessons in antifragility. But a smaller and smaller number of people have any religious beliefs, and an even smaller number are willing to actively defend these beliefs, particularly those which prohibit a behavior currently in fashion.

However, if these beliefs are as useful and as important as I say they are then they need all the defending they can get. Though in doing this a certain amount of humility is necessary. As I keep pointing out, we can’t predict the future. And maybe the combination of technology and a rejection of traditional morality will lead to some kind of transhuman utopia, where people live forever, change genders whenever they feel like it and live in a fantastically satisfying virtual reality, in which everyone is happy.

I don’t think most people go that far in their assessment of the current world, but the vast majority don’t see any harm in the way things are either. What if they’re wrong about that?

And this might in fact represent yet another way of framing the theme of this blog:

But what if we’re wrong?

In several posts I have pointed out the extreme rapidity with which things have changed, particularly in the realm of morality, where, in a few short years, we have overturned religious taboos stretching back centuries or more. The vast majority of people have decided that this is fine, and in fact, as I already mentioned, that it’s an improvement on our benighted past. But even if you don’t buy my argument about religions being antifragile, I would hope you would still wonder, as I do, “But what if we’re wrong?”

This question applies not only to morality, but also to technology saving us, the constant march of progress, politics, and a host of other issues. And I can’t help but think that people appear entirely too certain about the vast majority of these subjects.

In order to bring up the possibility of wrongness, especially when you’re in the ideological minority, there has to be freedom of speech, another area I dive into from time to time in this space. Also, you can’t talk about freedom of speech or the larger ideological battles around speech without getting into the topic of politics, a subject I’ll return to.

As I have already mentioned, and as you have no doubt noticed, the political landscape has gotten pretty heated recently, and there are no signs of it cooling down. I would argue, as others have, that this makes free speech and open dialogue more important than ever. In this endeavor I end up sharing a fair amount of overlap with the rationalist community. Which you must admit is interesting, given the fact that this community clearly has a large number of atheists in its ranks. But that failing aside, I largely agree with much of what they say, which is why I link to Scott Alexander over at SlateStarCodex so often.

On the subject of free speech the rationalists and I are definitely in agreement. Eliezer Yudkowsky, an AI theorist whom I mentioned a lot in the last few posts, is also one of the deans of rationality, and he had this to say about free speech:

There are a very few injunctions in the human art of rationality that have no ifs, ands, buts, or escape clauses. This is one of them. Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.

I totally agree with this point, though I can see how some people might choose to define some of the terms more or less broadly, leading to significant differences in the actual implementation of the rule. Scott Alexander is one of those people, and he chooses to focus on the idea of the bullet, arguing that we should actually expand the prohibition beyond just literal bullets or even literal weapons, changing the injunction to:

Bad argument gets counterargument. Does not get bullet. Does not get doxxing. Does not get harassment. Does not get fired from job. Gets counterargument. Should not be hard.

In essence he wants to include anything that’s designed to silence the argument rather than answer it. And why is this important? Well, if you’ve been following the news at all, you’ll know that there has been a recent case where exactly this thing happened, and a bad argument got someone fired. (Assuming it even was a bad argument, which might be a subject for another time.)

Which ties back into asking, “But what if we’re wrong?” Because unless we have a free and open marketplace of ideas where things can succeed and fail based on their merits, rather than whether they’re the flavor of the month, how are we ever going to know if we’re wrong? If you have any doubts as to whether the majority is always right then you should be incredibly fearful of any attempt to allow the majority to determine what gets said.

And this brings up another possible theme for the blog:

Providing counterarguments for bad arguments about technology, progress and religion.

Running through all of this, though most especially the topic I just discussed, free speech, is politics. The primary free speech battleground is political, but issues like morality, technology, and fragility all play out at the political level as well.

I often joke that, you know those two things you’re not supposed to talk about? Religion and politics? Well, I decided to create a blog where I discuss both, leading me to yet another possible theme:

Religion and Politics from the perspective of a Mormon who thinks he’s smarter than he probably is.

Perhaps the final thread running through everything is that, like most people, I would like to be original, which is hard to do. The internet has given us a world where almost everything you can think of saying has been said already. (Though I’ve yet to find anyone making exactly the argument I make when it comes to Fermi’s Paradox and AI.) But there is another way to approximate originality, and that is to say things that other people don’t dare to say, but which, hopefully, are nevertheless true. Which is part of why I record under a pseudonym. So far the episode that most fits that description is the one I did on LGBT youth and suicide, with particular attention paid to the LDS stance and role in that whole debate.

Going forward I’d like to do more of that. And it suggests yet another possible theme:

Saying what you haven’t thought of or have thought of but don’t dare to bring up.

In the end, the most accurate description of the blog is still that I write about fringe ideas I happen to find interesting, but at least by this point you have a better idea of the kind of things I find interesting, and if you find them interesting as well, I hope you’ll stick around. I don’t think I’ve ever mentioned it within an actual post, but on the right-hand side of the blog there’s a link to sign up for my mailing list, and if you did find any of the things I talked about interesting, consider signing up.


Do you know what else interests me? Money. I know that’s horribly crass, and I probably shouldn’t have stated it so bluntly, but if you’d like to help me continue to write, consider donating, because money is an interesting thing which helps me look into other interesting things.


Straddling Optimism and Pessimism; Religion and Rationality


One of the regular readers of this blog, who also happens to be an old friend of mine, is constantly getting after me for being too pessimistic. He’s more of an optimist than I am, and this optimism largely derives from his religious faith, which happens to be basically the same as mine (we’re both LDS and very active). Despite this similarity, he’s optimistic and hopeful, and I’m gloomy and pessimistic. Or at least that’s what it looks like to him, and I’m sure there’s a certain amount of truth to that. I do have a tendency to immediately gravitate to the worst-case scenario, and an even greater tendency to use my pessimism to fuel my writing, but I don’t think I’m as pessimistic as my friend imagines or as one might assume just from reading my posts. I already explored this idea at some length in a previous post (a post he was quick to compliment), but I think it’s time to revisit it from a different angle.

The previous post was more about whether my outward displays of pessimism reflected an inward cynicism that needed to be fixed, i.e., was I being called to repentance. (I think the answer I arrived at was, “Maybe.”) This post is more about what the blog is designed to do, who the audience is, and how writing in service of those two things is a lot like serving two masters (wait… is that bad?), and therefore may not give an accurate impression of my core beliefs, beliefs which I’ll also get into. Yes, I’m writing a post about the blog’s mission nearly a year into things. Make of that what you will. Though I think we can all agree that occasionally it’s useful for a person to step back and figure out what they’re really trying to accomplish.

I think the briefest way to describe the purpose of this blog is that it’s designed to encourage antifragility. Hopefully you’re already familiar with this concept, and the ideas of Nassim Nicholas Taleb in general, but if not I wrote a post all about it. But if you don’t have the time to read it, in short, one way to think about antifragility is to view it as a methodology for benefitting from big positive rare events and protecting yourself against big negative rare events. In Taleb’s philosophy these are called black swans. And here we touch on the first area in which writing about a topic may give an incorrect view of my actual attitudes and opinions. In this instance, writing about black swans automatically makes them appear more likely than they actually are, or than I believe them to be. Black swans are rare, and if I wrote about them only in proportion to their likelihood I would hardly ever mention them, but recall that a black swan, by definition, has gigantic consequences, which means they have an impact far out of proportion to their frequency. Thus, if you were to judge my topic choice and my pessimism just based on the rarity of these events, you would have to conclude that I spend too much time writing about them and that I’m excessively negative on top of that. But if I’m writing about black swans in proportion to their impact, I think my frequency and negativity end up being a much better fit.

Of course writing about them, period, is only worthwhile if you can offer some ideas on how individuals can protect themselves from negative black swans. And this is another point where my writing diverges somewhat from my actual behavior, and where we get into the topic of religion. As a very religious person I truly believe that the best way to protect yourself from negative black swans is to have faith, keep the commandments, attend church, love your neighbor, and cleave to your wife/husband. But as longtime readers of this blog know, while I don’t shy away from those topics, they aren’t the focus of my writing either. Why is this? Because I think there are a lot of people already speaking on those topics, and they’re doing a far better job than I could ever do.

If there are already many people, from LDS General Authorities to C.S. Lewis, doing a better job than I could ever do in covering purely religious topics, I have to find some other way of communicating that plays to my strengths, without abandoning religion entirely. But just because I’m not going to try and compete with them directly doesn’t mean I can’t borrow some of their methodology, and one of the things that all of these individuals are great at is serving milk before meat. Or starting with stuff that’s easy to digest and then, once someone can swallow that, moving on to the tougher, chewier, but ultimately tastier stuff. And in considering this it occurred to me that what’s milk to one person may be meat to another. As an example, if you have a son, as I do, who is nearly allergic to vegetables (or so he likes to claim), and you want him to eat more vegetables, you wouldn’t start out with brussels sprouts or spinach. You’d start with corn on the cob soaked in butter and liberally seasoned with salt and pepper. On the opposite side of the equation, if someone were to decide, after many years, that they are done being a vegetarian, you wouldn’t introduce them to meat by serving them chicken hearts or liver.

In a like fashion, there are, in this world, many people who already believe in God. And for those people starting with faith, repentance, and baptism is a natural milk, before moving to the meat of chastity, tithing, and the Word of Wisdom. There are however other people who think that rationality, rather than faith, is the key to understanding the world. With these people, it is my hope that survival is the milk. Because if you can’t survive, you can’t do anything else, however rational you are in all other respects. And then, once we agree on that, we can move on to the meat of black swans, technological fragility, and what religion has to say about singularities.

It should be mentioned, before we leave the topic of “milk before meat,” that it’s actually got something of a bad reputation in the rationalist community (to say nothing of the ex-Mormon community). They view it as a Mormon variant of a bait and switch, where we get you into the Church with the promise of three-hour meetings on Sunday, paying 10% of your income to the church, giving up all extramarital sex, along with booze, drugs, and cigarettes (recall that you have to agree to all of this before you can even be baptized). And then I guess only after that do we hit you with the fact that you might have to one day be the Bishop or the Relief Society President? Actually I’m not clear what the switch is in this scenario. I think all of the hard things about Mormonism are revealed right at the beginning. Also I’m not quite sure why they take issue with the idea of starting with the easier stuff. We literally do give children milk before meat; we teach algebra before calculus; and don’t even get me started on sex ed. In other words this is one of those times when I think the lady doth protest too much.

Moving on… Choosing a different audience and a different approach does not mean that I am personally any less devoted to the faith and hope inherent in my religion. And that hope comes with a fair amount of optimism. Certainly there are people more optimistic than me, but I am optimistic enough that I have no doubt that things will work out eventually. The problem is the “eventually”: I don’t know when that will be, and until that time comes, we still have to deal with competing ideologies, with different ways of arriving at truth, and with the world as it exists, not as we would like it to be. Also, if we’re only able to talk to other Christians (and often not even to them) then we’re excluding a large and growing segment of the population.

But it doesn’t have to be this way, and much of the motivation for this blog came from seeing areas of surprising overlap between technology and religion, particularly at the more speculative edge of technology. As an example, look at the subject of immortality. In this area the religious have had a plan, and have been following it for centuries. They know what they need to do, and while everyone is not always as successful as they could be in doing what they should, the path forward is pretty clear. They have a very specific plan for their life which happens to include the possibility of living forever. Some may think this plan is silly, and that it won’t work, but the religious do have a plan. And, up until very recently, the religious plan was the only game in town. Which doesn’t mean that everyone bought into it, but, as I mentioned in a previous post, If you were really looking for an existence beyond this one that involved more than just memories, then it was the only option.

Obviously not everyone bought into the plan; people have been rejecting religion for almost as long as it has existed. But it's only recently that there has been any hope for an alternative, for immortality outside of divine intervention. Some people hope to achieve this through cryonic suspension, i.e. freezing their body after death in the hopes of revival later. Some people hope to achieve this by digitizing their brain, or recording all of their experiences so that the recordings can be used to reconstruct their consciousness once they're dead. Other people just hope that we'll figure out how to stop aging.

These different concepts of immortality represent an area of competition between technology and religion, but the fact that both sides are talking about immortality at all is, I would opine, where we see the overlap I mentioned. Previously only the religious talked about immortality, and now transhumanists are talking about it as well. When presented with this fact, most people focus on the competition and use it as another excuse to abandon religion. But there are a few who recognize the overlap, and the surprising consequences that might entail. Certainly the Mormon Transhumanist Association is in this category, and that's one of the things I admire about them.

To take it a little further, if we imagine that there are some people who just want a chance at immortality, and they don't care how they get it, then previously these people would have had no other option than religion. Whether religion is effective, given such a selfish motivation, is beyond the scope of this post, though I did touch on it in a previous post. But in any event it doesn't matter, because here we're not concerned with whether it's a good idea; we're concerned with whether such a group of people exists and, given the promise of technological immortality, how many have, so to speak, switched sides.

I'm not sure how many people this group represents. Also I'm sure the motivations of most religious individuals are far more complicated than just a single-minded quest for immortality. But you can certainly imagine that the promise of immortality through technology might be enough to take someone who would have been religious in an earlier age and convince them to seek immortality through technology instead. If there are people in this category, it's unlikely that much is being written specifically with them in mind. All of this is not to say that my blog is targeted at "people who yearn for immortality, but think technology is currently a better bet than religion," a group that has to be pretty small regardless of the initial assumptions. But this is certainly an example, albeit an extreme one, of the ways in which technology overlaps not only the practice of religion, but also its ideology, morals and even philosophy.

It's easy to view technology as completely separate from religion, and maybe at one point it was, but as we get closer to developing the technology to genetically alter ourselves and our descendants, eliminate the need for work, or create artificial Gods (and recall we already have the technology to destroy the world), then suddenly technology is very much encroaching on areas which have previously been the sole domain of religion. And taking a moment to examine whether religion might have some insights into these issues before we discard it is, I believe, a worthwhile endeavor. This is where, by straddling the two, I hope to cover some ground the General Authorities and people like C.S. Lewis have missed.

Interestingly, this is where religion ends up providing both the source of my pessimism and the source of my optimism. I have already mentioned how faith in God is a source of limitless hope, but on the other hand it also provides a framework for understanding how prideful technology has made us, and how quick we have been to discard the lessons of both history and religion. We are faced with a situation where people are not merely ignoring the morality of religion, they are in many cases charting a course in the opposite direction. In this case, what other response is there than pessimism?

Of course, and I should have mentioned this earlier (both in this post and in the blog as a whole), you have probably guessed that my name is not actually Jeremiah, that it's a pseudonym I adopted for the purposes of this blog. Not only because I took the theme from the book of Jeremiah, but also because I think there are some parallels between the doom he could see coming and the many potential dooms we face. I assume that Jeremiah had faith, I assume that he figured it would all eventually work out for him, but that doesn't mean that he wasn't pessimistic about the world around him, enough so that we still use the word jeremiad to mean a long, mournful complaint. And I think he was onto something. I know it's common these days to declare that we just need to be optimistic and love people regardless of what they're doing. But I'm inclined to think a pessimistic approach which is closer to Jeremiah's might actually produce better results. And this is where we return to antifragility, which is another area of overlap between religion and technology, though probably less clear than the immortality overlap we talked about (which is why I started with it).

The great thing about striving to be antifragile is that it's a fantastic plan regardless of whether you're religious or not. As I mentioned earlier, my hope is that survival may provide a useful entry point, the milk so to speak, even for people who aren't religious. In particular I think self-identified rationalists place too much weight on being right in the short term and not enough weight on surviving in the long term, which is a strength of both antifragility specifically and religion generally. Obviously we don't have the time to get into a complete dissection of how rationalists neglect the long term, and I have definitely seen some articles from that side of things that did an admirable job of tackling the potential of future catastrophe. Perhaps it's more accurate to say that, whatever their consideration for the long term, religion does not factor into it at all.

But religion is important here for at least three reasons. First, as I said in a previous post, even if there is no God, the taboos and commandments of religion are the accumulated knowledge about how to be antifragile. Second, religion is one of the best ways we have for creating resilient social structures going forward. Which is to say, who's better at recovering from disaster? The rationalists in San Francisco or the Mormons in Utah? Finally, if there is a God, being religious gives you access to the ultimate antifragility, eternal life. Obviously this final point is the most controversial of all, and you're free to dismiss it (though you might want to read my Pascal's Wager post before you do). But, with all of this, are you really sure that religion has no value in our modern, technological world? To return to the main theme of this post, I think people underestimate the value that comes from straddling the two worlds.

The problem with all of this is that in trying to speak on these subjects, the minute you bring in religion and God, many people are going to tune out entirely. Thus, despite this being an emphatically LDS blog, I don't spend as much time speaking about religion as perhaps you might expect. In part this is because I honestly think you can get to most of the places I want to go without relying on deus ex machina. Believing in God does make everything easier to a certain extent (across all facets of life), but what if you don't believe in God? Does that mean that you can throw out religion in its entirety, root and branch? I know people want to dismiss religion as a useless or even harmful relic of the past, but is that really a rational point of view? Is it really rational to take the position that countless hours, untold resources, and millions of lives were wasted on something that brought no benefit to our ancestors? Or worse, caused harm? If this is your position, then I think it's obvious that the burden of proof rests with you.

There is a God in Heaven. And so I have all the optimism in the world. But when so-called rationalists mock thousands of years of wisdom, then I'm also a huge pessimist. To use another quote from Shakespeare, remember: "There are more things in heaven and earth… than are dreamt of in your philosophy."


I think it's obvious that whether you're an optimist or a pessimist, religious or rational (or ideally both), we're basically on the same page. So why not donate?


Time Preference and the Survival of Civilizations

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


In my ongoing quest to catch up on those topics I promised to revisit someday but never have, in this post I'm turning my attention to a statement I made all the way back in July of last year. (As I said, I've been negligent about keeping my promises.) Back then, as an aside on the topic of taboos, I said:

Of course this takes us down another rabbit hole of assuming that the survival of a civilization is the primary goal, as opposed to liberty or safety or happiness, etc. And we will definitely explore that in a future post, but for now, let it suffice to say that a civilization which can’t survive, can’t do much of anything else.

Well, this is that future post and it’s time to talk about Civilization! With a capital C! And no, not the classic Sid Meier’s game of the same name. Though that is a great game.

To begin with though, in timing that can only be evidence of the righteousness of my cause (that's sarcasm, by the way), I recently listened to several interesting podcasts that directly tied into this topic. (By the way, you all know that you can get this blog as a podcast, right?) The first was a podcast titled Here Are The Signs That A Civilization Is About To Collapse. I confess it wasn't as comprehensive as I had hoped, but their guest, Arthur Demarest, brought up some very interesting points. And if he had had a book on civilizational collapse I would have bought it in a heartbeat, but it appears that his books are all academically oriented and mostly focused on the Mayans. In any case, here are some of the points that dovetail well with things I have already talked about.

  1. Civilization allows increasing complexity and connectivity, resulting in increased efficiency. But this connectivity and complexity increases the fragility of the system. Demarest gave the example of a slowdown in China causing pizza parlors to close in Chile.
  2. This complexity also leads to increased maintenance costs, and overhead. And eventually maintenance expands to the point where there’s very little room for innovation and no flexibility to unwind any of the complexity.
  3. When civilizations get in trouble they often end up doubling down on whatever got them in trouble in the first place. Demarest gives the example of the Mayans who built ever more elaborate temples as collapse threatened, in an effort to prop up the rulers.
  4. A civilization’s strength can often end up being the cause of its downfall.
  5. As things intensify thinking becomes more and more short term.
  6. Observations that the current period is the greatest ever often act as a warning that the civilization has already peaked, and the collapse is in progress.

As you may notice, we already check most, if not all, of these boxes, and I've already talked about all of them in one form or another. But more importantly, what he also points out, and what should be obvious, is that all civilizations collapse. Now you may argue that all we can say for sure is that every previous civilization has collapsed; ours may be different. This is indeed possible. But I think, for a variety of reasons which I mention again and again, that it's safer to assume that we aren't different. If we do make this completely reasonable and cautious assumption, then the only questions which remain are: when is the current civilization going to collapse, and is there anything we can do to extend its life?

I mentioned that I had listened, coincidentally, and by virtue of the righteousness of my cause (once again sarcasm), to several podcasts which spoke to this issue. The second of these podcasts was Dan Carlin's Common Sense. In his most recent episode he spent the first half of the program talking about the increasing hostility that exists between the two halves of the country, and specifically the hostility between the Antifas (short for anti-fascists) and the hardcore Trump supporters. Carlin mentioned videos of the violence which has been erupting at demonstrations and counter-demonstrations all over the country. I would link to some of these videos, but it's hard to find any that aren't edited in a nakedly partisan fashion by one or the other side. But they're easy enough to find if you do a search.

This is not a new phenomenon; we've had violence since election day, and I already spent an entire post talking about it. But Carlin frames things in an interesting way. He asks us to imagine that we were elected president, and that our only goal was to heal the divisions that exist in the country. How would we do it? What policy would we implement that would bring the country back together again?

Carlin accurately points out that there’s not some anti-racist policy you could pass that would suddenly make everything all better. In fact it could be argued that we already have lots of anti-racist policies and that rather than helping, they might be making it worse. In my previous post I pushed for greater federalism, which is less a policy than a roll-back of a lot of previous policies. But as Carlin points out this is probably infeasible. First off because that’s just not how government works. Governments don’t ever voluntarily become less powerful. And second there’s not a lot of support for the idea even if the government was predisposed to let it happen.

Carlin spends the second half of the podcast talking about the Syrian missile strike. And in a common theme this discussion flows into his criticism of the ever expanding power of the executive. As you probably all know, only Congress has the power to declare war, and it last used that power in 1942 when it declared war on Bulgaria, Hungary and Romania. Since then it hasn’t used that power, though generally the President still seeks congressional approval for military action, what Carlin calls the fig leaf. He points out that Trump didn’t even do that. These days if someone dares to mention that this all might be unconstitutional, they are viewed as being very much on the fringe. But Carlin, like me, is grateful when people bring it up because at least it’s being talked about.

As I said, executive overreach and expansion is a common theme for Carlin, and one of the points he always returns to is that whatever tools you give your guy when he's President are going to be used by the other side when they eventually get the presidency back. And this touches on the central idea that I want to explore, the idea that unites the two halves of Carlin's podcast: short term thinking. Both the current political crisis and the expansion of the presidency are examples of this short term thinking, and exactly the kind of thing that Demarest was talking about when he described historical civilizations which have collapsed.

As an extreme example of what I mean let me turn to one final recent podcast, the episode on Nukes from Radiolab. In the episode they examine the nuclear chain of command to determine if there are any checks on the ability of a US President to unilaterally launch a nuclear strike. That is, launch a nuclear strike without getting anyone else’s permission. And the depressing conclusion they come to is that there are effectively no checks. This is not to say that someone couldn’t disobey the order in that situation, but it’s hard to imagine such insubordination would hit 100%. In other words if Trump really wants to launch an ICBM, ICBMs will be launched.

But, for me, this is an issue which goes beyond Trump; it's scary basically regardless of who's president. And it's also a classic example of short term thinking. At some point it became clear that in the event of a Soviet first strike there would be no time for a committee to assemble or multiple people to be called, and in that moment, and based on this very narrow scenario, it was decided that sole control of the nuclear arsenal would be given to the President. If I remember the episode correctly, this policy really firmed up during the Kennedy administration (and if you couldn't trust Kennedy, who could you trust?)

One could potentially understand this rationale for investing all of the power with the President, even if you don’t agree with it. But no thought was given to what should be done if the Cold War ever ended, and indeed when it did end, nothing changed. No thought or effort was even made to restrict this control to just the scenario of responding to a Soviet first strike. As it stands the President can launch missiles entirely at his discretion and for any reason whatsoever.

One would think that if Trump is as dangerous and unstable as people claim, they would be doing everything in their power to limit his ability to unilaterally start a nuclear war. That, at a minimum, they would limit the President's authority over nuclear weapons so that it applied only in situations where another country attacked us first. (I'm not sure how broad to make the standard of proof in this case, but even if it was fairly expansive we'd still be in a much better position than we are now.) Instead, as of this writing, such a concern is nowhere to be found; rather, the headlines are about another GOP stab at a health bill, or how much the FBI director may have influenced the election, or the sentencing of a woman who laughed at Jeff Sessions (the Attorney General).

Perhaps all of these issues will end up being of long term importance, though that seems unlikely, particularly for the story about the protestor laughing at Sessions. And even the story about the FBI director concerns something that already happened, and is therefore essentially unchangeable. It's even harder to imagine how any of the issues currently in the news have more long term importance than the issue of the President's singular control of the nuclear arsenal. And that's just one example of long term dangers being overwhelmed by short-term worries.

You might argue at this point that the stories I mentioned are not unique to this moment in history, that people have been focused on their immediate needs and wants to the exclusion of longer term concerns for hundreds if not thousands of years. I don't agree with this argument; I do think historically it has been different. And as a counter example I offer up the American Civil War, where the focus may have been almost too long term. But even if I'm wrong, and historically people were every bit as short-term in their outlook as they are now, the stakes today are astronomically greater.

I wanted to focus on short term thinking because it all builds up to my favorite definition of what civilization is. You may have noticed that we've come all this way without even clearly defining what we're talking about, and I want to rectify that. Civilization is nothing more or less than low time preference. What's time preference? It's the amount of weight you give to something happening now versus in the future. As the term is commonly used it mostly relates to economics: how much more valuable is $1000 today than $1000 in a month or a year? If $1000 today is the same as $1000 in three months, then you have a time preference of zero. If you're a loan shark and you want someone to pay you $2000 next week in exchange for $1000 today, then you have a very high time preference, and are consequently engaging in what may be described as an uncivilized transaction, or at least a low-trust transaction. But of course trust is a big part of civilization.
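
To make the arithmetic a little more concrete, here is a minimal sketch of the idea in Python. The numbers and the little present_value helper are my own illustration, nothing more:

    # Illustrative sketch: time preference expressed as a discount rate.
    # All figures are hypothetical.

    def present_value(amount, annual_rate, years):
        """Value today of `amount` received `years` from now, discounted at `annual_rate`."""
        return amount / ((1 + annual_rate) ** years)

    # Near-zero time preference: $1000 in three months is worth about $1000 today.
    print(round(present_value(1000, 0.00, 0.25)))   # -> 1000

    # A modest, "civilized" time preference of 5% per year barely dents it.
    print(round(present_value(1000, 0.05, 0.25)))   # -> 988

    # The loan shark: $2000 next week for $1000 today means doubling your money
    # every week, which works out to an astronomical annualized rate.
    implied_annual_rate = (2000 / 1000) ** 52 - 1
    print(f"{implied_annual_rate:.3g}")             # on the order of 10**15

The only point of the sketch is that time preference can be written down as a discount rate, and that the loan shark's implied rate is so high that nothing built on trust could run on it.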

Outside of economics, having a low time preference allows people to plan for the future, to build infrastructure, to establish institutions and perhaps most importantly to rely on the operation of the law, having faith that it’s not important to get justice right this second if you will get justice eventually. Perhaps you can see why I worry about what’s happening right now.

On the other hand it can easily be seen that corruption, the cancer of civilization, is a high time preference activity; people would rather get a bribe right now, because they have no trust in what the future will bring. When people talk about institutions, the rule of law, societal trust, and even the absence of violence, they're talking about low time preference. And let's all agree right now that it's a little bit confusing for "high" to be bad and "low" to be good.

Everything I've said so far is necessary to show that short-termism isn't a symptom of the decline of civilization, it IS the decline of civilization. But of course things can look fine for quite a while, because of the low time preference which existed up until this point. Meaning that those who came before us invested a lot in the future (because of their low time preference) and we can reap the benefits of those investments for a long time before it finally catches up to us.

Way back in the beginning of this post I stated that if you assume that our civilization is going to eventually collapse, then the only questions we're left with are when, and is there any way to delay that collapse? I think I've already answered the question of "when?" (Not immediately, but sooner than most people think.) And now we need to look at the question, "What can we do to slow it down?" A simple, but somewhat impractical, answer would be to lower our time preference. But as you can imagine, this exhortation is unlikely to appear on a protest sign any time soon. (Perhaps I'll try it out if we ever have a demonstration in Salt Lake City.) But, if we can't get people to lower their time preference directly, perhaps we can do it indirectly.

If you were to use the term sacrifice in place of low time preference, you would not be far from the mark. And restating the entire problem as, "We need greater sacrifice," is something people understand, and it also just might make a good protest sign. But stating the solution this way just makes the scope of the problem all the more apparent. Because the last thing any of the people who are currently angry want to be told is that they need to sacrifice more.

It is, as far as I can tell, the exact opposite. All of the interested parties, left and right, rich and poor, minority and non, citizen and immigrant all feel that they have sacrificed enough, that now is the time for them to “get what they deserve.” Obviously not every poor person or every minority feels this way, but those who do feel this way are the ones who are out on the streets. And once again it all comes back to low time preference. No one wants to wait 10 years for something. No one is content to see their children finally get the rights they’ve been protesting for (if they even have children) and no one wants to wait four years for the next election.

All of this is not to say that people are entirely unwilling to sacrifice. People make sacrifices all the time for the things they want. But what I’m calling for, if we want to postpone collapse, is sacrifice specifically for civilization, which is, I admit, a fairly nebulous endeavor. But I think it starts with identifying what civilization is, and how it’s imperiled. Which is, in part, the point of this post. (In fact, I firmly expect all protesting and unrest to stop once it’s released.)

Joking aside, I fear there is no simple solution even if you have managed to identify the problem, and it may in fact be that there is nothing we can do to delay the end at this point. To return to Carlin's question about the sorts of policies you might implement if you were made President and your one goal was to heal the country: I do think that creating some shared struggle we could all sacrifice for would be a good plan, as good as any, and maybe even the best plan, which is not to say that it would succeed. And this hypothetical still relies on getting someone like that elected, which is also not something that seems very likely. In other words, things may already be too far gone.

One of my biggest reasons for pessimism is that I don't think people see any connection between the unrest we're currently experiencing (both here and abroad) and the weakening of civilization, and more specifically the country. But there are really only three possibilities: the massive anger which exists can either strengthen the country, weaken it, or have no effect. If you think it's making the country stronger (or even having no effect), I'd love to hear your reasoning. But I think any sober assessment would have to conclude that it can't be strengthening it, and it can't be having no effect; therefore it must be weakening it. Leaving only the question of by how much.

None of this is to claim that anger about Trump or alternatively support for Trump (or any of the other issues) will single-handedly bring down the country. But it’s all part of a generalized trend towards higher and higher time preference. Towards wanting justice and change right now. And I understand, of course, that the differences of opinion which have split the country are real and consequential. But what is the end game? What is the presidential policy that will make it all better? What are people willing to sacrifice? To repeat a quote I used in a previous post from Oliver Wendell Holmes:

Between two groups that want to make inconsistent kinds of world I see no remedy but force.

It's a dangerous road we're on, and I would argue that as thinking gets more and more short-term, the survival of civilization is at stake. And it's at stake precisely because long-term thinking and planning is what civilization is.

To come back to the assertion that started this all off, the assertion that I promised to return to: a civilization which can't survive can't do much of anything else. Of course, at one level this is just a tautology. But at another level it ends up being a question of whether certain things can exist together. Can Trump supporters and Trump opponents live in the same country? Can a country give you everything you think you deserve right now, and yet still be solvent in 100 years? Can you have a system which is really good at reducing violence (as Pinker points out) but never abuses its power?

It's entirely possible that the answer to all of those questions is yes. And I hope that's the case. I hope that my worries are premature. I hope that, similar to the unrest in the late 60's/early 70's, things will peak and then dissipate. That it will happen without a Kent State shooting, or worse. But I also know that civilization takes sacrifice, it takes compromise, and, however unsexy and dorky this sounds, it takes a low time preference.


You may have considered donating, but never gotten around to it. Perhaps because you have a low time preference and you assume that a dollar someday is as good as a dollar now. Well, on this one issue I have a very high time preference, so consider donating now.


Why I Hope the MTA Is Right, but Also Why It’s Safer to Assume They’re Not

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


Last week’s post was titled Building the Tower of Babel, and it was written as a critique of the position and views of the Mormon Transhumanist Association (MTA). Specifically it was directed at an article written by Lincoln Cannon titled Ethical Progress is not Babel. In response to my post Cannon came by and we engaged in an extended discussion in the comments section. If you’re interested in seeing that back and forth, I would recommend that you check them out. Particularly if you’re interested in seeing Cannon’s defense of the MTA. (And what open-minded person wouldn’t be?)

I was grateful Cannon stopped by for several reasons. First, I was worried about misrepresenting the MTA, and indeed it's clear that I didn't emphasize enough that, for the MTA, technology is just one of many means to bring about salvation, and in their view insufficient by itself. Second, a two-sided discussion of the issues is generally going to be more informative and more rigorous than a one-sided monologue. And third, because I honestly wasn't sure what to do with the post, or with the MTA in general. Allow me to explain.

In a previous post I put people into three categories: the Radical Humanists, the Disengaged Middle and the Actively Religious. And in that post I said I had more sympathy for and felt more connected to the Radical Humanists than to the Disengaged Middle. The MTA is almost unique in being part of both the Radical Humanist group and the Actively Religious. Consequently I should be very favorably disposed to them, and I am, but that doesn’t mean that I think they’re right, though if it were completely up to me I’d want them to be right. This is the difficulty. On the one hand I think there are a lot of issues where we agree. And on those issues both of us (but especially me) need all the allies we can get. On the other hand, I think they’re engaged in a particularly seductive and subtle form of heresy. (That may be too strong of a word.) And I am well-positioned to act as a defender of the Mormon Orthodoxy against this, let’s say, mild heresy. And it should go without saying that I could be wrong about this. Which is one of the reasons why I think you should go read the discussion in the comments of the last post and decide for yourself.

Perhaps a metaphor might help to illuminate how I see and relate to the MTA. Imagine that you and your brother both dream of selling chocolate covered asparagus. So one day the two of you decide to start a business doing just that. As your business gets going your father offers you a lot of advice. His advice is wise and insightful and by following it your business gradually grows to the point where it’s a regional success story. But at some point your father dies.

Initially this doesn't really change anything, but eventually you and your brother are faced with a business decision where you don't see eye to eye, and your father isn't around anymore. Let's say the two of you are approached by someone offering to invest a lot of money in the business. You think the guy is shady and, additionally, that once he's part owner he may change the chocolate covered asparagus business in ways that would damage it, alter it into something unrecognizable, or potentially even destroy it. Perhaps he might make you switch to lower quality chocolate, or perhaps he wants to branch into chocolate covered broccoli. (Which is just insane.) Regardless, you don't trust him or his motives.

On the other hand, your brother thinks it’s a great opportunity to really expand the chocolate covered asparagus business from being a regional player into a worldwide concern. In the past your father might have settled the dispute, but he’s gone, and as the two of you look back on his copious advice you can both find statements which seem to support your side in the dispute. And, not only that, both of you feel that the other person is emphasizing some elements of your father’s advice while ignoring other parts. In any event you’re adamant that you don’t want this guy as an investor and part owner, and your brother is equally adamant that it’s a tremendous opportunity and the only way your chocolate covered asparagus business is really going to be successful.

None of this means that you don't still love your brother, or that either of you is any less committed to the vision of chocolate covered asparagus. Or that either of you is less respectful of your late father. But these commonalities do nothing to resolve the conflict. You still feel that this new investor may destroy the chocolate covered asparagus business, while your brother feels that the investor is going to provide the money necessary to make it a huge success. And perhaps, most interesting of all, if you could just choose the eventual outcome of the decision you would choose your brother's expected outcome. You would choose for the investment to be successful, and for chocolate covered asparagus to fill the world, bringing peace and prosperity in its wake.

But you can't choose one future over another; you can't know what will happen when you take on the investor. And in your mind it's better to preserve the company you have than risk losing it all on an unclear bet and a potentially unreliable partner.

Okay, that metaphor ended up being longer than I initially planned, and, as with all metaphors, it's not perfect. But hopefully it gives you some sense of the spirit in which I'm critiquing the MTA. And perhaps the metaphor also helps explain why there are many ways in which I hope the MTA is right, and I'm wrong. Finally, I hope it also provides a framework for my conclusion that the best course of action is to assume that they're not right. But let's start by examining a couple of areas where I definitely hope they are correct.

The first and largest area where I hope the MTA is right and I'm wrong is war and violence. There is significant evidence that humans are getting less violent. The best book on the subject is The Better Angels of Our Nature by Steven Pinker, which I reviewed in a previous post. As I mentioned in that post, I do agree that there has been a short term trend of less violence, and also a definite decrease in the number of deaths due to war. This dovetails nicely with the MTA's assertion that humanity's morality is increasing at the same rate as its technology, and given these trends, there is certainly ample reason to be optimistic. But this is where the Mormon part of the MTA comes into play. While it's certainly reasonable for Pinker and secular transhumanists to be optimistic about the future, for Mormons, and Christians in general, there is the little matter of Armageddon. Or as it's described in one of my favorite scriptures, Doctrine and Covenants Section 87 verse 6:

And thus, with the sword and by bloodshed the inhabitants of the earth shall mourn; and with famine, and plague, and earthquake, and the thunder of heaven, and the fierce and vivid lightning also, shall the inhabitants of the earth be made to feel the wrath, and indignation, and chastening hand of an Almighty God, until the consumption decreed hath made a full end of all nations;

I assume that the MTA has an explanation for this scripture that is different from mine, but I'm having a hard time finding anything specific online. If I had to guess, I imagine they would say that it has already happened. But in any case, they have to have an alternative explanation, because if we assume that the situation described above has yet to arrive, then the MTA will have at least two problems. First, the trend of decreasing violence and increasing morality will have definitely ended, and second, I think it's safe to assume that if we have to pass through the "full end of all nations," then what comes out on the other side won't bear any resemblance to the utopian transhumanist vision of the MTA. Again, I hope they're right, and I hope I'm wrong. I hope that scripture has somehow already been fulfilled, or that I'm completely misinterpreting it. I hope that humanity is more peaceful than I think, rather than less. But just because I want something to be a certain way doesn't mean that's how it's actually going to play out.

For our second area, let's take a look at genetic engineering. Just today I was listening to the Radiolab podcast, specifically the most recent episode, which was an update to an older episode exploring a technology called CRISPR. If you're not familiar with it, CRISPR is a cheap and easy technology for editing DNA, and the possibilities for its use are nearly endless. The most benign and least controversial application of CRISPR would be using it to eliminate genetic diseases like hemophilia (something they're already testing in mice). From this we move on to more questionable uses, like using CRISPR to add beneficial traits to human embryos (very similar to the movie Gattaca). Another questionable application would involve using CRISPR to edit some small portion of a species and then, taking advantage of another technique called gene drive, use the initially modified individuals to spread the edited genes to the rest of the species. An example of this would be modifying mosquitos so that they no longer carry malaria. But it's easy to imagine how this might cause unforeseen problems, and also how the technique could be used in the service of other, less savory goals. I'll allow you a second to imagine some of the nightmare scenarios this technique makes available to future evil geniuses.

CRISPR is exactly the sort of technology the MTA and other transhumanists have been looking forward to. It’s not hard to see how the cheap and easy editing of DNA makes it easier to achieve things like immortality and greater intelligence. But as I already pointed out even positive uses for CRISPR have been controversial. According to the Radiolab podcast the majority of bioethicists are opposed to using CRISPR to add beneficial traits to human embryos. (Which hasn’t stopped China from experimenting with it.)

As far as I understand it, the MTA's position on all of this is that it's going to be great, that the bioethicists worry too much. This attitude stems from their aforementioned belief that morality and technology are advancing together, which means that by the time we master a technology we will also have developed the morality to handle it. As it turns out, DNA editing is another area of agreement between the MTA and Steven Pinker, who said the following:

Biomedical research, then, promises vast increases in life, health, and flourishing. Just imagine how much happier you would be if a prematurely deceased loved one were alive, or a debilitated one were vigorous — and multiply that good by several billion, in perpetuity. Given this potential bonanza, the primary moral goal for today’s bioethics can be summarized in a single sentence.

Get out of the way.

A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as "dignity," "sacredness," or "social justice." Nor should it thwart research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future. These include perverse analogies with nuclear weapons and Nazi atrocities, science-fiction dystopias like "Brave New World" and "Gattaca," and freak-show scenarios like armies of cloned Hitlers, people selling their eyeballs on eBay, or warehouses of zombies to supply people with spare organs. Of course, individuals must be protected from identifiable harm, but we already have ample safeguards for the safety and informed consent of patients and research subjects.

Given this description perhaps you can see why I hope the MTA, and Pinker, are right. I hope that CRISPR and other similar technologies do yield a better life for billions. I hope that humanity is mature enough to deal with the technology, and that it's just as cool and as transformative as they predict. That the worries of the bioethicists concerning CRISPR, and the warnings of scripture concerning war, turn out to be overblown. That the future really is as awesome as they say it's going to be. Wouldn't it be nice if it were true?

But perhaps, like me, you don't think it is. Or perhaps you're just not sure. Or maybe, despite my amazing rhetoric and ironclad logic, you still think that the MTA is right, and I'm wrong. The key thing, as always, is that we can't know. We can't predict the future, we can't know for sure who is right and who is wrong. To be honest, I think the evidence is in my favor, but even so let's set that aside for the moment and examine the consequences of being wrong from either side.

If I'm wrong, and the MTA is correct, then my suffering will be minimal. Sure, the transhumanist overlords will dredge up my old blog posts and use them to make me look foolish. Perhaps I'll be included in a hall of fame of people who made monumentally bad predictions. But I'll be too busy living to 150, enjoying a post-scarcity society, and playing amazingly realistic video games to take any notice of their taunting.

On the other hand, if I'm right and the MTA is wrong, then the sufferings of those who were unprepared could be extreme. Take any of the things mentioned in D&C 87:6 and it's clear that even a little preparation in advance could make a world of difference. I'm not necessarily advocating that we all drop everything and build fallout shelters; I'm talking about the fundamental asymmetry of the situation. Which is to say that the consequences of being wrong are much worse in one situation than in the other.

The positions of the MTA, the transhumanists, and Pinker are asymmetrical in several ways. The first is the way I already mentioned, and it's inherent in the nature of extreme negative events, or black swans as we like to call them. If you're prepared for a black swan it only has to happen once to make all the preparation worthwhile, but if you're not prepared then it has to NEVER happen. To use an example from a previous post, imagine if I predicted a nuclear war, and I had moved to a remote place and built a fallout shelter and stocked it with shelf after shelf of canned goods. Every year I predict a nuclear war and every year people mock me, because year after year I'm wrong. Until one year, I'm not. At that point, it doesn't matter how many times I was the crazy guy from Wyoming and everyone else was the sane defender of modernity and progress, because from the perspective of consequences they got all the consequences of being wrong despite years and years of being right, and I got all the benefits of being right despite years and years of being wrong.
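
If it helps to see that asymmetry as arithmetic, here is a minimal sketch with made-up numbers. The utilities and the 1% yearly probability are purely hypothetical, chosen only to show the shape of the payoff, not to estimate anything real:

    # Illustrative sketch: the payoff asymmetry of preparing for a rare catastrophe.
    # All values are hypothetical "utilities" for a single year.

    def expected_outcome(p_catastrophe, prepared):
        cost_of_preparing = -1             # small, paid every year regardless
        unprepared_in_catastrophe = -1000  # ruinous
        prepared_in_catastrophe = -10      # bad, but survivable
        normal_year = 0

        if prepared:
            return (p_catastrophe * prepared_in_catastrophe
                    + (1 - p_catastrophe) * normal_year
                    + cost_of_preparing)
        return (p_catastrophe * unprepared_in_catastrophe
                + (1 - p_catastrophe) * normal_year)

    print(round(expected_outcome(0.01, prepared=True), 2))    # -> -1.1
    print(round(expected_outcome(0.01, prepared=False), 2))   # -> -10.0

For a wide range of assumptions, as long as preparation is cheap and being caught unprepared is ruinous, the "crazy" prepper comes out ahead in expectation long before the black swan ever arrives.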

The second way in which their position is asymmetrical is the number of people who have to be “good”. CRISPR is easy enough and cheap enough and powerful enough that a small group of people could inflict untold damage. The same goes for violence due to war. It’s not enough for the US and Russia to not get into a war. China, Pakistan, North Korea, Israel, France, Japan, Taiwan, India, Brazil, Vietnam, the Ukraine, and on and on, all have to behave as well. The point being that even if you are impressed with modern standards of morality (which I’m not by the way) if only 1% of the people decide to be really bad, it doesn’t matter how good the other 99% are.

The final asymmetry is that of time. A large part of the transhumanist vision came about because we’re in a very peaceful time where technology is advancing very quickly. Thus the transhumanists came into being during a brief period where it seems obvious that things are going to continue getting better. But they seem to largely ignore the possibility that in 100 years an enormous number of things might have changed. The US might no longer exist, perhaps democracy itself will be rare, we could hit a technological plateau, and of course we’ll have to go that entire time without any of the black swans I already mentioned. No large scale nuclear wars, no horrible abuses of DNA editing, nor any other extreme negative events which might derail our current rate of progress and our current level of peace.

As my final point, in addition to the two things I hope the MTA is right about, I'm going to add one thing which I hope they're not right about. To introduce the subject I'd like to reference a series of books I just started reading: the Culture series by Iain M. Banks, named after the civilization at the core of all the books. Wikipedia describes the Culture as a utopian, post-scarcity space communist society of humanoids, aliens, and very advanced artificial intelligences. We find out additionally that its citizens can live to be up to 400. So not immortal, but very long lived. In other words, the Culture is everything transhumanists hope for. As far as I can tell, citizens of the Culture spend their time in either extreme boredom, some manner of an orgy, or transitioning from one gender into another and back again. Perhaps this is someone's idea of heaven, but it's not mine. In other words, if this or something like it is what the MTA has in mind as the fulfillment of all the things promised by the scriptures, then I hope they're wrong. And I would offer up that they suffer from a failure of imagination.

I hope that resurrection is more than just cloning and cryonics, that transfiguration is more than having my mind uploaded into a World of Warcraft server, that “worlds without number” is more than just a SpaceX colony on Mars. That immortality is more than just the life I already have, but infinitely longer. If you’re thinking at this point that my description of things is a poor caricature of what the MTA really aspires to then you’re almost certainly correct, but I hope that however lofty the dreams of the MTA that those lofty dreams are in turn a poor caricature of what God really has in store for us.

Returning to my original point: I am very favorably disposed to the MTA. I think they have some great ideas, and I'm very impressed with the way they've combined science and religion. Unfortunately, despite all that, we have very different philosophies when it comes to the business of chocolate covered asparagus.


Given that we don’t yet live in a post-scarcity society consider donating. And if you’re pretty sure we eventually will, that’s all the more reason to donate, since money will soon be pointless anyway.


Reframing Pinker’s The Better Angels of Our Nature

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


The title of this blog is “We Are Not Saved”. I just got done reading a book by Steven Pinker, the well-known Harvard professor, which easily could have been titled “We Are Saved”. Obviously reading a book with a conclusion so different from my own required a blog post. Pinker’s book is actually titled The Better Angels Of Our Nature: Why Violence Has Declined. But before getting into it, if I’m going to keep my recent resolution to avoid the curse of knowledge, it’s necessary to give a brief summary.

If you’ve heard of the book at all, it’s probably from the standpoint of the decline of war. And most of the criticism of the book has been in that vein. Perhaps the key question on that front is whether the Long Peace, the absence of conflicts between major powers since World War II, is just a random lucky run, like a winning streak in sports, or whether it represents a new and improved era for humanity. On this point Pinker comes down on the side of it being a new era, while Taleb is of the opinion that it’s random, and as we saw in the last post, Taleb knows how easy it is to be fooled by randomness.

That’s the big headline, but the book is much broader than that. Pinker covers the decrease of violence in all forms, the general march of civilization, increases in humanitarian impulses, and the rights revolution. I said that it could easily have been titled “We Are Saved”, and in Pinker’s opinion things are not only getting better but will continue to get better. As an explanation he offers up the march of technology, reason and the values of The Enlightenment. With reason and technology taking a center stage, his view of religion is mixed to say the least. To be fair, even though he’s a self-admitted atheist, he’s not as bad as Richard Dawkins, or the late Christopher Hitchens. But the book is full of shots at religion and he has nothing but disdain for religion in its ancient form, particularly the Old Testament.  

Hopefully that's enough of an overview to get our discussion started. The book is over 800 pages and I'm obviously only going to be able to talk about a small part of it in the few thousand words available to me in a blog post. Further, I'm going to use some of those words to introduce the concept of the motte and bailey argument. This idea was popularized by Scott Alexander of SlateStarCodex (though not his idea originally) and I can't really improve on his description, so I'll just quote it.

[The motte-and-bailey was] a form of medieval castle, where there would be a field of desirable and economically productive land called a bailey, and a big ugly tower in the middle called the motte. If you were a medieval lord, you would do most of your economic activity in the bailey and get rich. If an enemy approached, you would retreat to the motte and rain down arrows on the enemy until they gave up and went away. Then you would go back to the bailey, which is the place you wanted to be all along.

So the motte-and-bailey doctrine is when you make a bold, controversial statement. Then when somebody challenges you, you claim you were just making an obvious, uncontroversial statement, so you are clearly right and they are silly for challenging you. Then when the argument is over you go back to making the bold, controversial statement.

Instances of this tactic abound, and if you were paying attention there were numerous examples of it during the recent election. As in when Trump starts off by saying he’s going to round up all of the illegal aliens (the bailey), but when pressed, he says he’s only going to deport the criminals (the motte). He whips up his base with the bailey, and then retreats to the motte when closely questioned.

I bring up the motte and bailey tactic because it's woven all throughout Better Angels, and accordingly it makes a good framework for my criticism of the book. With respect to numbers and data, the book is very solid. In every area he covers, he can show a clear trend of things getting less violent. Whether it's a decrease in deaths due to warfare from prehistory to the present day, a decrease in English homicides since the 1600s, or a decrease in domestic violence since 1970, things have clearly been getting better. This is his motte. His bailey is to extrapolate that trend forwards in time. But when someone accuses him of that, of claiming the age of war is over, he falls back to the motte and claims that he has made no predictions about the future, that he's just assembling statistics from the past. For example:

I am sometimes asked, “How do you know there won’t be a war tomorrow (or a genocide, or an act of terrorism) that will refute your whole thesis?” The question misses the point of this book. The point is not that we have entered an Age of Aquarius in which every last earthling has been pacified forever. It is that substantial reductions in violence have taken place, and it is important to understand them….The goal of this book is to explain the facts of the past and present, not to augur the hypotheticals of the future…The truth is, I don’t know what will happen across the entire world in the coming decades, and neither does anyone else.

As I said, the motte is the unassailable part of the argument, and I think largely Pinker has succeeded in this. But even here he uses some sleight of hand. As I mentioned above, we can extrapolate warfare deaths back thousands of years, with archeological data all the way back to 10,000 BC in some places. This gives a pretty clear trendline for violent deaths due to war. But it's a trendline with big gaps in it. We have data stretching back thousands of years, but if you go back more than a few centuries that data is really sparse. What this means is that it's hard to know if the decrease in deaths from warfare is a trend a thousand years in the making or only a few hundred. And even if it is a thousand years in the making, the sparsity of the data means that we don't know how smooth the trend is. How many giant peaks of violence are there? And how many valleys of peace?
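
To illustrate what that sparsity can hide, here is a small sketch with entirely made-up numbers. This is not Pinker's data, just a toy series consisting of a genuine slow decline plus a handful of catastrophic spikes:

    # Illustrative sketch: a trendline fit to sparse historical data can miss
    # huge peaks of violence entirely. Hypothetical numbers throughout.
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(0, 3000, 10)               # a 3000-year "record", one point per decade
    baseline = np.linspace(500, 50, len(years))  # a genuine slow decline in war deaths
    spikes = np.zeros(len(years))
    spikes[rng.choice(len(years), 5, replace=False)] = 2000  # rare catastrophic wars
    deaths = baseline + spikes

    # A historian with only a handful of surviving observations:
    surviving = rng.choice(len(years), 8, replace=False)
    slope_sparse = np.polyfit(years[surviving], deaths[surviving], 1)[0]
    slope_full = np.polyfit(years, deaths, 1)[0]

    print(f"slope from 8 surviving data points: {slope_sparse:.3f} deaths/year")
    print(f"slope from the full record:         {slope_full:.3f} deaths/year")

A line fit to the few surviving points will usually look like a smooth, reassuring decline; the full record shows the same decline punctuated by peaks the sparse sample never sees. The numbers here are invented, but the problem of drawing a smooth trend through a gappy record is not.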

Whether or not it was intentional, by pulling in data going back thousands of years and comparing hunter-gatherer societies to modern civilizations, Pinker appears to be making the case that the decrease in violence represents a trend that's thousands of years old, which is much more impressive than if it's just a few centuries old. And this is the first example of the bailey: the impression that decreasing violence of all types is a trend stretching thousands of years into the past and therefore likely to continue indefinitely into the future, even though, from the perspective of data, we can only talk about warfare deaths, and even then the data is spotty.

As I said, Better Angels is not just a book about war; Pinker wants to show that the past was more violent on nearly all measurements. In service of his thesis he moves from deaths due to war to deaths from homicides. Here he's only able to go back to the 13th century (and I think there are some significant assumptions involved in getting back even that far). And again we see a graph that starts high and slopes downward, giving us the impression that we're dealing with a trend that's been progressing in the same direction for hundreds and hundreds of years. The problem once again comes from the data that's missing. His numbers are largely from Western Europe. This gives him a particularly low endpoint, since present day Western Europe is extraordinarily nonviolent by historical standards, and, without saying it explicitly, Western Europe ends up as a proxy for the world at large, and by extension the endpoint to which we're all headed. However, once you're outside of Western Europe the trend is a lot less obvious. For example, the current murder rate in Venezuela is as bad as it ever was in Europe, even if you go all the way back to the 13th and 14th centuries. I assume Pinker doesn't think it will take another 700 years for Venezuela to reach the level of Sweden, but since he never mentions Venezuela it's hard to say. Instead he selects data in a way designed to give the impression that the downward trend in violence is global, and hundreds of years in the making, when, on closer inspection, it appears to be both more recent and more localized.

From homicides he moves on to domestic abuse. Once again we see a distinct downward trend, but with each new category of violence his data is restricted to a smaller and smaller time frame. For war deaths he was able to go back thousands of years, for homicides, hundreds of years, for domestic violence he’s only able to go back a few decades to the 1970’s, and nearly all of that data is from the US. A trend that’s thousands or hundreds of years old is impressive, a trend that’s only as old as I am, less so. But the way it’s structured you get the impression that everything from war deaths, to murders, to domestic violence all the way through to spanking is part of a vast arrow of progress carrying us forward to a continually brighter tomorrow.

This is Pinker in his bailey getting rich. It's this claim of a trend stretching into the future, coupled with the triumph of progress, that gets people's attention; it's this claim that gets Slate to call the book a monumental achievement. Of course, when necessary, Pinker retreats to his motte and claims that he's not predicting anything, but the whole appeal of the book is what it implies about the future, and the longer he can extend the arrow of progress into the past, the farther it appears to extend into the future.

In tying everything together in a single arc, he does two things. First there's the structure I already mentioned, where he anchors your thinking thousands of years in the past by using archaeological data on warfare deaths and then layers the rest on top of that base. And then, secondly, he fills in the missing data, particularly in the realm of domestic abuse and rights more broadly, with countless anecdotes. These anecdotes are naturally compelling. As humans we love stories, and Pinker knows that, but he also knows that they're no substitute for actual data. Still he uses them to construct something that looks like the fortified tower of the motte, but really isn't.

Using both of these techniques together, Pinker makes it seem like the decrease in violence is a historical juggernaut whose speed is only increasing as both social and technological progress become more rapid. He may deny that he's making any predictions about the future, but once the reader has an unstoppable, accelerating juggernaut in his head, it's going to be hard for him to imagine it stopping suddenly, let alone going in reverse. I see, and agree with, the same data Pinker does. I just don't see a juggernaut; I see something far more fragile.

In service of his argument Pinker is very committed to painting the past in as violent a light as possible. The first chapter of the book is titled "A Foreign Country," as in the past is a foreign country. Well, the future is a foreign country as well, and I see at least six ways in which the decrease in violence is more fragile than Pinker's book would indicate, even if we grant a trend of decreasing violence lasting hundreds of years, which is itself a shakier thesis than Pinker wants to admit.

First, while Pinker offers various explanations for why violence has decreased, the one he comes back to over and over is the Leviathan, a term coined by Thomas Hobbes in 1651 to describe an all-powerful state. In Pinker's opinion decreases in violence are directly tied to increases in state power. In fact, if you look closely at his data, you'll find that the clearest trendline for a decrease in violence isn't the length of time which has passed, but the progression from hunter-gatherer to hunter-horticulturalist to full agriculture, with the accompanying increase at each step in the centralization and power of the state. If you have any libertarian leanings this trend should worry you, but even if you don't, by tying everything up in a single larger and larger entity we introduce fragility, even if it's just through the single points of failure we create. You may agree that this is still a great deal, but is it still a great deal if the endpoint of the trend is zero murders, but a 1984-style surveillance state?

Second, and closely related to the last point, it would appear that Pinker's juggernaut relies on the continued health and stability of the state. As I said, his warfare data has a lot of gaps, even though it goes back thousands of years. One of the gaps that seemed particularly noteworthy was the period after the fall of the Roman Empire. Pinker gives the impression that violence has decreased on a smooth line since the Sumerians first planted wheat in the valleys of the Tigris and the Euphrates. But if the Leviathan collapses, I can only assume that violence rockets back up. Pinker doesn't touch on this point, but the biggest single point of failure in the Leviathan is the Leviathan itself. And this time around, if there's any collapse of the state, we'll get to add nukes to the mix. In other words we are only saved if the state remains healthy, and I think that at present there's reason for a lot of concern on that count.

Third, even if the Leviathan remains healthy, the modern world in general is more fragile. Pinker can be right about everything and still have a single bioterrorist bring the entire thing down, illustrating that however peaceful we've become, one big difference between the future and the past is the amount of damage a single individual can do. Catastrophes caused by more powerful weapons aren't limited to bioterrorism; they include the potential threat of artificial intelligence, nanotechnology, and the granddaddy of threats, nuclear war. Pinker doesn't spend any time on the first two, but as you might imagine he spends significant time on nuclear weapons. On this count he has some compelling arguments, but I think he overlooks one big part of the argument. Whether this is purposeful or not I don't know, but the part he overlooks is the enormous time horizon he's dealing with. Perhaps it's true that nuclear weapons won't be used in the next 50 or even the next 100 years, but what about the next 500? Even if we somehow get rid of them all, the technology will still be around.

Fourth, even though I claim that the harvest is past and the summer is ended, I don't claim that there was no harvest or that there was no summer. There was a harvest and there was a summer; I'm merely saying it can't last forever, and that it won't provide permanent salvation. If you look at Pinker's data, and even his anecdotes, you'll find that they mostly concern this same summer and this same harvest that I've talked about: the period that starts roughly with the Enlightenment and continues to the present day. Where we disagree is how long it can last. As I pointed out in my blog post about the limits of growth, there are limits to progress, limits we may have already reached. As I said in the last post, we may already be out of dragons to slay. The technological progress which has enabled the decrease in violence may be about to hit a wall. Historically the few hundred years of progress we've experienced is not, in the general scheme of things, all that long; the only difference between this period and previous periods of relative stability is the speed of technology, a speed which is ultimately unsustainable.

Thus far I've been focusing on more tangible and quantifiable concerns, but for the last two points I'd like to look at a couple of things that are more speculative. I've largely talked about the decrease in violence, but Pinker's writing reaches out to encompass the entire arc of progress, including what he terms the rights revolution. Under this heading he includes everything from civil rights to gay rights to animal rights, and unlike with the other trends, he admits that recognition of most of these rights is a relatively recent phenomenon.

As my fifth point, I worry about where it all ends. When speaking of rights I agree with Pinker that there is a trend and the trend is accelerating, but we're running out of road. We already have rights movements for everything imaginable, from animals, to transgendered individuals, to children (though not fetal rights). What else is there? It may be too soon to tell, but it appears now that the only thing left is to restrict the rights of those who've traditionally been privileged, a weird circular progression with strange unknowable consequences (including, possibly, the election of Trump?) Is it possible to have too much of a good thing? Antibiotics were a true miracle when discovered, but using them for everything eventually makes them completely ineffective, as bacteria develop resistance. I've seen the same argument made about accusations of racism: initially they were useful and very effective, but we've gotten to a point where the accusations have been overused. Once again it may not be a big deal, but there are a lot of times where things work until suddenly they don't, where violence decreases until suddenly it doesn't, and I wonder if the rights revolution is an early example of that.

Finally, when I have an idea there's one friend in particular that I always run it by. First off, explaining it to him inevitably clarifies my thinking; secondly, if this friend sees a hole in my argument he's going to pounce on it, and most of my discussions with him end up more as low-intensity debates. Additionally, in any discussion or debate this friend wants to make sure that he understands the core value(s) of the other person, since it's hard to have a productive debate if the two people can't even agree on what's important. For example, a productive debate on whether incarceration rates are too high is going to prove difficult if one person's core value is maximum liberty, and the other person's core value is zero crime.

And this takes me to my final point. What is our core value? Pinker's is the reduction of suffering and violence. This is laudable, and I certainly don't fault Pinker (or anyone) if that is in fact his core value. But it's not my core value and it probably shouldn't be yours. To begin with, if you're Mormon, you believe we already rejected the plan of zero sin and zero suffering. If you're not Mormon, but you're still Christian, your core value should be to do God's will. (A sentiment Pinker finds abhorrent.) But what if you're an atheist like Pinker? Well then a reduction in violence may be your core value, but I can think of one that's better. It's the one my friend holds: "For Intelligence to Escape This Gravity Well."

You may not initially agree that this is a better core value. But if this doesn't happen then we're definitely not saved. Humanity could end up perfectly peaceful and nonviolent, but if we don't eventually leave the Earth we're going to be wiped out anyway. That might be several billion years from now, when the Earth can no longer support life, but it would probably be a lot sooner than that. Or perhaps you do agree that it's probably a better core value, but you don't think there's any reason we can't do both. I'm not so sure about that. The countries that are the farthest along the Better Angels path spend vast sums of money on the welfare state, which is essentially a nonviolent humanitarian project, and very little on making humanity a two+ planet species, a staggeringly difficult project regardless of what Elon Musk says.

I’m glad that we live in a time of unprecedented peace, and I thoroughly enjoyed Pinker’s book. But despite this I think he falls into the trap common to most defenders of modernity: thinking that the recent summer of progress is an eternal summer and that the harvest of technology will last forever.


Nukes



The key theme of this blog is that progress has not saved us. It has not made us any less sinful, it has not improved our lives in any of the ways that really matter; rather it has introduced opportunities to sin that would beggar the imagination of someone living 200 years ago.

Of course it's easy, and maybe even forgivable, to think this is not the case. We live longer; there's less hunger and poverty; along with this comes more freedom and less violence. For now we're going to focus on that last assertion, that things are less violent. And since we already broached the subject of nukes in our last post, we're specifically going to continue to expand on that idea.

One of the best known arguments about a decrease in violence comes from someone I actually admire quite a bit, Steven Pinker. He made the argument in his book The Better Angels of Our Nature. Taleb, as you might imagine, disagrees with Pinker's thesis and, in what is becoming a common theme, asserts that Pinker is confusing the absence of volatility with an absence of fragility. If you want to read Taleb's argument you can find it here. Needless to say, as much as I admire Pinker, on this issue I agree with Taleb.

As I have already said, this post is going to be an extension of my last one. In that post I urged people to eschew the immediate political fight in favor of a longer-term historical outlook. In other words that post was about being wise, and this post is about what will happen if we aren't wise. In particular, what things look like as far as nukes are concerned.

As you can imagine, if our survival hinges on our wisdom then I'm not optimistic, and I personally predict that nukes are in our future. In this I think, as with so many things, that I am contradicting conventional wisdom, or at least what most people believe about nuclear weapons, if they in fact believe anything at all. If they do, they might be thinking something along these lines: It's been over 70 years since the last nuke was exploded in anger. (In fact I am writing these words on the 71st anniversary of Nagasaki, though they won't be published until a few days later.) And they may further think: Yes, we have nukes, but we're not going to use them. Sure, some crazy terrorist may explode one, but the kind of all-out exchange we were worried about during the Cold War is not going to happen. First, don't underestimate the impact of a lone terrorist nuke, and second, don't write off an all-out exchange either. Particularly if we're going to poke the bear in the manner I described in my last post.

The first question to consider is why we are still worried about nukes even 70 years after their invention. Generally the development of a technology is quickly followed by the development of countermeasures. To take just one example, being able to drop bombs from the air was terrifying to people when it first became a possibility, but it didn't take long to develop fighter aircraft, anti-aircraft guns and surface-to-air missiles. Why, then, 71 years after Nagasaki and 50+ years after the development of the ICBM, can we still not defend ourselves? Can't we shoot missiles down? Well, first off, even if we could, a lot of people think building a missile defense system is the ultimate way of poking the bear. For what it's worth I don't fall into that camp, despite my reluctance, in general, to poke the bear. But even if we decide that's okay, right now it just isn't technologically feasible to make a missile defense system that works against someone like Russia or China.

At this point I'd like to offer up data on the effectiveness of various anti-missile systems. Unfortunately there's not a lot of it, and what there is isn't good. If North Korea or Iran happened to launch a single missile at the United States we might be able to stop it, but when asked what he would do in that case one knowledgeable US official is reported to have said:

If a North Korean ICBM were launched in the direction of Seattle, …[I] would fire a bunch of GMD interceptors and cross [my] fingers.

Some clarification: GMD stands for Ground-based Midcourse Defense and is our current anti-ballistic-missile platform; also, North Korea doesn't currently have a missile capable of reaching Seattle. But it's interesting to note what they do have, given how impoverished the country is in all other respects.

As I said I’d like to offer up some data, but there isn’t much of it. Recent tests of our anti-missile systems have been marginally promising but they have mostly been conducted in a reasonably controlled environment, not on actual missiles being fired by surprise from a random location, at a time chosen by the aggressor for optimal effectiveness.

Tacked on at the end of the Wikipedia article on the US’s efforts at missile defense is a great summary of the difficulties of defending against a Russian or Chinese ICBM. In short:

  • Boost-phase defenses are the only layer that can successfully destroy a MIRVed ICBM (one carrying multiple independently targetable warheads).
  • Even so, boost-phase interception is really difficult, particularly against solid-fuel ICBMs of the type that Russia and China use.
  • And even then, the only current technology capable of doing it has to be within 40 km (~25 miles) of where the missile is launched. For those in Utah, that means an anti-missile defense system located at Hill Air Force Base could shoot down missiles launched from no farther away than downtown Salt Lake City.

The Wikipedia article concludes by saying that, “There is no theoretical perspective for economically viable boost-phase defense against the latest solid-fueled ICBMs, no matter if it would be ground-based missiles, space-based missiles, or airborne laser (ABL).” (A reference from the following paper.)

In the end it’s not hard to see why nuclear missiles are so hard to defend against. Your defense can’t be porous at all. Letting even a single warhead get through can cause massive destruction. Add to that their speed and small size and you have the ultimate offensive weapon.

Thus far we've talked about the difficulties in defending against a Russian or Chinese ICBM. But of course we haven't done anything to address why they might decide to nuke us in the first place. I did cover that at some length in my last post, but before we dive back into that, let's look at people we know want to nuke us: terrorists.

Obviously there is no shortage of terrorist groups who would love to nuke us if they could get their hands on a weapon. Thus far we've been lucky, and as far as we know there are no loose nukes. And I'm sure that preventing this is one of the top priorities of every intelligence agency out there, so perhaps it won't happen. Still, this is another situation where we're in a race between singularity and catastrophe. On a long enough time horizon the chances that there will be some act of nuclear terrorism approach 100%. To argue otherwise would be to assert that eventually terrorism and nukes will go away. I will address the latter point in a minute, but as to the former, I don't think anyone believes that terrorism will disappear. If anything, most sources of grievance have increased in the last few years. If you think I'm wrong on this point I'd be glad to hear your argument.

Of course, if we never have an incident of nuclear terrorism, then, as I frequently point out, that's great. If I'm wrong nothing happens. But if I'm right…

Perhaps you might argue that a single nuke going off in New York or Paris or London is not that bad. Certainly it would be one of the biggest news stories since the explosion of the first nuclear weapons, and frankly it's hard to see how it doesn't end up radically reshaping the whole world, at least politically. Obviously a lot depends on who ultimately ended up being responsible for the act, but we invaded Iraq after 9/11 and they had nothing to do with it (incidentally this is more complicated than most people want to admit, but yeah, basically they didn't have anything to do with it and we invaded them anyway). Imagine who we might invade if an actual nuke went off.

And then of course there’s the damage to the American psyche. Look at how much things changed just following 9/11. I can only imagine what kind of police state we would end up with after a terrorist nuke exploded in a major city. In other words, I would argue that a terrorist nuke is inevitable and that when it does happen it’s going to have major repercussions.

But we still need to return to a discussion of a potential World War III, a major nuclear exchange between two large nation states. What are the odds of that? Since the end of the Cold War the conventional wisdom has been that the odds are quite low, but I can think of at least half a dozen factors which might increase them.

The first factor is the one I covered in my last post: we seem determined to encircle and antagonize the two major countries that have a large quantity of nuclear weapons. I previously spoke mostly about Russia, but if you follow what's happening in the South China Sea (that article was three hours old when I wrote this), or if you've heard about the recent ruling by The Hague, we're not exactly treating China with kid gloves either. I've already said a lot about this factor, so we'll move on to the others.

The next factor which I think increases the odds of World War III is the proliferation of nuclear weapons. I know that most recently Iran looks like a success story: here's a country that wanted nuclear weapons and we stopped them. Well, of course, that remains to be seen, but it does seem intuitive that the longer we go the more countries will have nukes. Perhaps it might be instructive to determine the rate at which this is happening. In 1945 there was one country. Today, in 2016, everyone pretty much agrees that there are nine. Dividing 71 years by those 8 additions, we get a new nuclear nation roughly every nine years. Which means that in 99 years we'll have another 11 nations with nuclear weapons, assuming the rate of acquisition doesn't increase. But actually most technological innovation doesn't follow a linear curve. Consequently we may see an explosion (no pun intended) in nations with nuclear weapons, or it may be gradual, or it may not happen at all (again, this would be great, but unexpected).

But let's assume the rate at which new countries are added to the nuclear club stays constant: it takes 9 years on average to add a nation, and in 100 years we've only added 11 more countries. On the face of it that may seem fairly minor, but if we assume that any two belligerents could start World War III, then those 11 nations alone give us 55 potential starting points for World War III, rather than the one starting point we had during the bipolar situation which existed during the Cold War.
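If it helps, here's the back-of-the-envelope arithmetic behind those two paragraphs as a quick Python sketch. The dates and counts are the ones given above; the function names, the 100-year horizon, and the choice to count pairs only among the new nations are just my illustrative assumptions, nothing more rigorous.

```python
# Back-of-the-envelope proliferation arithmetic (illustrative only).

def years_per_new_nuclear_state(first_year=1945, current_year=2016,
                                initial_states=1, current_states=9):
    """Average number of years between each new nuclear-armed state so far."""
    return (current_year - first_year) / (current_states - initial_states)

def pairs(n):
    """Number of distinct pairs of belligerents among n nuclear states."""
    return n * (n - 1) // 2

rate = years_per_new_nuclear_state()       # (2016 - 1945) / 8 ≈ 8.9 years per new state
new_states = round(100 / rate)              # ≈ 11 additional states over the next century
print(rate, new_states)

# 11 new nuclear states taken two at a time gives the 55 potential
# flashpoints mentioned above; counting the existing nine members as
# well would only push the number higher.
print(pairs(11))                            # -> 55
```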

In saying this I realize, of course, that there were more than two nations with nukes during the Cold War, but everyone had basically lined up on one side or the other. In 100 years, who knows what kind of alliances there will be? Even France and the United States have had rocky patches in their relationship over the last several decades. (More about France later.)

The third factor which might increase the odds is the wildcard that is China. As I mentioned above, for a long time we had a bipolar world. The Soviet Union only had to worry about the United States and vice versa. Now we have an increasingly aggressive China whose intentions are unclear, but which is certainly very ambitious. And, from the standpoint of nuclear weapons, they're keeping their cards very close to their chest.

Most people have a tendency to dismiss China because they are still quite far behind the US and Russia. But they're catching up fast, and since they weren't really part of the Cold War there are a lot of restrictions that apply to Russia and the US which don't apply to China's weapons, allowing them (from the article I just linked to)

…considerably more freedom to explore the technical frontiers of ballistic and cruise missiles than either the US or Russia.

The fourth factor involves a concept we're going to borrow from Dan Carlin, of the podcast Hardcore History: the concept of the Historical Arsonist. These are people like Hitler, Napoleon, Genghis Khan, etc., who burn down the world, generally not caring how many people die or what else happens, in their quest to remake things in their image. You can see people like this going back as far as we have records, up through as recently as World War II. While it's certainly possible that we no longer have to worry about this archetype, they seem to be a fairly consistent feature of humanity. If they haven't disappeared, then when the next one comes along he's going to have access to nuclear weapons. What does that look like? During Hitler's rise he was able to gain a significant amount of territory just by asking; how much more effective would he have been if he had threatened nuclear annihilation when he didn't get his way?

This brings up another point: are we even sure we know all the ways someone could use nuclear weapons? In the past, one of the defining features of these historical arsonists was that they took military technology and used it in a way no one expected. Napoleon was the master of artillery and was able to mobilize and field a much bigger army than had previously been possible. Hitler combined the newly developed tank and aircraft into an unstoppable blitzkrieg. Alexander the Great had the phalanx. Nuclear weapons, as I've mentioned, are hard enough to defend against in any case, but imagine the most deviously clever thing someone could do with them, and then imagine that it was even more devious than that. With something of that level, you might have historical arson on a scale never before imagined.

The fifth factor which makes the odds of World War III greater than commonly imagined is the potential for change in the underlying geopolitics. By this I mean nations can break up, they can change governments, national attitudes can mutate, etc. We've already seen the Soviet Union break up, and while that went fairly smoothly (at least so far; it actually hasn't been that long when you think about it), there's no reason to assume that it will go that smoothly the next time. Particularly when you look at the lesson of the former Soviet republics that did give up their weapons: when you look at what's happening in Ukraine, it seems likely that they now regret giving up their nukes.

Of course the US isn't going to last forever. I have no firm prediction of what the end of the country looks like, and once again it's possible that we'll reach some sort of singularity long before that, but it may happen sooner than we imagine, particularly if the increased rancor of the current election represents any kind of trend. Thus if, or more likely when, something like that happens, what does it look like in terms of nukes? If Texas breaks off, that's one thing, but if you end up with seven nations, who ends up with the nukes?

And then of course there's the possibility of a radical change in government. Some people think that Trump would be catastrophic in this respect. On the other side of the aisle, many conservatives think that a country like France might be taken over by Muslims if demographic trends continue and immigration isn't stopped. Certainly a book on the subject has proven very popular. Does a Muslim-run France with nukes act exactly the same as the current nation? Maybe, maybe not.

The final factor to consider, at least for those who believe in revelation and scripture, is the various references to the last days which fit very well with what might be expected from nuclear warfare. We believe that war will be poured out upon all nations, that the elements will melt with a fervent heat, and finally that the earth will be baptized by fire. Obviously claiming to know what this prophecy means is a dangerous and prideful game, and that is not what I'm doing. What I am saying is that this is one more factor to be added to and weighed alongside the others which have already been mentioned.

The point of all this is not to convince you to drop everything and start building a bomb shelter (though I think if you already have one you shouldn't demolish it). Along with everything I've said, I still believe that no man knoweth the hour. I'm also not saying I know that some form of nuclear armageddon will accompany the Second Coming. My point, as always, is that we are not saved and cannot be saved through our own efforts. Only the Son of Man and Prince of Peace has the ability to bring true and lasting peace. Further, and perhaps even more importantly, thinking we have achieved, or even can achieve, peace on our own, that we just need to keep pushing the spread of science, or liberal democracy, or our "enlightened" Western values, is more dangerous and more likely to hasten what we fear than reminding ourselves of the fallen nature of man and restricting ourselves to the preaching of the gospel, while eschewing the preaching of progress.

In the end, attempting to eliminate World War III may paradoxically hasten its arrival…