
Tetlock, the Taliban, and Taleb



I.

There have been many essays written in the aftermath of our withdrawal from Afghanistan. One of the more interesting was penned by Richard Hanania, and titled “Tetlock and the Taliban”. Everyone reading this has heard of the Taliban, but there might be a few of you who are unfamiliar with Tetlock. And even if that name rings a bell you might not be clear on what his relation is to the Taliban. Hanania himself apologizes to Tetlock for the association, but “couldn’t resist the alliteration”, which is understandable. Neither could I. 

Tetlock is known for a lot of things, but he got his start by pointing out that “experts” often weren’t. To borrow from Hanania:

Phil Tetlock’s work on experts is one of those things that gets a lot of attention, but still manages to be underrated. In his 2005 Expert Political Judgment: How Good Is It? How Can We Know?, he found that the forecasting abilities of subject-matter experts were no better than educated laymen when it came to predicting geopolitical events and economic outcomes.

From this summary the connection to the Taliban is probably obvious. This is an arena where the subject matter experts got things very wrong. Hanania’s opening analogy is too good not to quote:

Imagine that the US was competing in a space race with some third world country, say Zambia, for whatever reason. Americans of course would have orders of magnitude more money to throw at the problem, and the most respected aerospace engineers in the world, with degrees from the best universities and publications in the top journals. Zambia would have none of this. What should our reaction be if, after a decade, Zambia had made more progress?

Obviously, it would call into question the entire field of aerospace engineering. What good were all those Google Scholar pages filled with thousands of citations, all the knowledge gained from our labs and universities, if Western science gets outcompeted by the third world?

For all that has been said about Afghanistan, no one has noticed that this is precisely what just happened to political science.

Of course Hanania’s point is more devastating than Tetlock’s. The experts weren’t just “no better” than the Taliban’s “educated laymen”. The “experts” were decisively outcompeted despite having vastly more money and, in theory, all the expertise. Certainly they had all the credentialed expertise…

In some ways Hanania’s point is just a restatement of Antonio García Martínez’s point, which I used to end my last post on Afghanistan—the idea that we are an unserious people. That we enjoy “an imperium so broad and blinding” we’ve never been “made to suffer the limits of [our] understanding or re-assess [our] assumptions about [the] world”.

So the Taliban needed no introduction, and we’ve introduced Tetlock, but what about Taleb? Longtime readers of this blog should be very familiar with Nassim Nicholas Taleb, but if not I have a whole post introducing his ideas. For this post we’re interested in two things, his relationship to Tetlock and his work describing black swans: rare, consequential and unpredictable events. 

Taleb and Tetlock are on the same page when it comes to experts, and in fact for a time they were collaborators, co-authoring papers on the fallibility of expert predictions and the general difficulty of making predictions—particularly when it came to fat-tail risks. But then, according to Taleb, Tetlock was seduced by government money and went from pointing out the weaknesses of experts to trying to supplant them, by creating the Good Judgment Project and the whole enterprise of superforecasting.

The key problem with expert prediction, from Tetlock’s point of view, is that experts are unaccountable. No one tracks whether they were eventually right or wrong. Beyond that, their “predictions” are made in such a way that even determining their accuracy is impossible. Additionally, experts are no better at prediction than educated laypeople. Tetlock’s solution is to let anyone make predictions, but in the process ensure that the predictions can be tracked and assessed for accuracy. From there you can promote the people with the best track records. A sample prediction might be “I am 90% confident that Joe Biden will win the 2020 presidential election.”
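To make the bookkeeping concrete, here is a minimal sketch of how such a track record can be scored. It assumes Brier scoring, the squared-error measure Tetlock’s tournaments are known for using; the track record itself is hypothetical:

```python
# Minimal sketch of Tetlock-style prediction tracking, assuming
# Brier scoring; the track record below is hypothetical.

def brier_score(stated_prob: float, happened: bool) -> float:
    """Squared error between stated probability and outcome:
    0.0 is perfect, 0.25 is a coin flip, 1.0 is maximally wrong."""
    return (stated_prob - (1.0 if happened else 0.0)) ** 2

# (stated probability, what actually happened)
track_record = [
    (0.90, True),   # "90% confident Biden wins in 2020" -- he did
    (0.70, False),  # a hypothetical miss
    (0.95, True),
]

average = sum(brier_score(p, h) for p, h in track_record) / len(track_record)
print(f"Average Brier score: {average:.3f}")  # lower is better
```

Promote whoever has the lowest average score, and you have the superforecaster pipeline in miniature.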

Taleb agreed with the problem, but not with the solution. And this is where black swans come in. Black swans can’t be predicted; they can only be hedged against and prepared for. Superforecasting, by giving the illusion of prediction, encourages people to be less prepared for black swans, leaving them worse off in the end than they would have been without any prediction at all.

In the time since writing The Black Swan, Taleb has come to hate the term, because people have twisted it into an excuse for precisely the kind of unpreparedness he was trying to prevent.

“No one could have done anything about the 2007 financial crisis. It was a black swan!”

“We couldn’t have done anything about the pandemic in advance. It was a black swan!” 

“Who could have predicted that the Taliban would take over the country in nine days! It was a black swan!”

Accordingly, other terms have been suggested. In my last post I reviewed a book which introduced the term “gray rhino”, something people can see coming, but which they nevertheless ignore. 

Regardless of the label we decide to apply to what happened in Afghanistan, it feels like we were caught flat-footed. We needed to be better prepared. Taleb says we can be better prepared if we expect black swans. Tetlock says we can be better prepared by predicting what to prepare for. Afghanistan seems like precisely the sort of thing superforecasting was designed for. Despite this, I can find no evidence that Tetlock’s stable of superforecasters predicted how fast Afghanistan would fall, or any evidence that they even tried.

One final point before we move on: this last bit illustrates one of the biggest problems with superforecasting, the idea that you should be judged only on the predictions you actually made, so that if you were never asked to make a prediction about something, the endeavor still “worked”. But reality doesn’t care about what you chose to make predictions on vs. what you didn’t. Reality does whatever it feels like. The fact that you didn’t choose to make any predictions about the fall of Afghanistan doesn’t mean that thousands of interpreters weren’t left behind. And the fact that you didn’t choose to make any predictions about pandemics doesn’t mean that millions of people didn’t die. This is the chief difference between Tetlock and Taleb.

II.

I first thought about this issue when I came across a poll on a forum I frequent, in which users were asked how long they thought the Afghan government would last. (In the interest of full disclosure, my answer was one to two years.)

While it is true that a plurality of respondents (16 people) said less than six months, six months was still much longer than the nine days it actually took (from the capture of the first provincial capital to the fall of Kabul). And from the discussion that followed the poll, it seemed most of those 16 were expecting the government to fall closer to six months, or perhaps three, than to one week. In fact the best thing, prediction-wise, to come out of the discussion was when someone pointed out that 10 years previously The Onion had posted an article with the headline U.S. Quietly Slips Out Of Afghanistan In Dead Of Night, which is exactly what happened at Bagram.

As it turns out this is not the first time The Onion has eerily predicted the future; there’s a whole subgenre of posts cataloguing the times it’s happened. How do they do it? Part of the answer, of course, is selection bias. No one expects them to predict the future, so nobody comments on all the articles that didn’t come true, but when one does, it’s noteworthy. Still, I think there’s something else going on as well: they come up with the worst or most ridiculous thing that could happen, and because of the way the world works, some of the time that’s exactly what does happen.

Between the poll answers being skewed from reality and the link to the Onion article, the thread led me to wonder: where were the superforecasters in all of this?

I don’t want to go through all of the problems I’ve brought up with superforecasting (I’ve easily written more than 10,000 words on the subject) but this event is another example of nearly all of my complaints. 

  • There is no methodology to account for the differing impact of being incorrect on some predictions vs. others. (Being wrong about whether the Tokyo Olympics will be held is a lot less consequential than being wrong about Brexit.)
  • Their attention is naturally drawn to obvious questions where tracking predictions is easy. 
  • Their rate of success is skewed both by picking only obvious questions and by lumping together the consequential and the inconsequential.
  • People use superforecasting as a way of more efficiently allocating resources, but efficiency is essentially equal to fragility, which leaves us less prepared when things go really bad. (It was pretty efficient to just leave Bagram all at once.)

Of course some of these don’t apply, because as far as I can tell the Good Judgment Project and its stable of superforecasters never tackled the question. But they easily could have. They could have had a series of questions about whether the Taliban would be in control of Kabul by a certain date; this seems specific enough to meet their criteria. But as I said, I could find no evidence that they had. Which means either they did make such predictions and were embarrassingly wrong, so it’s been buried, or, despite its geopolitical importance, it never occurred to them to make any predictions about when Afghanistan would fall. (But it did occur to a random poster on a fringe internet message board?) Both options are bad.

When people like me criticize superforecasting and Tetlock’s Good Judgment Project in this manner, the common response is to point out all the things they did get right, and further that superforecasting is not about getting everything right; it’s about improving the odds and getting more things right than the old method of relying on experts. This is a laudable goal, but as I’ve pointed out, it suffers from several blind spots. The blind spot of impact is particularly egregious and deserves more discussion. To quote from one of my previous posts, where I reflected on their failure to predict the pandemic:

To put it another way, I’m sure that the Good Judgment Project and other people following the Tetlockian methodology have made thousands of forecasts about the world. Let’s be incredibly charitable and assume that out of all these thousands of predictions, 99% were correct, that out of everything they made predictions about, 99% of it came to pass. That sounds fantastic, but depending on what’s in the 1% of the things they didn’t predict, the world could still be a vastly different place than what they expected. And that assumes that their predictions encompass every possibility. In reality there are lots of very impactful things which they might never have considered assigning a probability to. In fact they could be 100% correct about the stuff they predicted and still be caught entirely flat-footed by the future, because something happened they never even considered.

As far as I can tell there were no advance predictions of the probability of a pandemic by anyone following the Tetlockian methodology, say in 2019 or earlier. Nor any list where “pandemic” was #1 on the “list of things superforecasters think we’re unprepared for”, or really any indication that people who listened to superforecasters were more prepared for it than the average individual. But the Good Judgment Project did try their hand at both Brexit and Trump, and got both wrong. This is what I mean by the impact of the stuff they were wrong about being greater than the impact of the stuff they were correct about. When future historians consider the last five years, or even the last ten, I’m not sure which events they will rate as the most important, but surely those three would have to be in the top 10. Superforecasters correctly predicted a lot of stuff that didn’t amount to anything, and missed the few things that really mattered.

Once again we find ourselves in a similar position. When we imagine historians looking back on 2021, no one would find it surprising if they ranked the withdrawal of the US and subsequent capture of Afghanistan by the Taliban as the most impactful event of the year. And yet superforecasters did nothing to help us prepare for this event.

III.

The natural next question is to ask how we should have prepared for what happened, particularly since we can’t rely on the predictions of superforecasters to warn us. What methodology do I suggest instead of superforecasting? Here we return to the remarkable prescience of The Onion, which accurately predicted what would happen in Afghanistan 10 years in advance by just imagining the worst thing that could happen. And in the weeks since Kabul fell, my own criticism of Biden has settled around this theme. He deserves credit for realizing that the US mission in Afghanistan had failed and that we needed to leave, that in fact we had needed to leave for a while. Bad things had happened, and bad things would continue to happen. But in accepting the failure and its consequences he didn’t go far enough.

One can imagine Biden asserting that Afghanistan and Iraq were far worse than Bush and his “cronies” had predicted. But then somehow he overlooked the general wisdom that anything can end up being a lot worse than predicted, particularly in the arena of war (or disease). If Bush can be wrong about the cost and casualties associated with invading Afghanistan, is it possible that Biden might be wrong about the cost and casualties associated with leaving Afghanistan? To state things more generally, the potential for things to go wrong in an operation like this far exceeds the potential for things to go right. Biden, while accepting past failure, didn’t do enough to accept the possibility of future failure. 

As I mentioned, my answer to the poll question of how long the Afghan government was going to last was one to two years. And I clearly got it wrong (whatever my excuses). But I can tell you which questions I would have aced (and I think my previous 200+ blog posts back me up on this point):

  • Is there a significant chance that the withdrawal will go really badly?
  • Is it likely to go worse than the government expects?

And to be clear, I’m not looking to make predictions for the sake of predictions. I’m not trying to be more accurate; I’m looking for a methodology that gives us a better overall outcome. So is the answer to how we could have been better prepared merely “more pessimism”? That’s certainly a good place to start, and beyond that there are things I’ve been talking about since this blog started. But a good next step is to look at the impact of being wrong. Tetlock was correct when he pointed out that experts are wrong most of the time. But what he didn’t account for is that it’s possible to be wrong most of the time and still end up ahead. To illustrate this point I’d like to end by recycling an example I used the last time I talked about superforecasting:

The movie Molly’s Game is about a series of illegal poker games run by Molly Bloom. The first set of games she runs is dominated by Player X, who encourages Molly to bring in fish, bad players with lots of money. Accordingly, Molly is confused when Player X brings in Harlan Eustice, who ends up being a very skillful player. That is, until one night when Eustice loses a hand to the worst player at the table. This sets him off, changing him from a calm and skillful player into a compulsive and horrible one, and by the end of the night he’s down $1.2 million.

Let’s put some numbers on things and say that 99% of the time Eustice is conservative and successful, and on average ends the night up $10k. But 1% of the time, Eustice is compulsive and horrible, and on those nights he loses $1.2 million. So should he play poker at all? (And should Player X want him at the same table?) The math is straightforward: his expected return over 100 games is -$210k. It would seem clear that the answer is “No, he shouldn’t play poker.”
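For anyone who wants to check the arithmetic, the numbers from the example are enough:

```python
# Expected return over 100 nights, using the numbers from the example:
# 99 conservative nights at +$10,000 each, one blow-up at -$1,200,000.
winnings = 99 * 10_000       # +$990,000
blow_up = -1_200_000         # the 1-in-100 disaster
print(winnings + blow_up)    # -210000, i.e. -$210k
```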

But superforecasting doesn’t deal with the question of whether someone should “play poker.” It works by considering a single question, answering it, and assigning a confidence level to the answer. So in this case a superforecaster would be asked, “Will Harlan Eustice win money at poker tonight?” To which they would say, “Yes, he will, and my confidence level in that prediction is 99%.”

This is what I mean by impact. When things depart from the status quo, when Eustice loses money, it’s so dramatic that it overwhelms all of the times when things went according to expectations.  

Biden was correct when he claimed we needed to withdraw from Afghanistan. He had no choice, he had to play poker. But once he decided to play poker he should have done it as skillfully as possible, because the stakes were huge. And as I have so frequently pointed out, when the stakes are big, as they almost always are when we’re talking about nations, wars, and pandemics, the skill of pessimism always ends up being more important than the skill of superforecasting.


I had a few people read a draft of this post. One of them complained that I was using a $100 word when a $1 word would have sufficed. (Any guesses on which word it was?) But don’t $100 words make my donors feel like they’re getting their money’s worth? If you too want to be able to bask in the comforting embrace of expensive vocabulary consider joining them.


Nukes



The key theme of this blog is that progress has not saved us. It has not made us any less sinful, it has not improved our lives in any of the ways that really matter, but has rather introduced opportunities to sin that for someone living 200 years ago would beggar the imagination.

Of course it’s easy, and maybe even forgivable, to think this is not the case. We live longer, there’s less hunger and poverty, and along with this comes more freedom and less violence. For now we’re going to focus on that last assertion: that things are less violent. And since we already broached the subject of nukes in our last post, we’re going to continue expanding on that idea.

One of the best known arguments for a decrease in violence comes from someone I actually admire quite a bit, Steven Pinker, who made it in his book The Better Angels of Our Nature. Taleb, as you might imagine, disagrees with Pinker’s thesis and, in what is becoming a common theme, asserts that Pinker is confusing an absence of volatility with an absence of fragility. If you want to read Taleb’s argument you can find it here. Needless to say, as much as I admire Pinker, on this issue I agree with Taleb.

As I have already said, this post is an extension of my last one. In that post I urged people to eschew the immediate political fight in favor of a longer-term historical outlook. In other words, that post was about being wise, and this post is about what will happen if we aren’t wise, in particular what things will look like as far as nukes are concerned.

As you can imagine, if our survival hinges on our wisdom, then I’m not optimistic, and I personally predict that nukes are in our future. In this, as with so many things, I think I am contradicting conventional wisdom, or at least what most people believe about nuclear weapons, if they believe anything at all. If they do, it might be something along these lines: It’s been over 70 years since the last nuke was exploded in anger. (In fact I am writing these words on the 71st anniversary of Nagasaki, though they won’t be published until a few days later.) Yes, we have nukes, but we’re not going to use them. Sure, some crazy terrorist may explode one, but the kind of all-out exchange we worried about during the Cold War is not going to happen. First, don’t underestimate the impact of a lone terrorist nuke, and second, don’t write off an all-out exchange either, particularly if we’re going to poke the bear in the manner I described in my last post.

The first question to consider is why we are still worried about nukes 70 years after their invention. Generally the development of a technology is quickly followed by the development of countermeasures. To take just one example, the ability to drop bombs from the air was terrifying when it first appeared, but it didn’t take long to develop fighter aircraft, anti-aircraft guns and surface-to-air missiles. Why then, 71 years after Nagasaki and 50+ years after the development of the ICBM, can we still not defend ourselves? Can’t we shoot missiles down? Well, first off, even if we could, a lot of people think building a missile defense system is the ultimate way of poking the bear. For what it’s worth I don’t fall into that camp, despite my reluctance, in general, to poke the bear. But even if we decide that’s okay, right now it just isn’t technologically feasible to make a missile defense system that works against someone like Russia or China.

At this point I’d like to offer up data on the effectiveness of various anti-missile systems. Unfortunately there’s not a lot of it, and what there is isn’t good. If North Korea or Iran happened to launch a single missile at the United States we might be able to stop it, but when asked what he would do in that case, one knowledgeable US official is reported to have said:

If a North Korean ICBM were launched in the direction of Seattle, …[I] would fire a bunch of GMD interceptors and cross [my] fingers.

Some clarification: GMD stands for Ground-based Midcourse Defense, our current anti-ballistic-missile platform. Also, North Korea doesn’t currently have a missile capable of reaching Seattle, though it’s interesting to note what they do have, given how impoverished the country is in all other respects.

As I said, I’d like to offer up some data, but there isn’t much. Recent tests of our anti-missile systems have been marginally promising, but they have mostly been conducted in reasonably controlled environments, not against missiles fired by surprise from a random location at a time chosen by the aggressor for optimal effectiveness.

Tacked on at the end of the Wikipedia article on the US’s efforts at missile defense is a great summary of the difficulties of defending against a Russian or Chinese ICBM. In short:

  • Boost-phase defenses are the only layer that can successfully destroy a MIRV (an ICBM that carries multiple warheads).
  • Even so, boost-phase interception is really difficult, particularly against solid-fuel ICBMs of the type that Russia and China use.
  • And even then, the only current technology capable of doing it has to be within 40 km (~25 miles) of the launch site. For those in Utah, that means an anti-missile system located at Hill Air Force Base could shoot down missiles launched from no farther away than downtown Salt Lake City.

The Wikipedia article concludes by saying that, “There is no theoretical perspective for economically viable boost-phase defense against the latest solid-fueled ICBMs, no matter if it would be ground-based missiles, space-based missiles, or airborne laser (ABL).” (The article supports this with a reference to a paper on the subject.)

In the end it’s not hard to see why nuclear missiles are so hard to defend against. Your defense can’t be porous at all: letting even a single warhead through means massive destruction. Add to that their speed and small size and you have the ultimate offensive weapon.

Thus far we’ve talked about the difficulties of defending against a Russian or Chinese ICBM, but we haven’t addressed why they might decide to nuke us in the first place. I covered that at some length in my last post, but before we dive back into it, let’s look at people we know want to nuke us: terrorists.

Obviously there is no shortage of terrorist groups who would love to nuke us if they could get their hands on a weapon. Thus far we’ve been lucky, and as far as we know there are no loose nukes. I’m sure that keeping it that way is one of the top priorities of every intelligence agency out there, so perhaps it won’t happen. Still, this is another situation where we’re in a race between singularity and catastrophe. On a long enough time horizon the chances of some act of nuclear terrorism approach 100%. To argue otherwise would be to assert that eventually terrorism and nukes will go away. I will address the latter point in a minute, but as to the former, I don’t think anyone believes that terrorism will disappear. If anything, most sources of grievance have increased in the last few years. If you think I’m wrong on this point I’d be glad to hear your argument.
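The arithmetic behind “approach 100%” is worth making explicit. If there is some constant annual probability p of an attack (the 1% below is a purely illustrative number, not an estimate), the chance of at least one attack over n years is 1 - (1 - p)^n, which creeps toward certainty for any p greater than zero:

```python
# Illustrative only: the 1% annual probability is a made-up number.
# For any fixed annual probability p > 0, the chance of at least one
# event in n years is 1 - (1 - p)**n, which tends to 1 as n grows.
p = 0.01
for n in (10, 50, 100, 500):
    print(f"{n:3d} years: {1 - (1 - p) ** n:.1%}")
# 10 years: 9.6% / 50 years: 39.5% / 100 years: 63.4% / 500 years: 99.3%
```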

Of course, if we never have an incident of nuclear terrorism, then, as I frequently point out, that’s great. If I’m wrong nothing happens. But if I’m right…

Perhaps you might argue that a single nuke going off in New York or Paris or London is not that bad. Certainly it would be one of the biggest news stories since the explosion of the first nuclear weapons, and frankly it’s hard to see how it doesn’t end up radically reshaping the whole world, at least politically. Obviously a lot depends on who ultimately ends up being responsible for the act, but we invaded Iraq after 9/11 and they had nothing to do with it. (Incidentally this is more complicated than most people want to admit, but yeah, basically they didn’t have anything to do with it and we invaded them anyway.) Imagine who we might invade if an actual nuke went off.

And then of course there’s the damage to the American psyche. Look at how much things changed just following 9/11. I can only imagine what kind of police state we would end up with after a terrorist nuke exploded in a major city. In other words, I would argue that a terrorist nuke is inevitable and that when it does happen it’s going to have major repercussions.

But we still need to return to a discussion of a potential World War III: a major nuclear exchange between two large nation-states. What are the odds of that? Since the end of the Cold War the conventional wisdom has been that the odds are quite low, but I can think of at least half a dozen factors which might increase them.

The first factor is the one I covered in my last post: we seem determined to encircle and antagonize the two major countries that have large quantities of nuclear weapons. I previously spoke mostly about Russia, but if you follow what’s happening in the South China Sea (that article was three hours old when I wrote this), or if you’ve heard about the recent ruling by The Hague, we’re not exactly treating China with kid gloves either. I’ve already said a lot about this factor, so we’ll move on to the others.

The next factor which I think increases the odds of World War III is the proliferation of nuclear weapons. I know that most recently Iran looks like a success story: here’s a country that wanted nuclear weapons, and we stopped them. Of course, whether that holds remains to be seen, but it does seem intuitive that the longer we go, the more countries will have nukes. Perhaps it might be instructive to determine the rate at which this is happening. In 1945 there was one country with nuclear weapons. Today, in 2016, everyone pretty much agrees that there are nine. Dividing 71 years by those eight additions, we get a new nuclear nation roughly every nine years, which means that in 99 years we’ll have another 11 nuclear nations, assuming the rate of acquisition doesn’t increase. But most technological diffusion doesn’t follow a linear curve. Consequently we may see an explosion (no pun intended) in nations with nuclear weapons, or it may be gradual, or it may not happen at all (again, that would be great, but unexpected).

But let’s assume the rate at which new countries join the nuclear club stays constant at one every nine years, so that in 100 years we’ve added only 11 more countries. On the face of it that may seem fairly minor. But if we assume that any two nuclear-armed belligerents could start World War III, those 11 newcomers alone represent 55 potential starting points (and the full club of 20, fully 190), rather than the single starting point of the bipolar situation which existed during the Cold War.
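Put as back-of-the-envelope code, since the counting is the whole point:

```python
# If any two nuclear-armed states could start World War III, the number
# of potential flashpoints is the number of distinct pairs, which grows
# quadratically with the size of the nuclear club.
from math import comb

print(comb(2, 2))   # 1   -- the Cold War's two-superpower standoff
print(comb(11, 2))  # 55  -- pairs among the 11 hypothetical newcomers
print(comb(20, 2))  # 190 -- pairs across the whole 20-member club
```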

In saying this I realize, of course, that there were more than two nations with nukes during the Cold War, but everyone had basically lined up on one side or the other. In 100 years, who knows what kind of alliances there will be? Even France and the United States have had rocky patches in their relationship over the last several decades. (More about France later.)

The third factor which might increase the odds is the wildcard that is China. As I mentioned above, for a long time we had a bipolar world: the Soviet Union only had to worry about the United States and vice versa. Now we have an increasingly aggressive China whose intentions are unclear but certainly very ambitious. And, from the standpoint of nuclear weapons, they’re keeping their cards very close to their chest.

Most people have a tendency to dismiss China because they are still quite far behind the US and Russia. But they’re catching up fast, and since they weren’t really part of the Cold War, a lot of the restrictions that apply to Russia and the US don’t apply to China’s weapons, allowing them (from the article I just linked to)

…considerably more freedom to explore the technical frontiers of ballistic and cruise missiles than either the US or Russia.

The fourth factor involves a concept we’re going to borrow from Dan Carlin of the podcast Hardcore History: the Historical Arsonist. These are people like Hitler, Napoleon, and Genghis Khan, who burn down the world in their quest to remake it in their image, generally not caring how many people die or what else happens. You can find people like this going back as far as we have records, up to as recently as World War II. While it’s certainly possible that we no longer have to worry about this archetype, they seem to be a fairly consistent feature of humanity. And if they haven’t disappeared, then the next one who comes along is going to have access to nuclear weapons. What does that look like? During Hitler’s rise he was able to gain a significant amount of territory just by asking; how much more effective would he have been if he had threatened nuclear annihilation when he didn’t get his way?

This brings up another point: are we even sure we know all the ways someone could use nuclear weapons? In the past, one of the defining features of these historical arsonists was that they took military technology and used it in a way no one expected. Napoleon was the master of artillery and was able to mobilize and field a much bigger army than had previously been possible. Hitler combined the newly developed tank and aircraft into an unstoppable blitzkrieg. Alexander the Great had the phalanx. Nuclear weapons, as I’ve mentioned, are hard enough to defend against in any case, but imagine the most deviously clever thing someone could do with them, and then imagine that the reality is even more devious than that. With something on that level, you might have historical arson on a scale never before imagined.

The fifth factor which makes the odds of World War III greater than commonly imagined is the potential for change in the underlying geopolitics. By this I mean that nations break up, governments change, national attitudes mutate, and so on. We’ve already seen the Soviet Union break up, and while that went fairly smoothly (at least so far; it actually hasn’t been that long when you think about it), there’s no reason to assume it will go that smoothly the next time. Particularly when you consider the lesson of the former Soviet republics that did give up their weapons: looking at what’s happening in Ukraine, it seems likely they now regret giving up their nukes.

Of course the US isn’t going to last forever. I have no firm prediction of what the end of the country looks like, and once again it’s possible that we’ll reach some sort of singularity long before that, but it may happen sooner than we imagine, particularly if the increased rancor of the current election represents any kind of trend. Thus if, or more likely when, something like that happens, what does it look like in terms of nukes? If Texas breaks off, that’s one thing; but if you end up with seven nations, who ends up with the nukes?

And then of course there’s the possibility of a radical change in government. Some people think that Trump would be catastrophic in this respect. On the other side of the aisle, many conservatives think that a country like France might be taken over by Muslims if demographic trends continue and immigration isn’t stopped. Certainly a book on the subject has proven very popular. Does a Muslim-run France with nukes act exactly the same as the current nation? Maybe, maybe not.

The final factor to consider, at least for those who believe in revelation and scripture, is the various references to the last days which fit very well with what might be expected from nuclear warfare. We believe that war will be poured out upon all nations, that the elements will melt with fervent heat, and finally that the earth will be baptized by fire. Obviously claiming to know what these prophecies mean is a dangerous and prideful game, and that is not what I’m doing. What I am saying is that this is one more factor to be weighed alongside the others already mentioned.

The point of all this is not to convince you to drop everything and start building a bomb shelter (though if you already have one, I don’t think you should demolish it). Along with everything I’ve said, I still believe that no man knoweth the hour. I’m also not saying I know that some form of nuclear armageddon will accompany the Second Coming. My point, as always, is that we are not saved and cannot be saved through our own efforts. Only the Son of Man and Prince of Peace has the ability to bring true and lasting peace. Further, and perhaps even more importantly, thinking we have achieved, or even can achieve, peace on our own, that we just need to keep pushing the spread of science, or liberal democracy, or our “enlightened” Western values, is more dangerous and more likely to hasten what we fear than reminding ourselves of the fallen nature of man and restricting ourselves to the preaching of the gospel, while eschewing the preaching of progress.

In the end, attempting to eliminate World War III may paradoxically hasten its arrival…


We Are Not Saved



The harvest is past, the summer is ended, and we are not saved.

Jeremiah 8:20

When I was a boy, I couldn’t imagine anything beyond the year 2000. I’m not sure how much of that had to do with the supposed importance of the beginning of a new millennium, how much was just due to the difficulty of extrapolation in general, and how much was due to my religious upbringing. (Let’s get that out of the way right up front: yes, I am LDS/Mormon.)

It’s 2016, and we’re obviously well past the year 2000 and 16 years into the future I couldn’t imagine. For me, at least, it definitely is The Future, and any talk of living in the future is almost always followed by an observation that we were promised flying cars and spaceships and colonies on the moon. This observation is then followed by the obligatory lament that none of these promises have materialized. Of course moon colonies and flying cars are all promises made when I was a boy. Now we have a new set of promises: artificial intelligence, fusion reactors, and an end to aging, to name just a few. One might ask why the new promises are any more likely to be realized than the old ones. And here we see the first hint of the theme of this blog. But before we dive into that, I need to lay a little more groundwork.

I have already mentioned my religious beliefs, and these will be a major part of this blog (though in a different way than you might expect.) In addition to that I will also be drawing heavily from the writings of Nassim Nicholas Taleb. Taleb’s best known book is The Black Swan. For Taleb a black swan is something which is hard to predict and has a massive impact. Black swans can come in two forms: positive and negative. A positive black swan might be investing in a startup that later ends up being worth a billion dollars. A negative black swan, on the other hand, might be something like a war. Of course there are thousands of potential black swans of both types, and as Taleb says, “A Black Swan for the turkey is not a Black Swan for the butcher.”

The things I mentioned above, AI, fusion and immortality, are all expected to be positive black swans, though, of course, it’s impossible to be certain. Some very distinguished people have warned that artificial intelligence could mean the end of humanity. But for the moment we’re going to assume that they all represent positive black swans.

In addition to being positive black swans, these advancements could also be viewed as technological singularities. Here I use the term a bit more broadly than is common: generally when people talk about the singularity they are using the term with respect to artificial intelligence, but as originally used (back in 1958) it referred to technology progressing to a point where human affairs become unrecognizable. In other words, these developments will have such a big impact that we can’t imagine what life is like afterwards. AI, fusion and immortality all fall into this category, but they are by no means the only technologies that could create a singularity. I would argue that the internet is an excellent example of one. Certainly people saw it coming, and some of them even correctly predicted some aspects of it (just as, if we ever achieve AI, there will no doubt be some predictions which prove true). But no one predicted anything like Facebook or the other social media sites which have ended up overshadowing the rest of the internet. My favorite observation about the internet illustrates the point:

If someone from the 1950s suddenly appeared today, what would be the most difficult thing to explain to them about life today?

I possess a device, in my pocket, that is capable of accessing the entirety of information known to man.

I use it to look at pictures of cats and get in arguments with strangers.

Everything I have said so far deserves, and will eventually get, a deeper examination. What I’m aiming for now is just the basic idea that one possibility for the future is a technological singularity: something which would change the world in ways we can’t imagine, and, if proponents are to be believed, change it for the better.

If, on the one hand, we have the possibility of positive black swans, technological singularities and utopias, is there also, on the other hand, the possibility of negative black swans, technological disasters and dystopias? Of course. We could be struck by a comet, annihilate each other in a nuclear war, or end up decimated by disease.

Which will it be? Will we be saved by a technological singularity or wiped out by a nuclear war? (Perhaps you will argue that there’s no reason why it couldn’t be both. Or maybe instead you prefer to argue that it will be neither. I don’t think both or neither are realistic possibilities, though my reasoning for that conclusion will have to wait for a future post.)

It’s The Future and two paths lie ahead of us, the singularity or the apocalypse, and this blog will argue for apocalypse. Many people have already stopped reading or are prepared to dismiss everything I’ve said because I have already mentioned that I’m Mormon. Obviously this informs my philosophy and worldview, but I will not use, “Because it says so in the Book of Mormon” as a step in any of my arguments, which is not to say that you will agree with my conclusions. In fact I expect this blog to be fairly controversial. The original Jeremiah had a pretty rough time, but it wasn’t his job to be popular, it was his job to warn of the impending Babylonian captivity.

I am not a prophet like Jeremiah, and I am not warning against any specific calamity. While I consider myself to be a disciple of Jesus Christ, as I have already mentioned, this blog will be at least as much informed by my being a disciple of Taleb. And as such I am not willing to make any specific predictions except to say that negative black swans are on the horizon. That much I know. And if I’m wrong? One of the themes of this blog will be that if you choose to prepare for the calamities and they do not happen, then you haven’t lost much, but if you are not prepared and calamities occur, then you might very well lose everything. As Taleb says in one of my favorite quotes:

If you have extra cash in the bank (in addition to stockpiles of tradable goods such as cans of Spam and hummus and gold bars in the basement), you don’t need to know with precision which event will cause potential difficulties. It could be a war, a revolution, an earthquake, a recession, an epidemic, a terrorist attack, the secession of the state of New Jersey, anything—you do not need to predict much, unlike those who are in the opposite situation, namely, in debt. Those, because of their fragility, need to predict with more, a lot more, accuracy.

I have already mentioned Taleb as a major influence. To that I will add John Michael Greer, the archdruid. He joins me (or rather I join him) in predicting the apocalypse, but he does not expect things to suddenly transition from where we are to a Mad Max style wasteland (which, interestingly enough, is the title of the next movie). Rather, he puts forward the idea of a catabolic collapse. Catabolism broadly refers to a metabolic state in which the body starts consuming itself to stay alive. Applied to a civilization, the idea is that as a civilization matures it reaches the point where it spends more than it “makes”, and eventually the only way to support that spending is to start selling off or cannibalizing assets. In other words, along with Greer, I do not think that civilization will be wiped out in one fell swoop by an unconstrained exchange of nukes (and if it is, then nothing will matter). I think it will be a slow decline, broken up by a series of mini-collapses.
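A toy model, with entirely made-up numbers, captures the mechanism Greer describes: when upkeep exceeds production, the gap is covered by eating accumulated capital until there is none left.

```python
# Toy model of catabolic collapse (all numbers made up). Production
# falls short of upkeep, so the civilization covers the gap by
# consuming its accumulated assets: a slow decline, not a sudden end.
assets, production, upkeep = 100.0, 10.0, 12.0

years = 0
while assets > 0:
    assets += production - upkeep  # the shortfall is cannibalized
    years += 1
print(f"Assets exhausted after {years} years")  # 50 at these rates
```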

All of this will be discussed in due time; suffice it to say that despite the religious overtones, when I talk about the apocalypse you should not be visualizing The Walking Dead, The Road, or even Left Behind. But the things I discuss may nevertheless seem pretty apocalyptic. Earlier this week I stayed up late watching the Brexit vote come in. In the aftermath, people are using words like terrifying, bombshell, and flipping out, and furthermore talking about a global recession, all in response to the vote to Leave. If people are that scared about Britain leaving the EU, I think we’re in for a lot of apocalypses.

You may be wondering how this is different from any other doom and gloom blog, and here, at last, we return to the scripture I started with, which gives us the title and theme of the blog. Alongside all of the other religions of the world, including my own, there is a religion of progress, and indeed progress over the last several centuries has been remarkable.

These many years of progress represent the summer of civilization. And out of that summer we have assembled a truly staggering harvest. We have conquered diseases, split the atom, invented the integrated circuit and been to the moon. But if you look closely you will realize that our harvest is basically at an end. And despite the fantastic wealth we have accumulated, we are not saved. But in contemplating this harvest it is easier than ever before to see why we need to be saved. We understand the vastness of the universe, the potential of technology and the promise of the eternities. The fact that we are not wise enough to grasp any of it, makes our pain all the more acute.

And this is the difference between this blog and other doom and gloom blogs. Another blog may talk about the inevitable collapse of the United States because of the national debt, or runaway global warming, or cultural tension. Someone with faith in continued scientific progress may ignore all of that, assuming that once we’re able to upload our brains into a computer none of it will matter. Thus, anyone who talks about potential scenarios of doom without also talking about potential advances and singularities is only addressing half of the issue. In other words, you cannot talk about civilizational collapse without talking about why technology and progress cannot prevent it. They are opposite sides of the same coin.

That’s the core focus, but this blog will range over all manner of subjects including but not limited to:

  • Fermi’s Paradox
  • Roman History
  • Antifragility
  • Environmental Collapse
  • Philosophy
  • Current Politics
  • Book Reviews
  • War and conflict
  • Science Fiction
  • Religion
  • Artificial Intelligence
  • Mormon apologetics

As in the time of Jeremiah, disaster, cataclysms and destruction lurk on the horizon, and it becometh every man who hath been warned to warn his neighbor.

The harvest is past, the summer is ended, and we are not saved.