Category: Predictions

Don’t Make the Second Mistake

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


Several years ago, when my oldest son had only been driving for around a year, he set out to take care of some things in an unfamiliar area about 30 minutes north of where we live. Of course he was using Google Maps, and as he neared his destination he realized he was about to miss his turn. Panicking, he immediately cranked the wheel of our van hard to the right, and actually ended up undershooting the turn, running into a curb and popping the front passenger side tire. 

He texted me and I explained where the spare was, and then over several other texts I guided him in putting it on. When he was finally done I told him not to take the van on the freeway because the spare wasn’t designed to go over 55. An hour later when he wasn’t home I tried calling him thinking that if he was driving I didn’t want him trying to text. After a couple of rings it went to voicemail, which seemed weird, so after a few minutes I tried texting him. He responded with this message:

I just got in another accident with another driver I’m so so so sorry. I have his license plate number, what else do I need to do?

Obviously my first question was whether he was alright. He said he was, and that the van was still drivable (as it turned out, just barely…). He had been trying to get home without using the freeway and had naturally ended up in a part of town he was unfamiliar with. Arriving at an intersection, already flustered by the blown tire and by how long everything was taking, he thought it was a four-way stop, but in fact only the street he was on had a stop sign. In his defense, there was a railroad crossing right next to the intersection on the other street, so all the equipment necessary to stop cross traffic was there; it just wasn’t active, and nothing about it actually functioned like a four-way stop.

In any event, after determining that no one else was stopped at what he thought were the other stop signs, he proceeded and was immediately hit on the passenger side by someone coming down the other street. As I said, the van was drivable, but just barely, and the insurance didn’t end up totaling it, but once again just barely. As it turns out the other driver was in a rental car, and as a side note, being hit by a rental car with full coverage in an accident with no injuries led to the other driver being very chill and understanding about the whole thing, so that was nice. Though I imagine the rental car company got every dime out of our insurance; certainly our rates went up, by a lot.

Another story…

While I was on my LDS mission in the Netherlands, my dad wrote to me and related the following incident. He had been called over to my uncle’s house to help him repair a snowmobile (in those days snowmobiles spent at least as much time being fixed as being ridden). As part of the repair they ended up needing to do some welding, but my dad only had his oxyacetylene setup with him. What he really needed was his arc welder, but that would have meant towing the snowmobile trailer all the way back to his house on the other side of town, which seemed like a lot of effort for a fairly simple weld. He just needed to reattach something to the bulkhead.

To do this with an oxyacetylene welder you have to put enough heat into the steel for it to start melting. Unfortunately, on the other side of the bulkhead was the gas line to the carburetor, and as the line absorbed heat it melted, pouring gasoline onto the hot steel, which immediately caught fire.

With a continual stream of gasoline feeding the fire, panic ensued, but it quickly became apparent that they needed to get the snowmobile out of the garage to keep the house from catching on fire. So my dad and uncle grabbed the trailer and began to drag it into the driveway. Unfortunately the welder was still on the trailer, and it was pulling on the welding cart, which held, among other things, a tank of pure oxygen. My dad saw this and tried to get my uncle to stop, but he was far too focused on the fire to pay attention to the warnings, and so the tank tipped over.

You may not initially understand why this is so bad. When an oxygen tank falls over, the valve can snap off. In fact, when you’re not using a tank there’s a special cap you screw on over the valve, which doesn’t prevent the valve from snapping off, but does prevent the tank from becoming a missile if it does. Because that’s what happens: the escaping pressurized gas turns the big metal cylinder into a giant and very dangerous missile. Beyond that, a snapped valve would have filled the garage they were working in, a garage that already had a significant gasoline fire going, with pure oxygen. Whether the fuel-air bomb thus created would have been worse or better than the missile created at the same time is hard to say, but both would have been really bad.

Fortunately the valve didn’t snap off, and they were able to get the snowmobile out into the driveway, where a man passing by jumped out of his car with a fire extinguisher and put out the blaze. At which point my dad towed the trailer with the snowmobile over to his house, got out his arc welder, and had the weld done in about 30 seconds of actual welding.

What do both of these stories have in common? The panic, haste, and unfamiliar situation caused by making one mistake directly led to making more mistakes, and in both cases the mistakes which followed ended up being worse than the original mistake. Anyone surveying the current scene would agree that mistakes have been made recently. Mistakes that have led to panic, hasty decisions, and most of all put us in very unfamiliar situations. When this happens people are likely to make additional mistakes, and this is true not only for individuals at intersections and small groups working in garages, but also at the level of nations, whether those nations are battling pandemics, or responding to a particularly egregious example of police brutality, or both at the same time.

If everyone acknowledges that mistakes have been made (which I think is indisputable) and further grants that the chaos caused by an initial mistake makes further mistakes more likely (less indisputable, but still largely unobjectionable, I would assume), where does that leave us? Saying that further mistakes are going to happen is straightforward enough, but it’s still a long way from that to identifying those mistakes before we make them, and farther still from identifying the mistakes to actually preventing them, since the power to prevent has to overlap with the insight to identify, which is, unfortunately, rarely the case.

As you might imagine, I am probably not in a position to do much to prevent further mistakes. But you might at least hope that I could lend a hand in identifying them. I will do some of that, but this post, including the two stories I led with, is more about pointing out that such mistakes are almost certainly going to happen, and that our best strategy might be to ensure they are not catastrophic. If actions were obviously mistakes we wouldn’t take them; we only take them because in advance they seem like good ideas. Accordingly, this post is about lessening the chance that seemingly good actions will end up being mistakes later, and, if they do end up being mistakes, making sure that they’re manageable mistakes rather than catastrophic ones. How do we do that?

The first principle I want to put forward is identifying the unknowns. Another way of framing this is asking, “What’s the worst that could happen?” Let me offer two competing examples drawn from current events:

First, masks: Imagine if, to take an example from a previous post, the US had had a 30-day stockpile of masks for everyone in America, and when the pandemic broke out it had made them available and strongly recommended that people wear them. What’s the worst that could have happened? I’m struggling to come up with anything. I imagine we might have seen some reaction from hardcore libertarians, despite the fact that it was a recommendation, not a requirement. But the worst case is at most mild social unrest, and probably nothing at all.

Next, defunding the police: Now imagine that Minneapolis goes ahead with its plan to defund the police. What’s the worst that could happen there? I pick on Steven Pinker a lot, but maybe I can make it up to him a little bit by including a quote of his that has been making the rounds recently:

As a young teenager in proudly peaceable Canada during the romantic 1960s, I was a true believer in Bakunin’s anarchism. I laughed off my parents’ argument that if the government ever laid down its arms all hell would break loose. Our competing predictions were put to the test at 8:00 a.m. on October 7, 1969, when the Montreal police went on strike. By 11:20 am, the first bank was robbed. By noon, most of the downtown stores were closed because of looting. Within a few more hours, taxi drivers burned down the garage of a limousine service that competed with them for airport customers, a rooftop sniper killed a provincial police officer, rioters broke into several hotels and restaurants, and a doctor slew a burglar in his suburban home. By the end of the day, six banks had been robbed, a hundred shops had been looted, twelve fires had been set, forty carloads of storefront glass had been broken, and three million dollars in property damage had been inflicted, before city authorities had to call in the army and, of course, the Mounties to restore order. This decisive empirical test left my politics in tatters (and offered a foretaste of life as a scientist).

Now recall, this is just the worst case. I am not saying this is what will happen; in fact I would be surprised if it did, particularly over such a short period. Also, I am not even saying that I’m positive defunding the police is a bad idea. It’s definitely not what I would do, but there’s certainly some chance that it might be an improvement on what we’re currently doing. But just as there’s some chance it might be better, one has to acknowledge that there’s also some chance it might be worse. Which takes me to the second point.

If something might be a mistake, it would be good if we didn’t all end up making the same mistake. I’m fine if Minneapolis wants to take the lead on figuring out what it means to defund the police. In fact, from the perspective of social science I’m excited about the experiment. I would be far less excited if every municipality decided to do it at the same time. Accordingly my second point is this: knowing that some of the actions we take in the wake of an initial mistake are likely to be further mistakes, we should avoid all taking the same actions, for fear we all land on one which turns out to be a further mistake.

I’ve already made this point as far as police violence goes, but we can also see it with masks. For reasons that still leave me baffled, the CDC had a policy minimizing masks going all the way back to 2009. Fortunately this was not the case in Southeast Asia, and during the pandemic we got to see how the countries where mask wearing was ubiquitous fared: as it turned out, pretty well. Now imagine that the same bad advice had been the standard worldwide. Would it have taken us longer to figure out that masks worked well for protecting against COVID-19? Almost certainly.

So the two rules I have for avoiding the “second mistake” are:

  1. Consider the worst case scenario of an action before you take it. In particular try to consider the decision in the absence of the first mistake. Or what the decision might look like with the benefit of hindsight. (One clever mind hack I came across asks you to act as if you’ve been sent back in time to fix a horrible mistake, you just don’t know what the mistake was.)
  2. Avoid having everyone take the same response to the initial mistake. It’s easy in the panic and haste caused by the initial mistake for everyone to default to the same response, but that just makes the initial mistake that much worse if everyone panics into making the same wrong decision.

There are other guidelines as well, and I’ll be discussing some of them in my next post, but these two represent an easy starting point. 

Finally, I know I’ve already provided a couple of examples, but there are obviously lots of other recent actions which have been or could be taken, and you may be wondering what their mistake potential is. To be clear, I’m not saying that any of these actions are mistakes; identifying mistakes in advance is really hard. I’m just going to look at them with respect to the standards above.

Let’s start with actions which have been taken or might be taken with respect to the pandemic. 

  1. Rescue package: In response to the pandemic, the US passed a massive aid/spending bill, adding quite a bit to a national debt that was already quite large. I have maintained for a while that the worst case scenario here is pretty bad. (The arguments around this are fairly deep, with the leading counterargument being that we don’t have to worry because such a failure is impossible.) Additionally, while many governments did the same thing, I’m less worried here about doing the same thing everyone else did and more worried about doing the same thing we always do when panic ensues. That is, throw money at things.
  2. Closing things down/Opening them back up: Both actions happened quite suddenly and in near unison, with the majority of states doing each nearly simultaneously. I’ve already talked about how there seemed to be very little discussion of the economic effects in pre-pandemic planning, and equally little consideration of what to do in the event of a new outbreak after opening things back up. As far as everyone doing the same thing, as I’ve mentioned before I’m glad that Sweden didn’t shut things down, just like I’d be happy to see Minneapolis try a new path with the police.
  3. Social unrest: I first had the idea for this post before George Floyd’s death, and at the time it already seemed that people were using COVID as an excuse to further stoke political divisions. Rather than showing understanding toward those who were harmed by the shutdown, they were hurling criticisms. To be clear, the worst case scenario for this tactic is a second civil war. Also, not only is everyone making the same mistake of blaming the other side, but, similar to spending, it also seems to be our go-to tactic these days.

Moving on to the protests and the anger over police brutality:

  1. The protests themselves: This is another area where the worst case scenario is pretty bad. While we’ve had good luck recently with protests generally fizzling out before anything truly extreme happened, historically there have been plenty of times where protests just kept getting bigger and bigger until governments were overthrown, cities burned, and thousands died. Also, while there have been some exceptions, it’s been remarkable how, even worldwide, everyone is doing the same thing: gathering downtown in big cities and protesting. Further, the protests all look very similar, with the police confrontations, the tearing down of statues, the yelling, etc.
  2. The pandemic: I try to be pretty even-keeled about things, and it’s an open question whether I actually succeed, but the hypocrisy demonstrated by how quickly media and scientists changed their recommendations when the protests went from being anti-lockdown to anti-police-brutality was truly amazing in both how blatant and how partisan it was. Clearly there is a danger that the protests will contribute significantly to an increase in COVID cases, and it is difficult to see how arguments about the ability to do things virtually don’t apply here. Certainly whatever damage has been caused as a side effect of the protests would be far less if they had been conducted virtually…
  3. Defunding the police: While this has already been touched on, the worst case scenario not only appears to be pretty bad, but very likely to occur as well. In particular, everything I’ve seen since things started seems to indicate that the solution is to spend more money on policing rather than less. And yet, nearly in lockstep, most large cities have put forward plans to spend less money on the police.

I confess that these observations are less hard and fast, and certainly less scientific, than I would have liked. But if it were easy to know how we would end up making the second mistake, we wouldn’t make it. Certainly if my son had known the danger of that particular intersection he would have spent the time necessary to figure out it wasn’t a four-way stop. Or if my father had known that using the oxyacetylene welder would catch the fuel on fire he would have taken the extra time to move things to his house so he could use the arc welder. And I am certain that when we look back on how we handled the pandemic and the protests there will be things that turn out to be obvious mistakes. Mistakes which we wish we had avoided. But maybe, if we can be just a little bit wiser and a little less panicky, we can avoid making the second mistake.


It’s possible that you think it was a mistake to read this post, hopefully not, but if it was then I’m going to engage in my own hypocrisy and ask you to, this one time, make a second mistake and donate. To be fair the worst case scenario is not too bad, and everyone is definitely not doing it.


My Final Case Against Superforecasting (with criticisms considered, objections noted, and assumptions buttressed)



I.

One of my recent posts, Pandemic Uncovers the Limitations of Superforecasting, generated quite a bit of pushback. Given that in-depth debate is always valuable, and that this subject, at least for me, is a particularly important one, I thought I’d revisit it and attempt to further answer some of the objections that were raised the first time around, while also clarifying some points that people misinterpreted or gave insufficient weight to.

To begin with, you might wonder how anybody could be opposed to superforecasting, and what that opposition would be based on. Isn’t any effort to improve forecasting obviously a good thing? Well, for me it’s an issue of survival and existential risk. And while questions of survival are muddier in the modern world than they were historically, I would hope that everyone would at least agree that this is an area requiring extreme care and significant vigilance. That even if you are inclined to disagree with me, questions of survival call for maximum scrutiny. Given that we’ve already survived the past, most of our potential difficulties lie in the future, and it would be easy to assume that being able to predict that future would go a long way toward helping us survive it. But that is where I and the superforecasters part company, and that is the crux of the argument.

Fortunately or unfortunately as the case may be, we are at this very moment undergoing a catastrophe, a catastrophe which at one point lay in the future, but not any more. A catastrophe we now wish our past selves and governments had done a better job preparing for. And here we come to the first issue: preparedness is different than prediction. An eventual pandemic was predicted about as well as anything could have been, prediction was not the problem. A point Alex Tabarrok made recently on Marginal Revolution:

The Coronavirus Pandemic may be the most warned about event in human history. Surprisingly, we even did something about it. President George W. Bush started a pandemic preparation plan and so did Governor Arnold Schwarzenegger in CA but in both cases when a pandemic didn’t happen in the next several years those plans withered away. We ignored the important in favor of the urgent.

It is evident that the US government finds it difficult to invest in long-term projects, perhaps especially in preparing for small probability events with very large costs. Pandemic preparation is exactly one such project. How can we improve the chances that we are better prepared next time?

My argument is that we need to be looking for the methodology that best addresses this question, and not merely how we can be better prepared for pandemics, but better prepared for all rare, high impact events.

Another term for such events is “black swans,” after the book by Nassim Nicholas Taleb, and it’s the term I’ll be using going forward. (Though Taleb himself would say that, at best, this is a grey swan, given how inevitable it was.) Tabarrok’s point, and mine, is that we need a methodology that best prepares us for black swans, and I would submit that superforecasting, despite its many successes, is not that method. In fact it may play directly into some of the weaknesses of modernity that encourage black swans, and rather than helping to prepare for such events, superforecasting may actually discourage such preparedness.

What are these weaknesses I’m talking about? Tabarrok touched on them when he noted that, “It is evident that the US government finds it difficult to invest in long-term projects, perhaps especially in preparing for small probability events with very large costs.” Why is this? Why were the US and California plans abandoned after only a few years? Because the modern world is built around the idea of continually increasing efficiency. And the problem is that there is a significant correlation between efficiency and fragility. A fragility which is manifested by this very lack of preparedness.

One of the posts leading up to the one where I criticized superforecasting was built around exactly this point, and related the story of how 3M considered maintaining a surge capacity for masks in the wake of SARS, but it was quickly apparent that such a move would be less efficient, and consequently worse for them and their stock price. The drive for efficiency led to them being less prepared, and I would submit that it’s this same drive that led to the “withering away” of the US and California pandemic plans. 

So how does superforecasting play into this? Well, how does anyone decide where gains in efficiency can be realized, or conversely where they need to be more cautious? By forecasting. If a company or a state hires the Good Judgment Project to tell them the chances of a pandemic in the next five years, and GJP comes back with the number 5% (i.e. an essentially accurate prediction), are those states and companies going to use that small percentage to justify continuing their pandemic preparedness, or are they going to use it to justify cutting it? I would assume the answer to that question is obvious, but if you disagree, I would ask you to recall that companies almost always have a significantly greater focus on maximizing efficiency/profit than on preparing for “small probability events with very large costs”.
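To make the tension concrete, here’s a toy expected-value sketch. Every number in it is hypothetical, chosen purely for illustration, but it shows that even a 5% forecast can justify preparedness once the size of the potential loss is factored in, which is exactly the calculation a pure focus on efficiency skips:

```python
# All numbers are hypothetical, purely for illustration.
p_pandemic = 0.05                # forecast: 5% chance of a pandemic in 5 years
stockpile_cost = 10_000_000      # cost of maintaining a mask stockpile for 5 years
unprepared_loss = 1_000_000_000  # extra loss if a pandemic hits and we're unprepared

# Expected loss from cutting the stockpile: probability times cost
expected_loss_if_cut = p_pandemic * unprepared_loss  # 50 million

# 95% of the time cutting looks "efficient", yet in expectation
# the stockpile is the cheaper choice.
print(stockpile_cost < expected_loss_if_cut)  # True: preparing wins in expectation
```

The point is not these particular numbers; it’s that the small probability, read in isolation, invites exactly the wrong conclusion.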

Accordingly the first issue I have with superforecasting is that it can be (and almost certainly is) used as a tool for increasing efficiency, which is basically the same as increasing fragility. That rather than being used as a tool for determining which things we should prepare for it’s used as an excuse to avoid preparing for black swans, including the one we’re in the middle of. It is by no means the only tool being used to avoid such preparedness, but that doesn’t let it off the hook.

Now I understand that the link between fragility and efficiency is not going to be as obvious to everyone as it is to me, and if you’re having trouble making the connection I would urge you to read Antifragile by Taleb, or at least the post I already mentioned. Also, even if you find the link tenuous I would hope that you would keep reading because not only are there more issues but some of them may serve to make the connection clearer. 

II.

If my previous objection represented my only problem with superforecasting then I would probably agree with people who say that as a discipline it is still, on net, beneficial. But beyond providing a tool that states and companies can use to justify ignoring potential black swans superforecasting is also less likely to consider the probability of such events in the first place. 

When I mentioned this point in my previous post, the people who disagreed with me had two responses. First they pointed out that the people making the forecasts had no input on the questions they were being asked to make forecasts on and consequently no ability to be selective about the predictions they were making. Second, and more broadly they claimed that I needed to do more research and that my assertions were not founded in a true understanding of how superforecasting worked.

In an effort to kill two birds with one stone, since that last post I have read Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner, which I have to assume comes as close to being the bible of superforecasting as anything. Obviously, like anyone, I’m going to suffer from confirmation bias, and I would urge you to take that into account when I offer my opinion on the book. With that caveat in place, here, from the book, is the first commandment of superforecasting:

1) Triage

Focus on questions where your hard work is likely to pay off. Don’t waste time either on easy “clocklike” questions (where simple rules of thumb can get you close to the right answer) or on impenetrable “cloud-like” questions (where even fancy statistical models can’t beat the dart-throwing chimp). Concentrate on questions in the Goldilocks zone of difficulty, where effort pays off the most.

For instance, “Who will win the presidential election twelve years out, in 2028?” is impossible to forecast now. Don’t even try. Could you have predicted in 1940 the winner of the election, twelve years out, in 1952? If you think you could have known it would be a then-unknown colonel in the United States Army, Dwight Eisenhower, you may be afflicted by one of the worst cases of hindsight bias ever documented by psychologists. 

The question which should immediately occur to everyone: are black swans more likely to be in or out of the Goldilocks zone? It would seem that, almost by definition, they’re going to be outside of it. Also, based on the book’s description of the zone and all the questions I’ve seen both in the book and elsewhere, it seems clear they’re outside of it. Which is to say that even if such predictions are not misused, they’re unlikely to be made in the first place.

All of this would appear to heavily incline superforecasting towards the streetlight effect, where the old drunk looks for his keys under the streetlight, not because that’s where he lost them, but because that’s where the light is the best. Now to be fair, it’s not a perfect analogy. With respect to superforecasting there are actually lots of useful keys under the streetlight, and the superforecasters are very good at finding them. But based on everything I have already said, it would appear that all of the really important keys are out there in the dark, and as long as superforecasters are finding keys under the streetlight what inducement do they have to venture out into the shadows looking for keys? No one is arguing that the superforecasters aren’t good, but this is one of those cases where the good is the enemy of the best. Or more precisely it makes the uncommon the enemy of the rare.

It would be appropriate to ask at this point: if superforecasting is merely good, then what is “best”? I intend to dedicate a whole section to that topic before this post is over, but for the moment I’d like to direct your attention to Toby Ord and his recent book The Precipice: Existential Risk and the Future of Humanity, which I just finished. (I’ll have a review of it in my month-end round up.) Ord is primarily concerned with existential risks, risks which could wipe out all of humanity. Or to put it another way, the biggest and blackest swans. A comparison of his methodology with the methodology of superforecasting might be instructive.

Ord spends a significant portion of the book talking about pandemics. On his list of eight anthropogenic risks, pandemics take up 25% of the spots (natural pandemics get one spot and artificial pandemics get the other). On the other hand, if one were to compile all of the forecasts made by the Good Judgment Project since the beginning, what percentage of them would be related to potential pandemics? I’d be very much surprised if it wasn’t significantly less than 1%. While such measures are crude, one method pays a lot more attention than the other, and in any accounting of why we weren’t prepared for the pandemic, a lack of attention would certainly have to be high on the list.

Then there are Ord’s numbers. He provides odds that various existential risks will wipe us all out in the next 100 years. The odds he gives for a naturally arising pandemic are 1 in 10,000; the odds for an engineered pandemic are 1 in 30. The foundation of superforecasting is the idea that we should grade people’s predictions, but how does one grade predictions of existential risk? Compiling a track record would be impossible, the predictions are essentially unfalsifiable, and beyond all that they’re well outside the Goldilocks zone. Personally I’d almost rather that Ord didn’t give odds and just spent his time screaming, “BE VERY, VERY AFRAID!” But he doesn’t; he provides odds and hopes that by providing numbers people will take him more seriously than if he just yelled.

From all this you might still be unclear on why Ord’s approach is better than the superforecasters’. It’s because our world is defined by black swan events, and we are currently living out an example of that: our present world is overwhelmingly defined by the pandemic. If you were to selectively remove knowledge of just the pandemic from someone trying to understand the world, absolutely nothing would make sense. Everyone understands this when we’re talking about the present, but it also applies to all the past forecasting we engaged in. 99% of all superforecasting predictions lent nothing to our understanding of this moment, but 25% of Ord’s did. Which is more important: getting our 80% predictions about uncommon events to 95%, or gaining any awareness, no matter how small, of a rare event which will end up dominating the entire world?

III.

At their core all of the foregoing complaints boil down to the idea that the methodology of superforecasting fails to take into account impact. The impact of not having extra mask capacity if a pandemic arrives. The impact of keeping to the Goldilocks zone and overlooking black swans. The impact of being wrong vs. the impact of being right.

When I made this claim in the previous post, once again several people accused me of not doing my research. As I mentioned, since then I have read the canonical book on the subject, and I still didn’t come across anything that really spoke to this complaint. To be clear, Tetlock does mention Taleb’s objections, and I’ll get to that momentarily, but I’m actually starting to get the feeling that neither the people who had issues with the last point, nor Tetlock himself really grasp this point, though there’s a decent chance I’m the one who’s missing something. Which is another point I’ll get to before the end. But first I recently encountered an example I think might be useful. 

The movie Molly’s Game is about a series of illegal poker games run by Molly Bloom. The first set of games she runs is dominated by Player X (reportedly based on Tobey Maguire), who encourages Molly to bring in fish: bad players with lots of money. Accordingly, Molly is confused when Player X brings in Harlan Eustice, who turns out to be a very skillful player. That is, until one night when Eustice loses a hand to the worst player at the table. This sets him off, changing him from a calm and skillful player into a compulsive and horrible one, and by the end of the night he’s down $1.2 million.

Let’s put some numbers on things. Say that 99% of the time Eustice is conservative and successful, and on those nights he ends up by $10k on average. But 1% of the time Eustice is compulsive and horrible, and on those nights he loses $1.2 million. So our question is: should he play poker at all? (And should Player X want him at the same table?) The math is straightforward: his expected return over 100 average games is -$210k. It would seem clear that the answer is “No, he shouldn’t play poker.”
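The expected-value arithmetic behind that “No” can be written out in a few lines. This is just a sketch using the illustrative numbers above:

```python
# Illustrative numbers from the example: per 100 average nights,
# 99 disciplined nights of +$10k and 1 blow-up night of -$1.2M.
win, loss = 10_000, -1_200_000

ev_100_nights = 99 * win + 1 * loss   # total over 100 average nights
ev_per_night = ev_100_nights / 100    # average per night

print(ev_100_nights)  # -210000: down $210k despite winning 99% of the time
```

The per-night expectation works out to -$2,100, even though on almost every individual night he walks away a winner.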

But superforecasting doesn’t deal with the question of whether someone should “play poker.” It works by taking a single question, answering it, and assigning a confidence level to the answer. So in this case the question would be, “Will Harlan Eustice win money at poker tonight?” To which a superforecaster would say, “Yes, he will, and my confidence level in that prediction is 99%.” That prediction is in fact accurate, and would result in a fantastic Brier score (the grading system for superforecasters), but by repeatedly following that advice Eustice eventually ends up destitute.
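For concreteness, here is a minimal sketch of a binary-outcome Brier score (the Good Judgement Project’s actual scoring is more elaborate, so treat this as illustrative), applied to the Eustice example:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and what
    happened (1 if the event occurred, 0 if not). 0 is a perfect
    score; lower is better."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Forecast "Eustice wins tonight" at 99% for 100 nights; he wins 99 of them.
forecasts = [0.99] * 100
outcomes = [1] * 99 + [0]
score = brier_score(forecasts, outcomes)
print(round(score, 4))  # 0.0099: a superb score, but the one miss is the ruinous night
```

The forecaster looks nearly perfect on paper; the single miss that bankrupts Eustice barely moves the score.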

This is what I mean by impact, and why I’m concerned about the potential black swan blindness of superforecasting. When things depart from the status quo, when Eustice loses money, the departure is often so dramatic that it overwhelms all of the times when things went according to expectations. The smartest behavior for Eustice, the recommended behavior, is to never play poker, regardless of the fact that 99% of the time he makes thousands of dollars an hour. Furthermore, this example illustrates some subtleties of forecasting which often get overlooked:

  • If it’s a weekly poker game you might expect the 1% outcome to pop up about every two years, but it could easily take five, even if the probability stays the same. And if the probability is off by even a little (small probabilities are notoriously hard to assess) it could take longer still. Which is to say that forecasting during that stretch would result in continually increasing confidence, and greater and greater black swan blindness.
  • The benefits of wins are straightforward and easy to quantify. But the damage associated with the one big loss is a lot more complicated and may carry all manner of second order effects. Harlan may go bankrupt, get divorced, or even have his legs broken by the mafia. All of which is to say that the -$210k expected value is the best case. Bad things are generally worse than expected. (For example, it’s been noted that even though people foresaw a potential pandemic, plans almost never touched on the economic disruption that would attend it, which ended up being the biggest factor of all.)
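The first bullet is easy to check numerically. Assuming independent weekly games (an assumption made purely for this sketch), the chance the blow-up still hasn’t appeared after N years is (1 - p) raised to the number of weeks; the second column shows what happens if the true probability is 0.5% rather than 1%:

```python
# P(no blow-up yet) after 1, 2, and 5 years of weekly games,
# for a true per-game probability of 1% and of 0.5%.
for years in (1, 2, 5):
    weeks = 52 * years
    print(years, round(0.99 ** weeks, 3), round(0.995 ** weeks, 3))

# Even at a true 1% rate there's roughly a 7% chance the event hasn't
# shown up after five years; at 0.5% it's roughly 27%.
```

So a forecaster can rack up years of flawless-looking predictions while the disaster is still fully on schedule.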

Unless you’re Eustice, you may not care about the above example, or you may think it’s contrived, but in the realm of politics this sort of bet is fairly common. As an example, cast your mind back to the Cuban Missile Crisis. Imagine that, in addition to his advisors, Kennedy could also draw on the Good Judgement Project and superforecasting. Further imagine that the GJP comes back with the prediction that if we blockade Cuba the Russians will back down, a prediction they’re 95% confident of. Let’s further imagine that they called the odds perfectly. In that case, should the US have proceeded with the blockade? Or should we have backed down and let the USSR base missiles in Cuba? When you look only at that 95%, the answer seems obvious. But shouldn’t some allowance be made for the fact that the remaining 5% contains the possibility of all-out nuclear war?

As near as I can tell, that part isn’t explored very well by superforecasting. Generally they get a question, provide the answer, and assign a confidence level to it. There’s no methodology for saying that, despite the 95% probability, such gambles are bad ideas because if we make enough of them eventually we’ll “go bust.” None of this is to say that we should have given up and submitted to Soviet domination because it’s better than a full-on nuclear exchange. (Though there were certainly people who felt that way.) More that it was a complicated question with no great answer (though it might have been a good idea for the US not to put missiles in Turkey). But by providing a simple answer with a confidence level of 95%, superforecasting gives decision makers every incentive to substitute the easy question of whether to blockade for the true, and very difficult, questions of nuclear diplomacy. Rather than considering the difficult long-term question of whether Eustice should gamble at all, we substitute the easier question of whether he should play poker tonight. 

In the end I don’t see any bright line between a superforecaster saying there’s a 95% chance the Cuban Missile Crisis will end peacefully if we blockade, or a 99% chance Eustice will win money if he plays poker tonight, and those statements being turned into a recommendation for taking those actions, when in reality both may turn out to be very bad ideas.

IV.

All of the foregoing is an essentially Talebian critique of superforecasting, and as I mentioned earlier, Tetlock is aware of this critique. In fact he calls it, “the strongest challenge to the notion of superforecasting.” And in the final analysis it may be that we differ merely in whether that challenge can be overcome or not. Tetlock thinks it can, I have serious doubts, particularly if the people using the forecasts are unaware of the issues I’ve raised. 

Frequently, people confronted with Taleb’s ideas of extreme events and black swans counter that we can’t possibly prepare for all potential catastrophes. Tetlock is one of those people, and he goes on to say that even if we can’t prepare for everything we should still prepare for a lot of things, but that means we need to establish priorities, which takes us back to making forecasts in order to inform those priorities. I have a couple of responses to this. 

  1. It is not at all clear that the forecasts one would make about which black swans to worry about most follow naturally from superforecasting. It’s likely that superforecasting, with its emphasis on accuracy and on making predictions in the Goldilocks zone, systematically draws attention away from rare impactful events. Oord makes forecasts, but his emphasis is on identifying these events rather than on making sure the odds he provides are accurate. 
  2. I think that people overestimate the cost of preparedness, and underestimate how much preparing for one thing makes you prepared for lots of things. One of my favorite quotes from Taleb illustrates the point:

If you have extra cash in the bank (in addition to stockpiles of tradable goods such as cans of Spam and hummus and gold bars in the basement), you don’t need to know with precision which event will cause potential difficulties. It could be a war, a revolution, an earthquake, a recession, an epidemic, a terrorist attack, the secession of the state of New Jersey, anything—you do not need to predict much, unlike those who are in the opposite situation, namely, in debt. Those, because of their fragility, need to predict with more, a lot more, accuracy. 

As Taleb points out, stockpiling reserves of necessities blunts the impact of most crises. Not only that, but even preparation for rare events ends up being pretty cheap compared to what we’re willing to spend once a crisis arrives. As I pointed out in a previous post, we seem willing to spend trillions of dollars after the fact, but we won’t spend a few million to prepare for crises in advance.  

Of course, as I pointed out at the beginning, having reserves is not something the modern world is great at, because reserves are not efficient. Which is why the modern world is generally on the other side of Taleb’s statement: in debt, and trying to ensure/increase the accuracy of its predictions. Does that last part not exactly describe the goal of superforecasting? I’m not saying it can’t be used in service of identifying what things to hold in reserve or what rare events to prepare for; I’m saying it will far more often be used in the opposite way, in a quest for additional efficiencies and, as a consequence, greater fragility.

Another criticism people had about the last episode was that it lacked recommendations for what to do instead. I’m not sure that lack was as great as some people said, but still, I could have done better, and the foregoing illustrates what I would do differently. As Tabarrok said at the beginning, “The Coronavirus Pandemic may be the most warned about event in human history.” And yet, if we just consider masks, our preparedness in terms of supplies and even knowledge was abysmal. We need more reserves; we need to select areas in which to be more robust and less efficient; we need to identify black swans; and once we have, we should have credible long-term plans for dealing with them which aren’t scrapped every couple of years. Perhaps there is some place for superforecasting in there, but it certainly doesn’t seem like where you would start.

Beyond that, there are always proposals for market based solutions. In fact the top comment on the reddit discussion of the previous article was, “Most of these criticisms are valid, but are solved by having markets.” I am definitely also in favor of this solution as well, but there’s a lot of things to consider in order for it to actually work. A few examples off the top of my head:

  1. What’s the market-based solution to the Cuban Missile Crisis? How would we have used markets to navigate the Cold War with less risk? Perhaps a system where we offer prizes for predicting crises in advance. So maybe if someone took the time to extensively research the “Russia puts missiles in Cuba” scenario, they get a big reward when it actually happens?
  2. Of course there are prediction markets, which seem to be exactly what this situation calls for, but personally I’m not clear on how they capture the impact problem mentioned above, and they’re still missing more big calls than they should. Obviously part of the problem is that overregulation has rendered them far less useful than they could be, and I would certainly be in favor of getting rid of most if not all of those regulations.
  3. If you want the markets to reward someone for predicting a rare event, the easiest way to do that is to let them realize extreme profits when the event happens. Unfortunately we call that price gouging and most people are against it. 

The final solution I’ll offer is the one we already had, the solution superforecasting starts off by criticizing: loud pundits making improbable and extreme predictions. This solution was included in the last post, but people may not have thought I was serious. I am. There were a lot of individuals who freaked out every time there was a new disease outbreak, whether it was Ebola, SARS, or Swine Flu. Not only were they some of the best people to listen to when the current crisis started, we should have been listening to them even before that about the kinds of things to prepare for. And yes, we get back to the idea that you can’t act on the recommendations of every pundit making extreme predictions, but they nevertheless provide a valuable signal about what we should prepare for, a signal which superforecasting, rather than boosting, actively works to suppress.

None of the above directly replaces superforecasting, but all of them end up in tension with it, and that’s the problem.

V.

It is my hope that I did a better job of pointing out the issues with superforecasting on this second go around. Which is not to say the first post was terrible, but I could have done some things better. And if you’ll indulge me a bit longer (and I realize if you’ve made it this far you have already indulged me a lot) a behind the scenes discussion might be interesting. 

It’s difficult to produce content for any length of time without wanting someone to see it, and so while ideally I would focus on writing things that pleased me, with no regard for any other audience, one can’t help but try the occasional experiment in increasing eyeballs. The previous superforecasting post was just such an experiment, in fact it was two experiments. 

The first experiment was one of title selection. Do any research into internet marketing and you will be told that choosing your title is key. Accordingly, while it has since been changed to “limitations,” the original title of the post was “Pandemic Uncovers the Ridiculousness of Superforecasting”. I was not entirely comfortable with the word “ridiculousness,” but I decided to experiment with a more provocative word to see if it made any difference. And I’d have to say that it did. In their criticism, a lot of people mentioned that word, or the attitude implied by the title in general. But it also seemed that more people read the post in the first place because of the title. Leading to the perpetual conundrum: saying superforecasting is ridiculous was obviously going too far, but would the post have attracted fewer readers without that word? If we assume the body of the post was worthwhile (which I do, or I wouldn’t have written it), is it acceptable to use a provocative title to get people to read it? For the vast majority of the internet the answer is obviously a resounding yes, but I’m still not sure, and in any case I ended up changing it later.

The second experiment was less dramatic, and one that I conduct with most of my posts. While writing them I imagine an intended audience. In this case the intended audience was fans of Nassim Nicholas Taleb, in particular people I had met while at his Real World Risk Institute back in February. (By the way, they loved it.) It was only afterwards, when I posted it as a link in a comment on the Slate Star Codex reddit that it got significant attention from other people, who came to the post without some of the background values and assumptions of the audience I’d intended for. This meant that some of the things I could gloss over when talking to Taleb fans were major points of contention with SSC readers. This issue is less binary than the last one, and other than writing really long posts it’s not clear what to do about it, but it is an area that I hope I’ve improved on in this post, and which I’ll definitely focus on in the future.

In any event the back and forth was useful, and I hope that I’ve made some impact on people’s opinions on this topic. Certainly my own position has become more nuanced. That said if you still think there’s something I’m missing, some post I should read or video I should watch please leave it in the comments. I promise I will read/listen/watch it and report back. 


Things like this remind me of the importance of debate, of the grand conversation we’re all involved in. Thanks for letting me be part of it. If you would go so far as to say that I’m an important part of it consider donating. Even $1/month is surprisingly inspirational.


Pandemic Uncovers the Limitations of Superforecasting


I.

As near as I can reconstruct, sometime in the mid-80s Philip Tetlock decided to conduct a study on the accuracy of people who made their living “commenting or offering advice on political and economic trends”. The study lasted around twenty years and involved 284 people. If you’re reading this blog you probably already know the outcome of that study, but just in case you don’t, or need a reminder, here’s a summary.

Over the course of those twenty years Tetlock collected 82,361 forecasts, and after comparing those forecasts to what actually happened he found:

  • The better known the expert, the less reliable they were likely to be.
  • Their accuracy was inversely related to their self-confidence, and past a certain point to their knowledge as well. (More actual knowledge about, say, Iran led to worse predictions about Iran than less knowledge did.)
  • Experts did no better at predicting than the average newspaper reader.
  • When asked to guess between three possible outcomes for a situation (status quo, getting better on some dimension, or getting worse), the actual expert predictions were less accurate than just naively assigning a ⅓ chance to each possibility.
  • Experts were largely rewarded for making bold and sensational predictions, rather than for making predictions which later turned out to be true.

For those who had given any thought to the matter, Tetlock’s discovery that experts are frequently, or even usually, wrong was not all that surprising. Certainly he wasn’t the first to point it out, though the rigor of his study was impressive, and he definitely helped spread the idea with his 2005 book Expert Political Judgment: How Good Is It? How Can We Know? Had he stopped there we might be forever in his debt, but from pointing out that the experts were frequently wrong, he went on to wonder: is there anyone out there who might do better? And thus began the superforecaster/Good Judgement Project.

Most people, when considering the quality of a prediction, only care about whether it was right or wrong, but in the initial study, and in the subsequent Good Judgement Project, Tetlock also asks people to assign a confidence level to each prediction. Thus someone might say they’re 90% sure that Iran will not build a nuclear weapon in 2020, or 99% sure that the Korean Peninsula will not be reunited. When these predictions are graded, the ideal is for 90% of the 90% predictions to turn out to be true, not 95% or 85%; in the former case the forecaster was underconfident, and in the latter overconfident. (For obvious reasons the latter is far more common.) Having thus defined a good forecast, Tetlock set out to see if he could find people who were better than average at making predictions. He did, and they became the subject of his next book, Superforecasting: The Art and Science of Prediction.
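The calibration standard described above can be sketched as a simple bucketed tally. The data here is made up purely for illustration:

```python
from collections import defaultdict

def calibration(predictions):
    """predictions: (stated_confidence, came_true) pairs.
    Returns the observed hit rate per confidence level; a well-calibrated
    forecaster's 0.9 predictions come true ~90% of the time."""
    hits, totals = defaultdict(int), defaultdict(int)
    for conf, came_true in predictions:
        totals[conf] += 1
        hits[conf] += came_true
    return {c: hits[c] / totals[c] for c in totals}

# Hypothetical record: 20 predictions at 90% confidence, 17 came true.
record = [(0.9, True)] * 17 + [(0.9, False)] * 3
print(calibration(record))  # {0.9: 0.85}: overconfident at the 90% level
```

A hit rate of 0.85 at the 90% level would count as overconfidence under Tetlock’s grading; 0.95 would count as underconfidence.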

The book’s primary purpose is to explain what makes a good forecaster and what makes a good forecast. As it turns out, one of the key findings is that superforecasters are far more likely to predict that things will continue as they have, while the forecasters who appear on TV, the subjects of Tetlock’s initial study, are far more likely to predict some spectacular new development. The reason for this should be obvious: that’s how you get noticed; that’s what gets the ratings. But if you’re more interested in being correct (at least more often than not), then you predict that things will basically be the same next year as they were this year. And I am not disparaging that; we should all want to be more correct than not. But trying to maximize your correctness does have one major weakness. And that is why, despite Tetlock’s decades-long effort to improve forecasting, I am going to argue that Tetlock’s ideas and methodology have actually been a source of significant harm, and have made the world less prepared for future calamities rather than more.

II.

To illustrate what I mean, I need an example. This is not the first time I’ve written on this topic; I actually did a post on it back in January of 2017, and I’ll probably be borrowing from it fairly extensively, including re-using my example of a Tetlockian forecaster: Scott Alexander of Slate Star Codex.

Now before I get into it, I want to make it clear that I like and respect Alexander A LOT, so much so that until recently, and largely for free (there was a small Patreon), I read and recorded every post from his blog and distributed it as a podcast. The reason Alexander can be used as an example is that he’s so punctilious about trying to adhere to the “best practices” of rationality, and Tetlock’s methods are precisely what occupy that position at the moment. This post is an argument against that position, but for now those methods are firmly ensconced.

Accordingly, Alexander does a near perfect job of not only making predictions but assigning a confidence level to each of them. Also, as is so often the case he beat me to the punch on making a post about this topic, and while his post touches on some of the things I’m going to bring up, I don’t think it goes far enough, or offers its conclusion quite as distinctly as I intend to do. 

As you might imagine, his post and mine were motivated by the pandemic, in particular by the fact that traditional methods of prediction appeared to have been caught entirely flat-footed, including the superforecasters. Alexander mentions in his post that “On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were).” So by that metric the superforecasters failed, something both Alexander and I agree on, but I think it goes beyond missing a single prediction. I think the pandemic illustrates a problem with the entire methodology. 

What is that methodology? The goal of the Good Judgement Project and similar efforts is to improve forecasting specifically by increasing the proportion of accurate predictions. This is their incentive structure; it’s how they’re graded; it’s how Alexander grades himself every year. It encourages two secondary behaviors. The first I’ve already mentioned: the easiest way to be correct is to predict that the status quo will continue. This is fine as far as it goes, since the status quo largely does continue, but the flip side is a bias against extreme events. Such events are extreme in large part because they’re improbable, so if you want to be correct more often than not, they’re not going to get any attention. Meaning the superforecasters’ skill set and incentive structure are ill suited to extreme events (as evidenced by the 3% probability they assigned to the magnitude of the pandemic, mentioned above). 

The second incentive is to increase the number of their predictions. This might seem unobjectionable; why wouldn’t we want more data to evaluate them by? The problem is that not all predictions are equally difficult. To give an example from Alexander’s most recent list of predictions (and again, it’s not my intention to pick on him; I’m using him as an example more for the things he does right than the things he does wrong): out of 118 predictions, 80 were about things in his personal life, and only 38 were about issues the larger world might be interested in.

Indisputably, it’s easier for someone to predict what their weight will be, or whether they will lease the same car when their current lease is up, than it is to predict whether the Dow will end the year above 25,000. Even predicting whether one of his friends will still be in a relationship is probably easier. But more than that, the consequences of his personal predictions being incorrect are much less than the consequences of his (or other superforecasters’) predictions about the world as a whole being wrong. 

III.

The first problem to emerge from all of this is that Alexander and the superforecasters rate their accuracy by considering all of their predictions, regardless of importance or difficulty. Thus, if they completely miss the prediction mentioned above about the number of COVID-19 cases on March 20th, but succeed in predicting when British Airways will resume service to mainland China, their success rate is 50%, even though for nearly everyone the impact of the former event is far greater than that of the latter! And it’s worse than that: in reality there are a lot more “British Airways” predictions being made than predictions about case counts, meaning superforecasters can be judged largely successful despite missing nearly all of the really impactful events. 
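One way to see the gap is to compare a raw hit rate with an impact-weighted one. The two predictions and their weights below are invented purely for illustration:

```python
# Two predictions: one trivial, one hugely consequential.
# The impact weights are made-up numbers for this sketch.
predictions = [
    # (description,                               correct, impact)
    ("British Airways resumes China service",     True,    1),
    ("Fewer than 200k COVID-19 cases by Mar 20",  False,   100),
]

raw = sum(ok for _, ok, _ in predictions) / len(predictions)
weighted = (sum(ok * w for _, ok, w in predictions)
            / sum(w for _, _, w in predictions))

print(raw)                 # 0.5: the scorecard says "half right"
print(round(weighted, 3))  # 0.01: nearly all the impact was missed
```

The raw score says the forecaster went one for two; weighting by impact says they missed almost everything that mattered.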

This leads us to the biggest problem of all: the methodology of superforecasting has no system for determining impact. To put it another way, I’m sure that the Good Judgement Project and other people following the Tetlockian methodology have made thousands of forecasts about the world. Let’s be incredibly charitable and assume that 99% of those predictions were correct, that 99% of everything they made predictions about came to pass. That sounds fantastic, but depending on what’s in the 1% they didn’t predict, the world could still be a vastly different place than what they expected. And that assumes their predictions encompass every possibility. In reality there are lots of very impactful things which they might never have considered assigning a probability to. In fact, they could be 100% correct about everything they predicted and still be caught entirely flat-footed by the future, because something happened they never even considered. 

As far as I can tell there were no advance predictions of the probability of a pandemic by anyone following the Tetlockian methodology, say in 2019 or earlier. Or any list where “pandemic” was #1 on the “list of things superforecasters think we’re unprepared for”, or really any indication at all that people who listened to superforecasters were more prepared for this than the average individual. But the Good Judgement Project did try their hand at both Brexit and Trump and got both wrong. This is what I mean by the impact of the stuff they were wrong about being greater than the stuff they were correct about. When future historians consider the last five years or even the last 10, I’m not sure what events they will rate as being the most important, but surely those three would have to be in the top 10. They correctly predicted a lot of stuff which didn’t amount to anything and missed predicting the few things that really mattered.

That is the weakness of trying to maximize being correct. Being more right than wrong is certainly desirable, but in general the few things the superforecasters end up being wrong about are far more consequential than all the things they’re right about. I also suspect this feeds a classic cognitive bias, where it’s easy to ascribe everything they correctly predicted to skill while every miss gets put down to bad luck. Which is precisely what happens when something bad occurs.

Both now and during the financial crisis, when experts are asked why they didn’t see it coming or why they weren’t better prepared, they are prone to retort that these events are “black swans”. “Who could have known they would happen?” And as such, “There was nothing that could have been done!” This is the ridiculousness of superforecasting: of course pandemics and financial crises are going to happen; any review of history would reveal that few things are more certain. 

Nassim Nicholas Taleb, who coined the term, has come to hate it for exactly this reason: people use it to excuse a lack of preparedness and inaction in general, when the concept is both more subtle and more useful. Those who throw up their hands and say “It was a black swan!” are making an essentially Tetlockian claim: “Mostly we can predict the future, except on a few rare occasions where we can’t, and those are impossible to do anything about.” The point of Taleb’s black swan theory, and to a greater extent his idea of being antifragile, is that you can’t predict the future at all, and that convincing yourself you can distracts you from hedging against, lessening your exposure to, and preparing for the really impactful events which are definitely coming.

From a historical perspective, financial crashes and pandemics have happened a lot; businesses and governments really had no excuse for not making some preparation for the possibility that one or the other, or as we’re discovering, both, would happen. And yet they didn’t. I’m not claiming that this is entirely the fault of superforecasting, but superforecasting is part of the larger movement of convincing ourselves that we have tamed randomness and banished the unexpected. And if there’s one lesson from the pandemic greater than all others, it should be that we have not.

Superforecasting and the blindness to randomness are also closely related to the drive for efficiency I mentioned recently.  “There are people out there spouting extreme predictions of things which largely aren’t going to happen! People spend time worrying about these things when they could be spending that time bringing to pass the neoliberal utopia foretold by Steven Pinker!” Okay, I’m guessing that no one said that exact thing, but boiled down this is their essential message. 

I recognize that I’ve been pretty harsh here, and I also recognize that it might be possible to have the best of both worlds: the antifragility of Taleb with the rigor of Tetlock. Indeed, in Alexander’s recent post that is basically what he suggests. Rather than take superforecasting predictions as some sort of gold standard, we should use them to do “cost benefit analysis and reason under uncertainty.” That, as the title of his post suggests, this was not a failure of prediction but a failure of preparation, implying that predicting the future can be different from preparing for the future. And I suppose they can be; the problem is that people are idiots, and they won’t disentangle these two ideas. For the vast majority of people and corporations and governments, predicting the future and preparing for the future are the same thing. And when combined with a reward structure which emphasizes efficiency/fragility, the only thing they’re going to pay attention to is the rosy predictions of continued growth, not preparation for the dire catastrophes which are surely coming.

To reiterate, superforecasting, by focusing on the number of correct predictions, without considering the greater impact of the predictions they get wrong, only that such missed predictions be few in number, has disentangled prediction from preparedness. What’s interesting is that while I understand the many issues with the system they’re trying to replace, of bloviating pundits making predictions which mostly didn’t come true, that system did not suffer from this same problem.

IV.

In the leadup to the pandemic there were many people predicting that it could end up being a huge catastrophe (including Taleb, who said it to my face) and that we should take draconian precautions. These were generally the same people who issued the same warnings about all previous new diseases, most of which ended up fizzling out before causing significant harm, Ebola for example. Most people are now saying we should have listened to them, at least with respect to COVID-19; but these are also generally the same people who dismissed those previous worries as pessimism, panic, or outright craziness. It’s easy to see now that the warners were not crazy, and this illustrates a very important point. Because of the nature of black swans and negative events, if you’re prepared for a black swan it only has to happen once for your caution to be worth it, but if you’re not prepared, then for that to have been a wise decision it has to NEVER happen. 

The financial crash of 2007-2008 is an interesting example of this phenomenon. An enormous number of financial models were based on the premise that the US had never had a nationwide decline in housing prices. It was a true and accurate premise for decades, but the one year it wasn’t true made the dozens of years when it was almost entirely inconsequential.

To take a more extreme example imagine that I’m one of these crazy people you’re always hearing about. I’m so crazy I don’t even get invited on TV. Because all I can talk about is the imminent nuclear war. As a consequence of these beliefs I’ve moved to a remote place and built a fallout shelter and stocked it with a bunch of food. Every year I confidently predict a nuclear war and every year people point me out as someone who makes outlandish predictions to get attention, because year after year I’m wrong. Until one year, I’m not. Just like with the financial crisis, it doesn’t matter how many times I was the crazy guy with a bunker in Wyoming, and everyone else was the sane defender of the status quo, because from the perspective of consequences they got all the consequences of being wrong despite years and years of being right, and I got all the benefits of being right despite years and years of being wrong.

The “crazy” people who freaked out about all the previous potential pandemics are in much the same camp. Assuming they actually took their own predictions seriously and prepared, they got all the benefits of being right this one time despite many years of being wrong, and we got all the consequences of being wrong, in spite of years and years of not only forecasts but SUPER forecasts telling us there was no need to worry.


I’m predicting, with 90% confidence, that you will not find this closing message to be clever. This is an easy prediction to make, because once again I’m just using the methodology of predicting that the status quo will continue. Predicting that you’ll donate is the high-impact rare event, and I hope that even if I’ve been wrong every other time, this time I’m right.


Worries for a Post COVID-19 World

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


It’s hard to imagine that the world will emerge from the COVID-19 pandemic without undergoing significant changes, and given that it’s hard to focus on anything else at the moment, I thought I’d write about some of those potential changes, as a way of talking about the thing we’re all focused on, but in a manner that’s less obsessed with the minutiae of what’s happening right this minute.

To begin with there’s the issue of patience I mentioned in my last post. My first prediction is that special COVID-19 measures will still be in force two years from now, though not necessarily continuously. Meaning I’m not predicting that the current social distancing rules will still be in place two years from now, the prediction is more that two years from now you’ll still be able to read about an area that has reinstituted them after a local outbreak. Or to put it another way, COVID-19 will provoke significantly more worry than the flu even two years from now.

My next prediction is that some industries will never recover to their previous levels. In order of most damaged to least damaged these would be:

  1. Commercial Realty: From where I sit this seems like the perfect storm for commercial realty. You’ve got a generalized downturn that’s affecting all businesses. Then you have the demise of WeWork (the largest office tenant in places like NYC), which was already in trouble and has now stopped paying many of its leases. But on top of all of that you have numerous businesses that have just been forced into letting people work from home, and some percentage of those individuals and companies are going to realize it works better and costs less. I’m predicting a greater than 20% decrease in the value of commercial real estate by the time it’s all over.
  2. Movie theaters: I’m predicting 15% of movie theaters will never come back. More movies will have a digital only release, and such releases will get more marketing.
  3. Cruises: The golden age of cruises is over. I’m predicting that whatever the cruise industry made in 2019, it will be a long time before we see that amount again. (I’m figuring around a decade.)
  4. Conventions: I do think they will fully recover, but I predict that for the big conventions it will be 2023 before they regain their 2019 attendance numbers.
  5. Sports: I’m not a huge sports fan, so I’m less confident about a specific prediction, but I am predicting that sports will look different in some significant way: lower attendance, a drop in the value of sports franchises, leagues which never recover, etc. At a minimum I’m predicting that IF the NFL season starts on time, it will do so without people in attendance at the stadiums.

As you can tell, most of these industries are ones that pack a large number of people together for a significant period of time, and regardless of whether I’m correct on every specific prediction, I see no way around the conclusion that large gatherings of people will be the last thing to return to a pre-pandemic normal.

One thing that would help speed up this return to normalcy is a push to eventually test everyone, which is another prediction I made a while back, though I think it was on Twitter. (I’m dipping my toe in that lake, but it’s definitely not my preferred medium; if you want to follow me I’m @Jeremiah820.) When I say test everyone, I don’t mean 100%, or even 95%, but mass testing, where we’re doing orders of magnitude more than we’re doing right now. Along the lines of what’s proposed in this Manhattan Program for Testing article.

Of course one problem with doing that is coming up with the necessary reagents. And while this prediction is somewhat at odds with the last one, it seems ever more clear that when it comes down to it, the pandemic is a logistical problem, and that the long-term harm is mostly going to come from the delay in getting, or being able to produce, what we need. For example, our mask supply was outsourced to Southeast Asia, most of our drug manufacturing has been outsourced to there and India, and most of our antibiotics are made in China and Lombardy, Italy (yes, the area that was hit the hardest). The biggest problem with testing everyone appears to be getting those reagents; I’m not sure where the bottleneck is, but it’s obviously one of the biggest ones of all. In theory you should be seeing an exponential increase in the amount of testing, similar to the exponential growth in the number of diagnoses (since every diagnosis needs a test), but instead the testing statistics are pretty lumpy, and in my own state, after an initial surge, the number of tests being done has slipped back to the level of two weeks ago.

Thus far we’ve mostly talked about the immediate impact of the pandemic and its associated lockdown, but I’m also very interested in what the world looks like after things have calmed down. (I hesitate to use the phrase “returned to normal” because it’s going to be a long time before that happens.) I already mentioned in my last post that I think this is going to have a significant impact on US-China relations, and in case it wasn’t clear, I’m predicting that they’ll get worse. As to how: I predict that on the US side the narrative that it’s all China’s fault will become more and more entrenched, with greater calls to move manufacturing out of China and more support for Trump’s tariffs. On the Chinese side, I expect they’re going to try to take advantage of the weakness (perceived or real, it’s hard to say) of the US and Europe to sew up their control of the South China Sea, and maybe make more significant moves towards re-incorporating Taiwan.

Turning to more domestic concerns, I expect that we’ll spend at least a little more money on preparedness, though it will still be entirely overwhelmed (by several orders of magnitude) by the money we spend trying to cure the problem after it happens rather than preventing it before it does. I also fear that we’ll fall into the traditional trap of being well prepared for the last crisis while actually spending less money on other potential crises. As a concrete prediction, I think the budget for the CDC will go up, but that budgets for things like nuclear non-proliferation and infrastructure hardening against EMPs will remain flat or actually go down.

Also on the domestic front, this is more of a hope than a prediction, but I expect there will be a push towards having more redundancy. That we will see greater domestic production of certain critical emergency supplies, perhaps tax credits for maintaining surge capacity (as I mentioned in a previous post), and possibly even an antitrust philosophy which is less about predatory monopolies and more about making industries robust. That we will work to make things a little less efficient in exchange for making them less fragile.

From here we move on to more fringe issues, though in spite of their fringe character these next couple of predictions are actually the ones I feel most confident about. To start with, I have some predictions to make concerning the types of conspiracy theories this crisis will spawn. Obviously, because of the time in which we live, there is already a whole host of conspiracy theories about COVID-19. But my prediction is that when things finally calm down, one theory in particular will end up claiming the bulk of the attention: the theory that COVID-19 was a conspiracy to allow the government to significantly increase its power, and in particular its ability to conduct surveillance. As far as specifics: the number of people who identify as “truthers” (9/11 conspiracy theorists) currently stands at 20%. I predict that the number of COVID conspiracy theorists will be at least 30%.

But civil libertarians are not the only ones who see more danger in the response to the pandemic than in the pandemic itself. I’m also noticing that a surprising number of Christians view it as a huge threat to religion as well, with many of them feeling that the declaration of churches as “non-essential” is very troubling just on its face, and that furthermore it’s a violation of the First Amendment. This mostly doesn’t include Mormons; we were in fact one of the first denominations to shut everything down. But despite this I do have a certain amount of sympathy for the position, particularly if the worst accusations turn out to be true. Despite my sympathies, I am in total agreement that megachurches should not continue conducting meetings, and that in fact any meeting of more than a few people is a bad idea. But consider this claim:

Christian churches worldwide have suffered the greatest, most catastrophic blow in their entire history, and – such is the feebleness of modern faith – have barely noticed (and barely even protested). 

There are many enforced closures and lock-downs of many institutions and buildings in England now; but there are none, I think, so severe and so absolute as the lock-down of Church of England churches.

Take a look for yourself – browse around. 

The instructions make clear that nobody should enter a church building, not even the vicar (even the church yard is supposed to be locked) – except in the case of some kind of material emergency like a gas leak. And, of course: all Christian activities must cease.

This is specifically directed at the church’s Christian activities. As a telling example, a funeral can be conducted in secular buildings, but the use of church buildings for a religious funeral is explicitly forbidden.

Except, wait for it… Church buildings can be used for non-Christian activities – such as blood donation, food banks or as night shelters… 

English churches are therefore – by official decree – now deconsecrated shells.

Church buildings are specifically closed for all religious activities – because these are allegedly too dangerous to allow; but at the same time churches are declared to be safe-enough, and allowed to remain open, for various ‘essential’ secular activities.

What could be clearer than that? 

I’ve looked at the link, and the claims seem largely true, though sensationalized, and in some cases it looks like the things banned by the Church of England were banned by the state a few days later. But you can see where it might seem like churches are being especially singled out for additional restrictions. And, while I’m sympathetic, I do not think this means that there’s some sort of wide-ranging conspiracy. But that doesn’t mean other people won’t, and conspiracy theories have been created from evidence more slender than this. (Also, stuff like this PVP Comic doesn’t help.) Which leads to another prediction: the pandemic will worsen relations between Christians (especially evangelicals) and mainstream governmental agencies (the bureaucracy and more middle-of-the-road candidates).

A metric for whether this comes to pass is somewhat difficult to specify, but insofar as Trump is seen as out of the mainstream, and as bucking consensus as far as the pandemic, one measure might be if his share of the evangelical vote goes up. Though I agree there could be lots of reasons for that. Which is to say I feel pretty confident in this prediction, but I wouldn’t blame you if you questioned whether I had given you enough for it to truly be graded.

Finally, in a frightening combination of fringe concerns, eschatology, things with low probability, and apocalyptic pandemics, we arrive at my last prediction. But first an observation, have you noticed how many stories there have been about the reduction in pollution and greenhouse gases as a result of the pandemic? If you have, does it give you any ideas? Was one of those ideas, “Man, if I was a radical environmentalist, I think I’d seriously consider engineering a pandemic just like this one as a way of saving the planet!”? No? Maybe it’s just me that had this idea, but let’s assume that in a world of seven billion people more than one person would have had this idea.

Certainly, even before the pandemic, there was a chance that someone would intentionally engineer a pandemic, and I don’t think I’m stretching things too much to imagine that a radical environmentalist might be the one inclined to do it, though you could also imagine someone from the voluntary human extinction movement deciding to start an involuntary human extinction movement via this method. My speculation would be that seeing COVID-19, with its associated effects on pollution and greenhouse gases, has made this scenario more likely.

How likely? Still unlikely, but more likely than we’re probably comfortable with. A recent book by Toby Ord, titled The Precipice (which I have yet to read but plan to soon), is entirely devoted to existential risks, and Ord gives an engineered pandemic a 1 in 30 chance of wiping out all of humanity in the next 100 years. From this two questions follow. The first, closely related to my prediction: these odds were assigned before the pandemic, have they gone up since then? And the second: if there’s a 1 in 30 chance of an engineered pandemic killing EVERYONE, what are the chances of a pandemic which is 10x worse than COVID-19 but doesn’t kill everyone? Surely higher than 1 in 30, just by the nature of compound probability: extinction requires both a pandemic that severe and that pandemic finishing everyone off, so the less extreme outcome has to be the more likely one. But is it 1 in 10? 1 in 5?
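The compound-probability point can be made concrete with a small sketch. Ord’s 1-in-30 figure is from his book; the conditional probability below is a purely illustrative assumption:

```python
# Ord's 1-in-30 is for the most extreme outcome: an engineered pandemic
# that kills everyone.  Treating that as a compound event puts a floor
# under the probability of the broader "far worse than COVID-19" event:
#   P(extinction) = P(severe) * P(kills everyone | severe) <= P(severe)
# The conditional probability below is a purely illustrative assumption.

p_extinction = 1 / 30            # Ord's estimate, next 100 years
p_kill_all_given_severe = 0.25   # illustrative assumption

p_severe = p_extinction / p_kill_all_given_severe
assert p_severe >= p_extinction  # the less extreme event is more likely
print(round(p_severe, 3))        # roughly 1 in 7.5 under this assumption
```

Whatever value the conditional probability actually takes, dividing by a number no greater than one can only push the estimate for the broader event up, never down.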

My prediction doesn’t concern those odds. My prediction is about whether someone will make an attempt. This attempt might end up being stopped by the authorities, or it might be equivalent to the sarin gas attack on the Tokyo Subway, or it might be worse than COVID-19. My final prediction is that in the next 20 years there is a 20% chance that someone will attempt to engineer a disease with the intention of dramatically reducing the number of humans. Let’s hope that I’m mistaken.


For those who care about such things I would assign a confidence level of 75% for all of the other predictions except the two about conspiracy theories, my confidence level there is 90%. My confidence level that someone will become a donor based on this message is 10%, so less than the chances of an artificial plague, and once again, I hope I’m wrong. 


Predictions: Looking Back to 2019 and Forward to 2020

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


At the beginning of 2017 I made some predictions. These were not predictions just for the coming year, but rather predictions for the next 100 years: a list of black swans that I thought either would or would not come to pass. (War? Yes. AI Singularity? No.) Two years later I haven’t been wrong or right yet about any of them, but that’s what I expected; they are black swans after all. But I still feel the need, on the occasion of the new year, to comment on the future, which means that in the absence of anything new to say about my 100 year predictions, I’ve had to turn to more specific predictions. Which is what I did last year. And like everyone (myself included) you’re probably wondering how I did.

I started off by predicting: All of my long-standing predictions continue to hold up, with some getting a little more likely and some a little less, but none in serious danger.

After doing my annual review of them (something I would recommend, particularly if you weren’t around when I initially made those predictions) this continues to be true. As one example, I predicted that immortality would never be achieved. My impression has always been that transhumanists considered this one of the easier goals to accomplish, and yet we’ve actually been going the opposite direction for several years, with life expectancy falling year after year, including the most recent numbers.

As I was writing this, the news about GPT-2’s ability to play chess came out. Which, I’ll have to admit, does appear to be a major step towards falsifying my long term prediction that we will never have a general AI that can do everything a human can do, but I still think we’ve got a long way to go, farther than most people think.

I went on to predict: Populism will be the dominant force in the West for the foreseeable future. Globalism is on the decline if not effectively dead already.

I will confess that I’m not entirely sure why I limited it to “the West”. Surely this was and is true there. The historic general election win by the Tories to finally push Brexit through, the not quite dead Yellow Vests Movement in France, and the popularity of Sanders, Warren and Trump in the run up to the election are all examples of this. But it’s really outside of the West where populism made itself felt in 2019. One example, of course, is the ongoing protests in Hong Kong, as well as protests in such diverse places as Colombia, Sudan and Iran. But it’s the protests in Chile and India that I want to focus on.

The fascinating thing about the Chilean protests is that Chile was one of the wealthiest countries in South America, and seemed to be doing great, at least from a globalist perspective. But then, because of a 4% rate increase in public transportation fees in the capital of Santiago, mass protests broke out, encompassing over a million people and involving demands for a new constitution. I used the term “globalist perspective” just now, which felt kind of clunky, but it also gets at what I mean. From the perspective of the free flow of capital, and metrics like GDP and trade, Chile was doing great. Beyond that, Chile was ranked 28th out of 162 countries on the freedom index, so it had good institutions as well. But for some reason, even with all that, there was another level on which its citizens felt things were going horribly. It’s an interesting question whether things are actually going horribly, or whether the modern world has created unrealistic expectations, but neither is particularly encouraging, and of the two, unrealistic expectations may be worse.

Turning to India, I ended last year’s post by quoting from Tyler Cowen, “Hindu nationalism [is] on the rise, [but] India seems to be evolving intellectually in a multiplicity of directions, few of them familiar to most Americans.” I think he was correct, but also “Hindu nationalism” is a very close cousin, or even a sibling to Hindu populism, and, as is so often the case, an increase in one kind of populism has led to increases in other sorts of populism. In India’s case to increased expressions of Muslim populism. Which has resulted in huge rallies taking place in the major cities over the last few weeks in protest of an immigration law.

Speaking more generally, my sense is that these populist uprisings come in waves. There was the Arab Spring. (Apparently Chile is part of the Latin America Spring.) There was the whole wave of governments changing immediately after the fall of the Soviet Union, which included Tiananmen Square. (Which unfortunately did not result in a change of government.) In 1968 there were worldwide protests and if you want to go really far back there were the revolutions of 1848. It seems clear that we’re currently seeing another wave. (Are they coming more frequently?) And the big question is whether or not this wave has crested yet. My prediction is that it hasn’t, that 2020 will see a spreading and intensification of such protests. 

My next prediction concerned the fight against global warming, and I predicted: Carbon taxes are going to be difficult to implement, and will not see widespread adoption.

Like many of my predictions this is more long term, but still accurate. To the best of my knowledge, while there was lots of Sturm und Drang about climate change, mostly involving Greta Thunberg, I don’t recall major climate change related policies being implemented by any government, and certainly not by the US or China, the two biggest emitters. Looking back, this prediction once again relates to populism, in particular the Yellow Vest Movement, which demanded that the government not go ahead with the scheduled 2019 increase to the carbon tax, which is in fact exactly what happened. Also, Alberta repealed its carbon tax in 2019. On further reflection, this particular prediction seems too specific to add to the list of things I continue to track, but it does seem largely correct.

From there I went on to predict: Social media will continue to change politics rapidly and in unforeseen ways.

When people talk about the protests mentioned above, social media always comes into play. It’s difficult to imagine that the Hong Kong protests could have lasted as long as they have without platforms like Telegram, and it’s equally difficult to imagine how the Chilean protests could have formed so quickly, and over something which otherwise seems so minor, in the absence of social media.

But of course the true test will be the 2020 election. And this is where I continue to maintain that we can’t yet predict how social media will impact things. I would be surprised if some of the avenues for abuse which existed in 2016 hadn’t been closed down, but I would be equally surprised if new avenues of abuse don’t open up.

My next prediction was perhaps my most specific: There will be a US recession before the next election. It will make things worse.

Despite its specificity, I could have done better. What I was getting at is that a softening economy will be a factor in the next election. This might take the form of a formal recession (that is, negative GDP growth for two successive quarters) or it might be a more general loss of consumer confidence without a formal recession. In particular, I could see a recession starting before the election but not having time to rack up the full two quarters of negative growth before the election actually takes place.
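The “two successive quarters” rule of thumb is easy to state in code. A minimal sketch, with made-up quarterly growth figures:

```python
# The "technical recession" rule of thumb: two successive quarters
# of negative GDP growth.  The data below is entirely hypothetical.

def in_technical_recession(quarterly_growth: list[float]) -> bool:
    """True if any two consecutive quarters both show negative growth."""
    return any(a < 0 and b < 0
               for a, b in zip(quarterly_growth, quarterly_growth[1:]))

# Hypothetical quarterly GDP growth figures (percent, annualized):
print(in_technical_recession([2.1, 1.8, -0.4, 1.2]))   # False: only one dip
print(in_technical_recession([2.1, -0.4, -1.1, 0.6]))  # True: Q2 and Q3
```

Which is exactly the distinction drawn above: a single bad quarter (the first series) is a slowdown but not a formal recession, while the second series crosses the threshold.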

In any event I stand by this prediction, though I continue to be surprised by the growth of the economy. As you may have heard the US is currently in the longest economic expansion in history. And if I’m wrong, and the economy continues to grow up through the election, then I’ll make a further prediction, Trump will be re-elected. The Economist agrees with me, in their capsule review of the coming year:

Having survived the impeachment process, Donald Trump will be re-elected president if the American economy remains strong and the opposition Democrats nominate a candidate who is perceived to be too far to the left. The economy is, however, weakening, and a slump of some kind in 2020 is all but certain, lengthening Mr Trump’s odds.

As long as we’re on the subject of the economy, I came across something else that was very alarming the other day. 

Waves of debt accumulation have been a recurrent feature of the global economy over the past fifty years. In emerging and developing countries, there have been four major debt waves since 1970. The first three waves ended in financial crises—the Latin American debt crisis of the 1980s, the Asia financial crisis of the late 1990s, and the global financial crisis of 2007-2009.

A fourth wave of debt began in 2010 and debt has reached $55 trillion in 2018, making it the largest, broadest and fastest growing of the four. While debt financing can help meet urgent development needs such as basic infrastructure, much of the current debt wave is taking riskier forms. Low-income countries are increasingly borrowing from creditors outside the traditional Paris Club lenders, notably from China. Some of these lenders impose non-disclosure clauses and collateral requirements that obscure the scale and nature of debt loads. There are concerns that governments are not as effective as they need to be in investing the loans in physical and human capital. In fact, in many developing countries, public investment has been falling even as debt burdens rise. 

That’s from a World Bank report. Make of it what you will, but the current conditions certainly sound like the previous conditions which ended in crisis and catastrophe, and if the report is to be believed, conditions are much worse now than on the previous three occasions. I understand that if a crisis does happen there’s some chance it won’t affect the US, but given how interconnected the world economy is, that doesn’t seem particularly likely. I guess we’ll have to wait and see.

I should mention that one of my long term predictions is that: The US government’s debt will eventually be the source of a gigantic global meltdown. And while the debt mentioned in the report is mostly in countries outside of the US, it is in the same ballpark.

Moving on, my next prediction was: Authoritarianism is on the rise elsewhere, particularly in Russia and China.

I would think that the Hong Kong protests are definitive proof of rising, or at least continuing, authoritarianism in China. On top of that, 2019 saw an increase in the repression of the Uyghurs, most notably their internment in re-education camps, in spite of the greater visibility and condemnation those camps have attracted. But what about Russia? Here things seem to have been quieter than I expected, and I will admit that I was too pessimistic when it came to Russia. Though they are still plenty authoritarian, and it will be interesting to see what happens as we get closer to the end of Putin’s term in 2024.

Those two countries aside, I actually argued that authoritarianism is on the rise generally, and this seems to be confirmed by Freedom House, which said that in 2018 freedom declined in 68 countries while increasing in only 50, continuing 13 consecutive years of decline. You did read that correctly: I gave the numbers for 2018, because those are the most recent available, but I’m predicting that when the 2019 numbers come in, they’ll also show a net decline in freedom.

My final specific prediction from last year was: The jockeying for regional power in the Middle East will intensify.

Well, if this didn’t happen in 2019 (and I think it did) then it certainly happened in 2020 when the US killed Qasem Soleimani. Though to be fair, while the killing definitely checks the “intensify” box, it’s not quite as good at checking the “regional power” box. Though any move that knocks Iran down a peg has to be good news for at least one of the other powers in the region, which creates a strong suspicion that the US’s increasing aggressiveness towards Iran might be on behalf of one or more of those other powers.

Still, it was the US who did it, and it’s really in that context that it’s the most interesting. What does the Soleimani killing say about ongoing American hegemony? First, it’s hard, but not impossible to imagine any president other than Trump ordering the strike. (Apparently the Pentagon was “stunned” when he chose that option.) Second and more speculatively, I would argue this illustrates that, while the ability of the US military to project force wherever it wants is still alive and well, such force projection is going to become increasingly complicated and precarious.

At this point it’s tempting to go on a tangent and discuss the wisdom or foolishness of killing Soleimani, though I don’t know that it’s clearly one or the other. He was clearly a bad guy, and the type of warfare he specialized in was particularly loathsome. That said, does killing any one person, regardless of how important, really do much to slow things down?

Perhaps the biggest argument for it being foolish would have to be the precedent it sets. Adding the option of using drones to surgically kill foreign leaders you don’t like, seems both dangerous and destabilizing, but is it also inevitable? Probably, though I am sympathetic to the idea that Trump set the precedent and opened the gates earlier than Clinton (or any of a hundred other presidential candidates you might imagine.)

That covers all of my previous predictions to one degree or another, along with adding a few more, and now you probably want some new predictions. In particular, everyone wants to know who’s going to win the 2020 presidential election, so I guess I’ll start with that. To begin with, I’m predicting that the Democrats are going to end up having a brokered convention. Okay, not actually, but I really hope it happens; I have long thought that it would be the most interesting thing that could happen for a political junkie like me. But it hasn’t happened since 1952, and since then both parties have put a lot of things in place to prevent one, because brokered conventions look bad for the party. That said, some of these things, like superdelegates, have recently been weakened. Also, Democrats allocate delegates proportionally rather than winner take all like the Republicans. Finally, it does seem that recently we’ve been getting closer. Certainly there was talk of it when Obama secured the nomination in 2008, and then again in 2016 when they were trying to figure out how to stop Trump. So fingers crossed for 2020.

If it’s not going to be a brokered convention, then the candidate will have to come out of the primaries, which may be even harder to predict than who would emerge from a convention fight. Which is to say I honestly have no idea who’s going to end up as the Democratic candidate. Which makes it difficult to predict the winner in November. Since I basically agree with The Economist quote above, there is a real danger of Trump winning if they nominate Sanders or Warren. I know the last election felt chaotic, but I think 2020 will be more chaotic by a significant margin. 

All that said, gun to my head, I think Biden will squeak into being the Democratic nominee and then beat Trump when the economy softens just before the election. And I hope that this will bring a measure of calm to the country, but also I have serious doubts about Biden (my favorite recent description of him is confused old man) and I know that a lot of people really think he’s going to collapse during the election and hand it to Trump. Which, if you’re one of the Democrats voting in the primary, would be a bad thing. 

A lot hinges on whether Bloomberg is going to make a dent in the race. I kind of like Bloomberg. I think technocrats are overrated in general, but given the alternative, a competent technocrat could be very refreshing, and I can see why he entered the race. With Biden’s many gaffes there does seem to be a definite dearth of candidates in that category. Unfortunately, despite dropping a truly staggering amount of money, he’s still polling fifth. In any case, there are a lot of moving parts, and any number of things can happen. Still, on top of my prediction that Biden will squeak in as the Democratic nominee, I’m predicting that even if he doesn’t, a Democrat will win the 2020 election. But I guess we’ll have to wait and see.

In summary, I’m predicting:

  • Everything I predicted in 2017.
  • A continuation of my predictions from last year with some pivots:
    • More populism, less globalism. Specifically that protests will get worse in 2020.
    • No significant reduction in global CO2 emissions (i.e., no drop of greater than 5%)
    • Social media will continue to have an unpredictable effect on politics, but the effect will be negative.
    • That the US economy will soften enough to cause Trump to lose.
    • That the newest wave of debt accumulation will cause enormous problems (at least as bad as the other three waves) by the end of the decade.
    • Authoritarianism will continue to increase and liberal democracy will continue its retreat.
    • The Middle East will get worse.

  • Biden will squeak into the Democratic nomination.
  • The Democrats will win in 2020.

As long as we’re talking about the election and conditions this time next year, I should interject a quick tangent. I was out to lunch with a friend of mine the other day and he predicted that Trump will lose the election, but that in between the election and the inauguration Russia will convince North Korea to use one of their nukes to create a high altitude EMP which will take out most of the electronics in the US, resulting in a nationwide descent into chaos. This will allow Trump to declare martial law, suspending the results of the election and the inauguration of the new president. And then, to cap it all off, Trump will use the crisis as an excuse to invite in Russian troops as peacekeepers. After hearing this I offered him 1000-1 odds that this specific scenario would not happen. He decided to put down $10, so at this point next year, I’ll either be $10 richer, or I’ll have to scrounge up the equivalent of $10,000 in gold while dealing with the collapse of America and a very weird real-life version of Red Dawn.
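For anyone checking the arithmetic on that wager, here’s a minimal sketch of how X-to-1 odds pay out (the payout function is my own, purely for illustration):

```python
def payout(stake, odds_against):
    """Return what the bookmaker (me) owes if the long shot hits.

    odds_against is the X in "X-to-1" odds: the bettor risks
    `stake` and collects stake * odds_against if the event occurs;
    otherwise the bookmaker simply keeps the stake.
    """
    return stake * odds_against

# My friend's $10 at 1000-to-1 odds:
print(payout(10, 1000))  # 10000
```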

I will say though, as someone with a passion for catastrophe, I give his prediction for 2020 full marks for effort. It is certainly far and away the most vivid scenario for the 2020 election that I have heard. And, speaking of vivid catastrophes: with my new focus on eschatology, one imagines that I should make some eschatological predictions as well. But of course I can’t. And that’s kind of the whole point. If I were able to predict massive catastrophes in advance, then presumably lots of people could do it, and some of those people would be in a position to stop those catastrophes. Meaning that true catastrophes are only those which can’t be predicted, or which can’t be stopped even by someone who could predict them. That may in fact be fundamental to the definition of eschatology no matter how you slice it, going all the way back to the New Testament:

Watch therefore, for ye know neither the day nor the hour wherein the Son of man cometh. 

This injunction applies not only to the Son of Man but also to giant asteroids, terrorist nukes, and even the election of Donald Trump, and it’s going to be the subject of my next post.


I have one final prediction: that my monthly Patreon donations will be higher at the end of 2020 than at the start. I know what you’re thinking, why that snarky, arrogant… In fact, saying it probably makes you not want to donate, but for the prediction to fail everyone has to feel that way, which amounts to a large coordination problem. On the other hand, it takes just one person to make the prediction true, and that person could be you!


Worrying Too Much About the Last Thing and Not Enough About the Next Thing



As I mentioned in my last post, one of the books I read last month was Alone: Britain, Churchill, and Dunkirk: Defeat into Victory, by Michael Korda, which covers the beginnings of World War II from the surrender of the Sudetenland up through the retreat from Dunkirk. One of the things that struck me most from reading the book was the assertion that before the war France had a reputation as the “world’s preeminent military power”, and that in large part the disaster which befell the Allies was due to a severe underestimation of German military might (after all, hadn’t they lost the last war?) and a severe overestimation of the opposing might of the French.

As someone who knows how it all turned out (France defeated in a stunning six weeks), the idea that pre-World War II France might ever have been considered the “world’s preeminent military power” seems ridiculous, and yet according to Korda that was precisely what most people thought. It’s difficult to ignore how it all turned out, but if you attempt it, you might be able to see where that reputation came from. Not only had the French grimly held on for over four years in some of the worst combat conditions ever and, as I said, eventually triumphed, but apparently the genius and success of Napoleon lingered on as well, even at a remove of 130 years.

Because of this reputation, at various points both the British and the Germans, though on opposite sides of things, made significant strategic decisions based on France’s perceived martial prowess. The biggest effect of these decisions was wasting resources that could have been better spent elsewhere. In the British case they kept sending over more and more planes, convinced that, just as in World War I, the French line would eventually hold if they just had a little more help. This almost ended in disaster since, later, during the Battle of Britain, they needed every plane they could get their hands on. On the German side, and this is more speculative, it certainly seems possible that the ease with which the Germans defeated the French contributed to the disastrous decision to invade Russia, particularly since the French had the better reputation militarily, which seems to have been the case. Closer to the events of the book, the Germans certainly prioritized dealing with the French over crushing the remnants of the British forces trapped at Dunkirk. Who knows how things would have gone had they reversed those priorities.

This shouldn’t be surprising; people frequently end up fighting the last war, and in fact the exact period the book describes contains one of the best examples of that: the Maginot Line. World War I had been a war of static defense; World War II, or at least the Battle of France, was all about mobility. Regular readers may remember that I recently mentioned that the Maginot Line kind of got a bad rap, and indeed it does, and in particular I don’t think it should be used as an example of why walls have never worked. But all of this is another example of the more general principle I want to illustrate. People’s attitudes are shaped by examples they can easily call to mind, rather than by considering all possibilities. And in particular people are bad at accounting for the fact that if something just happened, it’s possible that it is in fact the thing least likely to happen again. The name for this is Availability Bias, or the Availability Heuristic, and it was first described by Daniel Kahneman and Amos Tversky. Wikipedia explains it thusly:

The availability heuristic is a mental shortcut that occurs when people make judgments about the probability of events on the basis of how easy it is to think of examples. The availability heuristic operates on the notion that, “if you can think of it, it must be important.” The availability of consequences associated with an action is positively related to perceptions of the magnitude of the consequences of that action. In other words, the easier it is to recall the consequences of something, the greater we perceive these consequences to be. Sometimes, this heuristic is beneficial, but the frequencies at which events come to mind are usually not accurate reflections of the probabilities of such events in real life.

As I was reading Alone, and mulling over the idea of France as the “world’s preeminent military power”, and realizing that it represented something of an availability bias, it also occurred to me that we might be doing something similar when it comes to ideology, in particular the ideologies we’re worried about. From where I sit there’s a lot of worry about nazis, and fascists more broadly. And to be fair I’m sure there are nazis out there, and their ideology is pretty repugnant, but how much of our worry is based on the horrors inflicted by the Nazis in World War II and how much of our worry is based on the power and influence they actually possess right now? In other words, how much of it is based on the reputation they built up in the past, and how much is based on 2019 reality? My argument would be that it’s far more the former than the latter.

In making this argument, I don’t imagine it’s going to take much to convince anyone reading this that the Nazis were uniquely horrible, and further that whatever reputation they have is deserved. But all of this should be a point in favor of my position. Yes, they were scary, no one is arguing with that, but it doesn’t naturally follow that they are scary now. To begin with, we generally implement the best safeguards against the terrifying things which have happened most recently. Is there any reason to suspect that we haven’t done that with fascism? It’s hard to imagine how we could have more thoroughly crushed the countries from which it sprang. But, you may counter, “We’re not worried about Germany and Japan! We’re worried about fascists and nazis here!” Well, allow me to borrow a couple of points from a previous post, where I also touched on this issue.

  • Looking at the subreddits most associated with the far right, the number of subscribers to the biggest (r/The_Donald) is 538,762, while r/aww, a subreddit dedicated to cute animals, sits at 16,360,969.

  • If we look at the two biggest far-right rallies, Charlottesville and a rally shortly after that in Boston, the number of demonstrators was completely overwhelmed by the number of counter-demonstrators. The Charlottesville rally was answered by 130 counter rallies held all over the nation the very next day. And the Boston free speech rally had 25 “far right demonstrators in attendance” as compared to 40,000 counter-protestors.

Neither of these statistics makes it seem like we’re on the verge of tipping over into fascism anytime soon. Nevertheless, I’m guessing there are people who are going to continue to object, pointing out that whatever else you want to say about the disparity in protest numbers or historical fascism, Donald Trump got elected!

I agree this is a big data point: 62,984,828 people did vote for Trump, and whatever the numbers might be for Charlottesville and Boston, 63 million people is not a number we can ignore. Clearly Trump has a lot of support. But I think anyone who makes this point is skipping over one very critical question: Is Trump a nazi? Or a fascist? Or a white supremacist? Or even a white nationalist? I don’t think he is. And I think to whatever extent people apply those labels to him or his supporters, they’re doing it precisely for the reason I just mentioned: all of those groups were recently very powerful and very scary. They are not doing it because those terms reflect the reality of 2019. They use those labels because they’re maximally impactful, not because they’re maximally accurate.

Lots of people have pointed out that Trump isn’t Hitler and that the US is unlikely to descend into fascism anytime soon (here’s Tyler Cowen making that argument), though fewer than you might think (which, once again, supports my point). But I’d like to point out five reasons why it’s very unlikely which probably don’t get as much press as they should.

  1. Any path to long-standing power requires some kind of unassailable base. In most cases this ends up being the military. What evidence is there that Trump is popular enough there (or really anywhere) to pull off some sort of fascist coup?
  2. Since Hitler is our prime example, it’s useful to look at all the places that supported him. In particular, people don’t realize that he had huge support in academia. I think it’s fair to say that the exact opposite situation exists now.
  3. People look at Nazi Germany somewhat in isolation. You can’t understand Nazi Germany without understanding how bad things got in the Weimar Republic. No similar situation exists in America.
  4. Though it probably goes without saying, I haven’t seen very many people mention that Trump isn’t anywhere close to being as effective a leader as Hitler was. In particular, look at Trump’s lieutenants vs. Hitler’s.
  5. Finally, feet on the ground matter. The fact that there were 25 people on one side (the side people are worried about) and 40,000 on the other does matter.

I’d like to expand on this last point a little bit. Recently over on Slate Star Codex, Scott Alexander put forth the idea that LGBT rights represent the most visible manifestation of a new civic religion; that over the last few years the country has started replacing the old civic religion of reverence for the founders and the Constitution with a new one reverencing the pursuit of social justice. He made this point mostly by comparing the old “rite” of the 4th of July parade with the new “rite” of the Gay Pride Parade. There’s a lot to be said about that comparison, most of which I’ll leave for another time, but this does bring up one question which is very germane to our current discussion: under what standard are the two examples Alexander offers up civic religions, but not Nazism? I don’t think there is one; in fact I think Nazism was clearly a civic religion. To go further: is there anyone who has taken power, particularly through revolution or coup, without being able to draw on a religion of some sort, civic or otherwise? What civic religion would Trump draw on if he was going to bring fascism to the United States? I understand that an argument could be made that Trump took advantage of the old civic religion of patriotism in order to be elected, but it’s hard to see how he would go on to repurpose that same religion to underpin a descent into fascism, especially given how resilient this religion has been in the past to that exact threat.

Additionally, if any major change is going to require the backing of a civic religion, why would we worry about patriotism, which has been around for a long time without any noticeable fascist proclivities, and is, in any case, starting to lose much of its appeal, when there’s a bold and vibrant new civic religion with most of the points I mentioned above on its side? Let’s go through them again:

  1. An unassailable base: No, social justice warriors, despite the warrior part, do not have control over the military, but they’ve got a pretty rabid base, and as I’ve argued before, the courts are largely on their side as well.
  2. Broad support: It’s hard to imagine how academia could be more supportive. In fact it’s hard to find any place that’s not supportive. Certainly corporations have aligned themselves solidly on the side of social justice.
  3. Drawing strength from earlier setbacks and tragedy: Hitler was undoing the wrongs of the Treaty of Versailles and the weakness of the Weimar Republic. Whatever you think about the grievances of poor white Trump supporters, they are nothing compared to the (perceived) wrongs of those clamoring for social justice.
  4. Effective leadership: This may in fact be the only thing holding them back, but there’s a field of 24 candidates out there, some of whom seem pretty galvanizing. 
  5. Feet on the ground: See my point above about the 130 counter rallies. 

To be clear, I am not arguing that social justice is headed for a future with as much death and destruction as that wrought by the World War II era Nazis. I don’t know what’s going to happen in the future; perhaps it will be just as all of its proponents claim, the dawn of a never-ending age of peace, harmony and prosperity. I sure hope so. That said, we do have plenty of examples of ideologies which started out with the best of intentions but which ended up committing untold atrocities. Obviously communism is a great example, but you could also toss just about every revolution ever into that bucket as well.

Where does all of this leave us? First, it seems unlikely that nazis and fascists are well positioned to cause the kind of large scale problems we should really be worried about. On top of that, there are plenty of reasons to believe that our biases push us toward overstating the danger. Beyond all that, there is at least one ideology which appears better positioned for a dramatic rise in power, meaning that if we’re interested in taking precautions, at a minimum we should add it to the list alongside the fascists. Which is to say that I’m not trying to talk you out of worrying about fascists, I’m trying to talk you into being more broad minded when you consider where dangers might emerge.

Yes, this is only one candidate, and it probably reflects my own biases, but there are certainly others as well. At the turn of the last century everyone was worried about anarchists. As well they might be: in 1901 they managed to assassinate President McKinley (what have the American fascists done that’s as bad as that?) And there are people who say that even today we should worry more about anarchism than fascism. Other people seem unduly fascinated with the dangers and evils of libertarianism (sample headline: Rise of the techno-Libertarians: The 5 most socially destructive aspects of Silicon Valley). If there is a weaker major political movement than the libertarians I’m not aware of it, but fine, add them to the list too. But above all, whatever your list is and however you make it, spend less time worrying about the last thing and more time worrying about the next thing.


I will say that, out of all the things to worry about, bloggers carry the least potential danger of anything. Though maybe if one of us had a bunch of money? If you want to see how dangerous I can actually get, consider donating.